WorldWideScience

Sample records for 3d image guided

  1. 3D Image-Guided Automatic Pipette Positioning for Single Cell Experiments in vivo

    OpenAIRE

    Brian Long; Lu Li; Ulf Knoblich; Hongkui Zeng; Hanchuan Peng

    2015-01-01

    We report a method to facilitate single cell, image-guided experiments including in vivo electrophysiology and electroporation. Our method combines 3D image data acquisition, visualization and on-line image analysis with precise control of physical probes such as electrophysiology microelectrodes in brain tissue in vivo. Adaptive pipette positioning provides a platform for future advances in automated, single cell in vivo experiments.

  2. Hands-on guide for 3D image creation for geological purposes

    Science.gov (United States)

    Frehner, Marcel; Tisato, Nicola

    2013-04-01

    An advantage of red-cyan anaglyphs is their simplicity and the possibility to print them on normal paper or project them using a conventional projector. Producing 3D stereoscopic images is much easier than commonly thought. Our hands-on poster provides an easy-to-use guide for producing 3D stereoscopic images. A few simple rules of thumb define how photographs of any scene or object have to be shot to produce good-looking 3D images. We use the free software Stereophotomaker (http://stereo.jpn.org/eng/stphmkr) to produce anaglyphs and provide red-cyan 3D glasses for viewing them. Our hands-on poster is easy to adapt and helps any geologist present his/her field or hand-specimen photographs in a much more fashionable 3D way for future publications or conference posters.
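
    For illustration only (this is not the Stereophotomaker workflow used above, and the file names are hypothetical), the basic channel recombination behind a red-cyan anaglyph can be sketched in a few lines: the red channel comes from the left-eye photograph and the green/blue channels from the right-eye photograph.

    ```python
    # Minimal red-cyan anaglyph from an aligned stereo pair (illustrative sketch).
    import numpy as np
    from PIL import Image

    left = np.asarray(Image.open("left.jpg").convert("RGB"), dtype=np.uint8)
    right = np.asarray(Image.open("right.jpg").convert("RGB"), dtype=np.uint8)
    assert left.shape == right.shape, "stereo pair must be aligned and equally sized"

    anaglyph = np.empty_like(left)
    anaglyph[..., 0] = left[..., 0]      # red channel from the left-eye image
    anaglyph[..., 1:] = right[..., 1:]   # green/blue channels from the right-eye image

    Image.fromarray(anaglyph).save("anaglyph_red_cyan.png")
    ```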

  3. A 3-D visualization method for image-guided brain surgery.

    Science.gov (United States)

    Bourbakis, N G; Awad, M

    2003-01-01

    This paper deals with a 3D methodology for brain tumor image-guided surgery. The methodology is based on the development of a visualization process that mimics the human surgeon's behavior and decision-making. In particular, it first constructs a 3D representation of the tumor from the segmented versions of the 2D MRI images. It then develops an optimal path for tumor extraction that minimizes the surgical effort and penetration area. A cost function incorporated in this process minimizes the damage to surrounding healthy tissues, taking into consideration the constraints of a new snake-like surgical tool proposed here. The tumor extraction method presented in this paper is compared with the conventional method used in brain surgery, which is based on a straight-line surgical tool. Illustrative examples based on real simulations present the advantages of the proposed 3D methodology.

  4. 3D-image-guided high-dose-rate intracavitary brachytherapy for salvage treatment of locally persistent nasopharyngeal carcinoma

    OpenAIRE

    Ren, Yu-Feng; Cao, Xin-Ping; Xu, Jia; Ye, Wei-Jun; Gao, Yuan-Hong; Teh, Bin S.; Wen, Bi-Xiu

    2013-01-01

    Background To evaluate the therapeutic benefit of 3D-image-guided high-dose-rate intracavitary brachytherapy (3D-image-guided HDR-BT) used as a salvage treatment of intensity modulated radiation therapy (IMRT) in patients with locally persistent nasopharyngeal carcinoma (NPC). Methods Thirty-two patients with locally persistent NPC after full dose of IMRT were evaluated retrospectively. 3D-image-guided HDR-BT treatment plan was performed on a 3D treatment planning system (PLATO BPS 14.2). The...

  5. A small animal image guided irradiation system study using 3D dosimeters

    Science.gov (United States)

    Qian, Xin; Admovics, John; Wuu, Cheng-Shie

    2015-01-01

    In a high resolution image-guided small animal irradiation platform, a cone beam computed tomography (CBCT) unit is integrated with an irradiation unit for precise targeting. Precise quality assurance is essential for both the imaging and irradiation components. Conventional commissioning techniques with films face major challenges due to alignment uncertainty and labour intensive film preparation and scanning. In addition, due to the novel design of this platform the mouse stage rotation for CBCT imaging is perpendicular to the gantry rotation for irradiation. Because these two rotations are associated with different mechanical systems, a discrepancy between the rotation isocenters can exist. In order to deliver x-rays precisely, it is essential to verify the coincidence of the imaging and irradiation isocenters. A 3D PRESAGE dosimeter can provide an excellent tool for checking dosimetry and verifying coincidence of irradiation and imaging coordinates in one system. Dosimetric measurements were performed to obtain beam profiles and percent depth dose (PDD). Isocentricity and coincidence of the mouse stage and gantry rotations were evaluated with starshots acquired using PRESAGE dosimeters. A single PRESAGE dosimeter can provide 3D information on both geometric and dosimetric uncertainty, which is crucial for translational studies.
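
    The beam profiles and percent depth dose mentioned above are read out of the reconstructed 3D dose distribution. The sketch below shows one plausible way to extract a PDD curve and a lateral profile from a dose array; the axis orientation, voxel indices, and synthetic dose are assumptions made only for illustration.

    ```python
    # Illustrative PDD and lateral-profile extraction from a 3D dose array
    # (e.g., an optical-CT reconstruction of a PRESAGE dosimeter).
    import numpy as np

    def pdd_and_profile(dose, ix, iy):
        """dose: 3D array indexed [z, y, x], with z along the beam axis."""
        axis = dose[:, iy, ix].astype(float)        # central-axis dose vs depth
        pdd = 100.0 * axis / axis.max()             # normalise to the dose maximum
        z_max = int(np.argmax(axis))
        profile = dose[z_max, iy, :].astype(float)  # lateral profile at depth of maximum
        return pdd, 100.0 * profile / profile.max()

    # Synthetic example: exponential depth falloff with a Gaussian lateral shape.
    z, y, x = np.mgrid[0:100, 0:64, 0:64]
    dose = np.exp(-0.02 * z) * np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 200.0)
    pdd, profile = pdd_and_profile(dose, ix=32, iy=32)
    ```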

  6. Image-driven, model-based 3D abdominal motion estimation for MR-guided radiotherapy

    Science.gov (United States)

    Stemkens, Bjorn; Tijssen, Rob H. N.; de Senneville, Baudouin Denis; Lagendijk, Jan J. W.; van den Berg, Cornelis A. T.

    2016-07-01

    Respiratory motion introduces substantial uncertainties in abdominal radiotherapy for which traditionally large margins are used. The MR-Linac will open up the opportunity to acquire high resolution MR images just prior to radiation and during treatment. However, volumetric MRI time series are not able to characterize 3D tumor and organ-at-risk motion with sufficient temporal resolution. In this study we propose a method to estimate 3D deformation vector fields (DVFs) with high spatial and temporal resolution based on fast 2D imaging and a subject-specific motion model based on respiratory correlated MRI. In a pre-beam phase, a retrospectively sorted 4D-MRI is acquired, from which the motion is parameterized using a principal component analysis. This motion model is used in combination with fast 2D cine-MR images, which are acquired during radiation, to generate full field-of-view 3D DVFs with a temporal resolution of 476 ms. The geometrical accuracies of the input data (4D-MRI and 2D multi-slice acquisitions) and the fitting procedure were determined using an MR-compatible motion phantom and found to be 1.0–1.5 mm on average. The framework was tested on seven healthy volunteers for both the pancreas and the kidney. The calculated motion was independently validated using one of the 2D slices, with an average error of 1.45 mm. The calculated 3D DVFs can be used retrospectively for treatment simulations, plan evaluations, or to determine the accumulated dose for both the tumor and organs-at-risk on a subject-specific basis in MR-guided radiotherapy.
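
    The motion model described above reduces the 4D-MRI deformation fields to a few principal components and then fits the component weights to each fast 2D cine image. A minimal numpy sketch of that idea follows; the array shapes, the slice-index bookkeeping, and the plain least-squares fit are illustrative assumptions rather than the authors' implementation.

    ```python
    # PCA motion model: learn modes from 4D-MRI DVFs, fit weights to one 2D slice.
    import numpy as np

    def build_pca_model(dvfs, n_components=2):
        """dvfs: (n_phases, n_voxels * 3) flattened deformation vector fields."""
        mean = dvfs.mean(axis=0)
        _, _, vt = np.linalg.svd(dvfs - mean, full_matrices=False)
        return mean, vt[:n_components]            # mean DVF and principal modes

    def fit_to_slice(mean, modes, slice_idx, observed_slice_dvf):
        """Estimate mode weights from the DVF observed on one 2D slice,
        then reconstruct the full-field 3D DVF."""
        a = modes[:, slice_idx].T                 # (n_observed, n_components)
        b = observed_slice_dvf - mean[slice_idx]
        w, *_ = np.linalg.lstsq(a, b, rcond=None)
        return mean + w @ modes                   # full-field 3D DVF estimate

    # Toy usage: 10 respiratory phases, 5000 "voxels" (x, y, z stacked), one slice.
    rng = np.random.default_rng(0)
    dvfs = rng.normal(size=(10, 15000))
    mean, modes = build_pca_model(dvfs)
    slice_idx = np.arange(1500)                   # entries covered by the 2D slice
    full_dvf = fit_to_slice(mean, modes, slice_idx, dvfs[3, slice_idx])
    ```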

  8. 3D-image-guided high-dose-rate intracavitary brachytherapy for salvage treatment of locally persistent nasopharyngeal carcinoma

    International Nuclear Information System (INIS)

    To evaluate the therapeutic benefit of 3D-image-guided high-dose-rate intracavitary brachytherapy (3D-image-guided HDR-BT) used as a salvage treatment after intensity modulated radiation therapy (IMRT) in patients with locally persistent nasopharyngeal carcinoma (NPC). Thirty-two patients with locally persistent NPC after a full dose of IMRT were evaluated retrospectively. 3D-image-guided HDR-BT treatment planning was performed on a 3D treatment planning system (PLATO BPS 14.2). A median dose of 16 Gy was delivered to the 100% isodose line of the Gross Tumor Volume. The whole procedure was well tolerated under local anesthesia. The actuarial 5-y local control rate for 3D-image-guided HDR-BT was 93.8%; patients with early-T stage at initial diagnosis had a 100% local control rate. The 5-y actuarial progression-free survival and distant metastasis-free survival rates were 78.1% and 87.5%, respectively. One patient developed and died of lung metastases. The 5-y actuarial overall survival rate was 96.9%. Our results showed that 3D-image-guided HDR-BT provides excellent local control as a salvage therapeutic modality after IMRT for patients with locally persistent disease and early-T stage NPC at initial diagnosis.

  9. Automatic masking for robust 3D-2D image registration in image-guided spine surgery

    Science.gov (United States)

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-03-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.
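
    The registration is scored above by the projection distance error (PDE), with PDE > 20 mm counted as a gross failure. As a small illustration (the landmark arrays, pixel spacing, and variable names are assumptions, not the authors' code), both quantities can be computed as follows.

    ```python
    # Projection distance error (PDE) and gross-failure rate for 3D-2D registration.
    import numpy as np

    def pde_mm(projected_pts, reference_pts, pixel_spacing_mm):
        """Mean 2D distance (mm) between projected 3D landmarks and their
        reference positions in the radiograph; both arrays are (n, 2) in pixels."""
        d = np.linalg.norm((projected_pts - reference_pts) * pixel_spacing_mm, axis=1)
        return float(d.mean())

    def gross_failure_rate(pde_values, threshold_mm=20.0):
        """Percentage of cases whose PDE exceeds the gross-failure threshold."""
        pde_values = np.asarray(pde_values, dtype=float)
        return 100.0 * np.count_nonzero(pde_values > threshold_mm) / pde_values.size
    ```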

  10. Optimizing nonrigid registration performance between volumetric true 3D ultrasound images in image-guided neurosurgery

    Science.gov (United States)

    Ji, Songbai; Fan, Xiaoyao; Roberts, David W.; Hartov, Alex; Paulsen, Keith D.

    2011-03-01

    Compensating for brain shift as surgery progresses is important to ensure sufficient accuracy in patient-to-image registration in the operating room (OR) for reliable neuronavigation. Ultrasound has emerged as an important and practical imaging technique for brain shift compensation either by itself or through computational modeling that estimates whole-brain deformation. Using volumetric true 3D ultrasound (3DUS), it is possible to nonrigidly (e.g., based on B-splines) register two temporally different 3DUS images directly to generate feature displacement maps for data assimilation in the biomechanical model. Because of a large amount of data and number of degrees-of-freedom (DOFs) involved, however, a significant computational cost may be required that can adversely influence the clinical feasibility of the technique for efficiently generating model-updated MR (uMR) in the OR. This paper parametrically investigates three B-splines registration parameters and their influence on the computational cost and registration accuracy: number of grid nodes along each direction, floating image volume down-sampling rate, and number of iterations. A simulated rigid body displacement field was employed as a ground-truth against which the accuracy of displacements generated from the B-splines nonrigid registration was compared. A set of optimal parameters was then determined empirically that result in a registration computational cost of less than 1 min and a sub-millimetric accuracy in displacement measurement. These resulting parameters were further applied to a clinical surgery case to demonstrate their practical use. Our results indicate that the optimal set of parameters result in sufficient accuracy and computational efficiency in model computation, which is important for future application of the overall biomechanical modeling to generate uMR for image-guidance in the OR.
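
    The three parameters studied above (number of B-spline grid nodes, floating-image down-sampling rate, and iteration count) map directly onto the knobs of a generic B-spline registration pipeline. The SimpleITK sketch below shows where each parameter enters; the metric and optimizer choices here are generic assumptions, not the authors' implementation.

    ```python
    # Generic B-spline nonrigid registration between two 3D ultrasound volumes,
    # parameterised by grid size, down-sampling factor, and iteration count.
    import SimpleITK as sitk

    def register_3dus(fixed, moving, grid_nodes=8, downsample=2, iterations=50):
        fixed_f = sitk.Cast(fixed, sitk.sitkFloat32)
        moving_f = sitk.Cast(moving, sitk.sitkFloat32)
        fixed_ds = sitk.Shrink(fixed_f, [downsample] * 3)      # down-sample volumes
        moving_ds = sitk.Shrink(moving_f, [downsample] * 3)

        tx = sitk.BSplineTransformInitializer(fixed_ds, [grid_nodes] * 3)

        reg = sitk.ImageRegistrationMethod()
        reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=32)
        reg.SetInterpolator(sitk.sitkLinear)
        reg.SetOptimizerAsLBFGSB(numberOfIterations=iterations)
        reg.SetInitialTransform(tx, inPlace=True)
        return reg.Execute(fixed_ds, moving_ds)                # fitted B-spline transform
    ```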

  11. Technical Note: Rapid prototyping of 3D grid arrays for image guided therapy quality assurance

    Energy Technology Data Exchange (ETDEWEB)

    Kittle, David; Holshouser, Barbara; Slater, James M.; Guenther, Bob D.; Pitsianis, Nikos P.; Pearlstein, Robert D. [Department of Radiation Medicine, Epilepsy Radiosurgery Research Program, Loma Linda University, Loma Linda, California 92354 (United States); Department of Radiology, Loma Linda University Medical Center, Loma Linda, California 92354 (United States); Department of Radiation Medicine, Loma Linda University, Loma Linda, California 92354 (United States); Department of Physics, Duke University, Durham, North Carolina 27708 (United States); Department of Electrical and Computer Engineering and Department of Computer Science, Duke University, Durham, North Carolina 27708 (United States); Department of Radiation Medicine, Epilepsy Radiosurgery Research Program, Loma Linda University, Loma Linda, California 92354 and Department of Surgery-Neurosurgery, Duke University and Medical Center, Durham, North Carolina 27710 (United States)

    2008-12-15

    Three dimensional grid phantoms offer a number of advantages for measuring imaging related spatial inaccuracies for image guided surgery and radiotherapy. The authors examined the use of rapid prototyping technology for directly fabricating 3D grid phantoms from CAD drawings. We tested three different fabrication process materials, photopolymer jet with acrylic resin (PJ/AR), selective laser sintering with polyamide (SLS/P), and fused deposition modeling with acrylonitrile butadiene styrene (FDM/ABS). The test objects consisted of rectangular arrays of control points formed by the intersections of posts and struts (2 mm rectangular cross section) and spaced 8 mm apart in the x, y, and z directions. The PJ/AR phantom expanded after immersion in water which resulted in permanent warping of the structure. The surface of the FDM/ABS grid exhibited a regular pattern of depressions and ridges from the extrusion process. SLS/P showed the best combination of build accuracy, surface finish, and stability. Based on these findings, a grid phantom for assessing machine-dependent and frame-induced MR spatial distortions was fabricated to be used for quality assurance in stereotactic neurosurgical and radiotherapy procedures. The spatial uniformity of the SLS/P grid control point array was determined by CT imaging (0.6 × 0.6 × 0.625 mm³ resolution) and found suitable for the application, with over 97.5% of the control points located within 0.3 mm of the position specified in the CAD drawing and none of the points off by more than 0.4 mm. Rapid prototyping is a flexible and cost effective alternative for development of customized grid phantoms for medical physics quality assurance.

  12. 2D-3D registration using gradient-based MI for image guided surgery systems

    Science.gov (United States)

    Yim, Yeny; Chen, Xuanyi; Wakid, Mike; Bielamowicz, Steve; Hahn, James

    2011-03-01

    Registration of preoperative CT data to intra-operative video images is necessary not only to compare the outcome of the vocal fold after surgery with the preplanned shape but also to provide the image guidance for fusion of all imaging modalities. We propose a 2D-3D registration method using gradient-based mutual information. The 3D CT scan is aligned to 2D endoscopic images by finding the corresponding viewpoint between the real camera for endoscopic images and the virtual camera for CT scans. Even though mutual information has been successfully used to register different imaging modalities, it is difficult to robustly register the CT rendered image to the endoscopic image due to varying light patterns and shape of the vocal fold. The proposed method calculates the mutual information in the gradient images as well as original images, assigning more weight to the high gradient regions. The proposed method can emphasize the effect of vocal fold and allow a robust matching regardless of the surface illumination. To find the viewpoint with maximum mutual information, a downhill simplex method is applied in a conditional multi-resolution scheme which leads to a less-sensitive result to local maxima. To validate the registration accuracy, we evaluated the sensitivity to initial viewpoint of preoperative CT. Experimental results showed that gradient-based mutual information provided robust matching not only for two identical images with different viewpoints but also for different images acquired before and after surgery. The results also showed that conditional multi-resolution scheme led to a more accurate registration than single-resolution.
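
    The similarity measure above is mutual information computed on gradient images as well as the originals, with more weight on high-gradient regions, and the viewpoint is found with a downhill simplex search. The sketch below illustrates one simple way to build such a gradient-weighted MI and wrap it in a Nelder-Mead (downhill simplex) optimization. The renderer render_ct(viewpoint) is a hypothetical stand-in for the CT rendering step and is not defined here, and the exact weighting scheme is an assumption rather than the authors' formulation.

    ```python
    # Gradient-weighted mutual information between two 2D images, optimized
    # over the camera viewpoint with a downhill simplex search.
    import numpy as np
    from scipy.ndimage import sobel
    from scipy.optimize import minimize

    def weighted_mi(img_a, img_b, bins=32):
        grad = np.hypot(sobel(img_a, 0), sobel(img_a, 1))
        w = 1.0 + grad / (grad.max() + 1e-9)          # emphasise high-gradient pixels
        h, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(),
                                 bins=bins, weights=w.ravel())
        p = h / h.sum()
        px = p.sum(axis=1, keepdims=True)
        py = p.sum(axis=0, keepdims=True)
        nz = p > 0
        return float(np.sum(p[nz] * np.log(p[nz] / (px @ py)[nz])))

    def find_viewpoint(endoscopic_img, render_ct, x0):
        cost = lambda v: -weighted_mi(endoscopic_img, render_ct(v))
        return minimize(cost, x0, method="Nelder-Mead").x   # downhill simplex
    ```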

  13. A questionnaire-based survey on 3D image-guided brachytherapy for cervical cancer in Japan. Advances and obstacles

    International Nuclear Information System (INIS)

    The purpose of this study is to survey the current patterns of practice, and barriers to implementation, of 3D image-guided brachytherapy (3D-IGBT) for cervical cancer in Japan. A 30-item questionnaire was sent to 171 Japanese facilities where high-dose-rate brachytherapy devices were available in 2012. In total, 135 responses were returned for analysis. Fifty-one facilities had acquired some sort of 3D imaging modality with applicator insertion, and computed tomography (CT) and magnetic resonance imaging (MRI) were used in 51 and 3 of the facilities, respectively. For actual treatment planning, X-ray films, CT and MRI were used in 113, 20 and 2 facilities, respectively. Among 43 facilities where X-ray films and CT or MRI were acquired with an applicator, 29 still used X-ray films for actual treatment planning, mainly because of limited time and/or staffing. In a follow-up survey 2.5 years later, respondents included 38 facilities that originally used X-ray films alone but had indicated plans to adopt 3D-IGBT. Of these, 21 had indeed adopted CT imaging with applicator insertion. In conclusion, 3D-IGBT (mainly CT) was implemented in 22 facilities (16%) and will be installed in 72 (53%) facilities in the future. Limited time and staffing were major impediments. (author)

  14. A questionnaire-based survey on 3D image-guided brachytherapy for cervical cancer in Japan: advances and obstacles.

    Science.gov (United States)

    Ohno, Tatsuya; Toita, Takafumi; Tsujino, Kayoko; Uchida, Nobue; Hatano, Kazuo; Nishimura, Tetsuo; Ishikura, Satoshi

    2015-11-01

    The purpose of this study is to survey the current patterns of practice, and barriers to implementation, of 3D image-guided brachytherapy (3D-IGBT) for cervical cancer in Japan. A 30-item questionnaire was sent to 171 Japanese facilities where high-dose-rate brachytherapy devices were available in 2012. In total, 135 responses were returned for analysis. Fifty-one facilities had acquired some sort of 3D imaging modality with applicator insertion, and computed tomography (CT) and magnetic resonance imaging (MRI) were used in 51 and 3 of the facilities, respectively. For actual treatment planning, X-ray films, CT and MRI were used in 113, 20 and 2 facilities, respectively. Among 43 facilities where X-ray films and CT or MRI were acquired with an applicator, 29 still used X-ray films for actual treatment planning, mainly because of limited time and/or staffing. In a follow-up survey 2.5 years later, respondents included 38 facilities that originally used X-ray films alone but had indicated plans to adopt 3D-IGBT. Of these, 21 had indeed adopted CT imaging with applicator insertion. In conclusion, 3D-IGBT (mainly CT) was implemented in 22 facilities (16%) and will be installed in 72 (53%) facilities in the future. Limited time and staffing were major impediments. PMID:26265660

  15. Treatment Planning for Image-Guided Neuro-Vascular Interventions Using Patient-Specific 3D Printed Phantoms

    OpenAIRE

    Russ, M; O’Hara, R.; Setlur Nagesh, S.V.; Mokin, M.; Jimenez, C.; Siddiqui, A.; Bednarek, D.; Rudin, S; C. Ionita

    2015-01-01

    Minimally invasive endovascular image-guided interventions (EIGIs) are the preferred procedures for treatment of a wide range of vascular disorders. Despite benefits including reduced trauma and recovery time, EIGIs have their own challenges. Remote catheter actuation and challenging anatomical morphology may lead to erroneous endovascular device selections, delays or even complications such as vessel injury. EIGI planning using 3D phantoms would allow interventionists to become familiarized ...

  16. Curve-based 2D-3D registration of coronary vessels for image guided procedure

    Science.gov (United States)

    Duong, Luc; Liao, Rui; Sundar, Hari; Tailhades, Benoit; Meyer, Andreas; Xu, Chenyang

    2009-02-01

    3D roadmap provided by pre-operative volumetric data that is aligned with fluoroscopy helps visualization and navigation in Interventional Cardiology (IC), especially when contrast agent-injection used to highlight coronary vessels cannot be systematically used during the whole procedure, or when there is low visibility in fluoroscopy for partially or totally occluded vessels. The main contribution of this work is to register pre-operative volumetric data with intraoperative fluoroscopy for specific vessel(s) occurring during the procedure, even without contrast agent injection, to provide a useful 3D roadmap. In addition, this study incorporates automatic ECG gating for cardiac motion. Respiratory motion is identified by rigid body registration of the vessels. The coronary vessels are first segmented from a multislice computed tomography (MSCT) volume and correspondent vessel segments are identified on a single gated 2D fluoroscopic frame. Registration can be explicitly constrained using one or multiple branches of a contrast-enhanced vessel tree or the outline of guide wire used to navigate during the procedure. Finally, the alignment problem is solved by Iterative Closest Point (ICP) algorithm. To be computationally efficient, a distance transform is computed from the 2D identification of each vessel such that distance is zero on the centerline of the vessel and increases away from the centerline. Quantitative results were obtained by comparing the registration of random poses and a ground truth alignment for 5 datasets. We conclude that the proposed method is promising for accurate 2D-3D registration, even for difficult cases of occluded vessel without injection of contrast agent.
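
    The efficiency trick described above is to precompute a 2D distance transform of the identified vessel centerline, so that projected 3D centerline points can be scored by a simple lookup inside the ICP loop. A hedged sketch of that cost term follows; the projection step and variable names are assumptions, not the authors' code.

    ```python
    # Distance-transform lookup used as a point-to-centerline cost in 2D.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def centerline_distance_map(centerline_mask):
        """centerline_mask: 2D boolean image, True on the identified vessel centerline.
        Returns a map that is zero on the centerline and grows away from it."""
        return distance_transform_edt(~centerline_mask)

    def reprojection_cost(points_2d, dist_map):
        """Mean distance-map value (pixels) sampled at projected 3D centerline points."""
        rows = np.clip(np.round(points_2d[:, 1]).astype(int), 0, dist_map.shape[0] - 1)
        cols = np.clip(np.round(points_2d[:, 0]).astype(int), 0, dist_map.shape[1] - 1)
        return float(dist_map[rows, cols].mean())
    ```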

  17. SU-E-T-376: 3-D Commissioning for An Image-Guided Small Animal Micro- Irradiation Platform

    International Nuclear Information System (INIS)

    Purpose: A 3-D radiochromic plastic dosimeter has been used to cross-test the isocentricity of a high resolution image-guided small animal microirradiation platform. In this platform, the mouse stage rotating for cone beam CT imaging is perpendicular to the gantry rotation for sub-millimeter radiation delivery. A 3-D dosimeter can be used to verify both imaging and irradiation coordinates. Methods: A 3-D dosimeter and optical CT scanner were used in this study. In the platform, both mouse stage and gantry can rotate 360° with rotation axis perpendicular to each other. Isocentricity and coincidence of mouse stage and gantry rotations were evaluated using star patterns. A 3-D dosimeter was placed on mouse stage with center at platform isocenter approximately. For CBCT isocentricity, with gantry moved to 90°, the mouse stage rotated horizontally while the x-ray was delivered to the dosimeter at certain angles. For irradiation isocentricity, the gantry rotated 360° to deliver beams to the dosimeter at certain angles for star patterns. The uncertainties and agreement of both CBCT and irradiation isocenters can be determined from the star patterns. Both procedures were repeated 3 times using 3 dosimeters to determine short-term reproducibility. Finally, dosimeters were scanned using optical CT scanner to obtain the results. Results: The gantry isocentricity is 0.9 ± 0.1 mm and mouse stage rotation isocentricity is about 0.91 ± 0.11 mm. Agreement between the measured isocenters of irradiation and imaging coordinates was determined. The short-term reproducibility test yielded 0.5 ± 0.1 mm between the imaging isocenter and the irradiation isocenter, with a maximum displacement of 0.7 ± 0.1 mm. Conclusion: The 3-D dosimeter can be very useful in precise verification of targeting for a small animal irradiation research. In addition, a single 3-D dosimeter can provide information in both geometric and dosimetric uncertainty, which is crucial for translational studies

  18. SU-E-T-376: 3-D Commissioning for An Image-Guided Small Animal Micro- Irradiation Platform

    Energy Technology Data Exchange (ETDEWEB)

    Qian, X; Wuu, C [Columbia University, NY, NY (United States); Admovics, J [Rider University, Lawrencsville, NJ (United States)

    2014-06-01

    Purpose: A 3-D radiochromic plastic dosimeter has been used to cross-test the isocentricity of a high resolution image-guided small animal microirradiation platform. In this platform, the mouse stage rotating for cone beam CT imaging is perpendicular to the gantry rotation for sub-millimeter radiation delivery. A 3-D dosimeter can be used to verify both imaging and irradiation coordinates. Methods: A 3-D dosimeter and optical CT scanner were used in this study. In the platform, both mouse stage and gantry can rotate 360° with rotation axis perpendicular to each other. Isocentricity and coincidence of mouse stage and gantry rotations were evaluated using star patterns. A 3-D dosimeter was placed on mouse stage with center at platform isocenter approximately. For CBCT isocentricity, with gantry moved to 90°, the mouse stage rotated horizontally while the x-ray was delivered to the dosimeter at certain angles. For irradiation isocentricity, the gantry rotated 360° to deliver beams to the dosimeter at certain angles for star patterns. The uncertainties and agreement of both CBCT and irradiation isocenters can be determined from the star patterns. Both procedures were repeated 3 times using 3 dosimeters to determine short-term reproducibility. Finally, dosimeters were scanned using optical CT scanner to obtain the results. Results: The gantry isocentricity is 0.9 ± 0.1 mm and mouse stage rotation isocentricity is about 0.91 ± 0.11 mm. Agreement between the measured isocenters of irradiation and imaging coordinates was determined. The short-term reproducibility test yielded 0.5 ± 0.1 mm between the imaging isocenter and the irradiation isocenter, with a maximum displacement of 0.7 ± 0.1 mm. Conclusion: The 3-D dosimeter can be very useful in precise verification of targeting for a small animal irradiation research. In addition, a single 3-D dosimeter can provide information in both geometric and dosimetric uncertainty, which is crucial for translational studies.

  19. Treatment planning for image-guided neuro-vascular interventions using patient-specific 3D printed phantoms

    Science.gov (United States)

    Russ, M.; O'Hara, R.; Setlur Nagesh, S. V.; Mokin, M.; Jimenez, C.; Siddiqui, A.; Bednarek, D.; Rudin, S.; Ionita, C.

    2015-03-01

    Minimally invasive endovascular image-guided interventions (EIGIs) are the preferred procedures for treatment of a wide range of vascular disorders. Despite benefits including reduced trauma and recovery time, EIGIs have their own challenges. Remote catheter actuation and challenging anatomical morphology may lead to erroneous endovascular device selections, delays or even complications such as vessel injury. EIGI planning using 3D phantoms would allow interventionists to become familiarized with the patient vessel anatomy by first performing the planned treatment on a phantom under standard operating protocols. In this study the optimal workflow to obtain such phantoms from 3D data, for interventionists to practice on prior to an actual procedure, was investigated. Patient-specific phantoms and phantoms presenting a wide range of challenging geometries were created. Computed Tomographic Angiography (CTA) data was uploaded into a Vitrea 3D station, which allows segmentation and export of the resulting stereolithographic files. The files were then processed in software in which preloaded vessel structures were added to create a closed-flow vasculature with structural support. The final file was printed, cleaned, connected to a flow loop and placed in an angiographic room for EIGI practice. Various Circle of Willis and cardiac arterial geometries were used. The phantoms were tested for ischemic stroke treatment, distal catheter navigation, aneurysm stenting and cardiac imaging under angiographic guidance. This method should allow adjustments to treatment plans to be made before the patient is actually in the procedure room, reducing the risk of peri-operative complications or delays.

  20. Involved-Site Image-Guided Intensity Modulated Versus 3D Conformal Radiation Therapy in Early Stage Supradiaphragmatic Hodgkin Lymphoma

    Energy Technology Data Exchange (ETDEWEB)

    Filippi, Andrea Riccardo, E-mail: andreariccardo.filippi@unito.it [Department of Oncology, University of Torino, Torino (Italy); Ciammella, Patrizia [Radiation Therapy Unit, Department of Oncology and Advanced Technology, ASMN Hospital IRCCS, Reggio Emilia (Italy); Piva, Cristina; Ragona, Riccardo [Department of Oncology, University of Torino, Torino (Italy); Botto, Barbara [Hematology, Città della Salute e della Scienza, Torino (Italy); Gavarotti, Paolo [Hematology, University of Torino and Città della Salute e della Scienza, Torino (Italy); Merli, Francesco [Hematology Unit, ASMN Hospital IRCCS, Reggio Emilia (Italy); Vitolo, Umberto [Hematology, Città della Salute e della Scienza, Torino (Italy); Iotti, Cinzia [Radiation Therapy Unit, Department of Oncology and Advanced Technology, ASMN Hospital IRCCS, Reggio Emilia (Italy); Ricardi, Umberto [Department of Oncology, University of Torino, Torino (Italy)

    2014-06-01

    Purpose: Image-guided intensity modulated radiation therapy (IG-IMRT) allows for margin reduction and highly conformal dose distribution, with consistent advantages in sparing of normal tissues. The purpose of this retrospective study was to compare involved-site IG-IMRT with involved-site 3D conformal RT (3D-CRT) in the treatment of early stage Hodgkin lymphoma (HL) involving the mediastinum, with efficacy and toxicity as primary clinical endpoints. Methods and Materials: We analyzed 90 stage IIA HL patients treated with either involved-site 3D-CRT or IG-IMRT between 2005 and 2012 in 2 different institutions. Inclusion criteria were favorable or unfavorable disease (according to European Organization for Research and Treatment of Cancer criteria), complete response after 3 to 4 cycles of an adriamycin- bleomycin-vinblastine-dacarbazine (ABVD) regimen plus 30 Gy as total radiation dose. Exclusion criteria were chemotherapy other than ABVD, partial response after ABVD, total radiation dose other than 30 Gy. Clinical endpoints were relapse-free survival (RFS) and acute toxicity. Results: Forty-nine patients were treated with 3D-CRT (54.4%) and 41 with IG-IMRT (45.6%). Median follow-up time was 54.2 months for 3D-CRT and 24.1 months for IG-IMRT. No differences in RFS were observed between the 2 groups, with 1 relapse each. Three-year RFS was 98.7% for 3D-CRT and 100% for IG-IMRT. Grade 2 toxicity events, mainly mucositis, were recorded in 32.7% of 3D-CRT patients (16 of 49) and in 9.8% of IG-IMRT patients (4 of 41). IG-IMRT was significantly associated with a lower incidence of grade 2 acute toxicity (P=.043). Conclusions: RFS rates at 3 years were extremely high in both groups, albeit the median follow-up time is different. Acute tolerance profiles were better for IG-IMRT than for 3D-CRT. Our preliminary results support the clinical safety and efficacy of advanced RT planning and delivery techniques in patients affected with early stage HL, achieving complete

  1. Involved-Site Image-Guided Intensity Modulated Versus 3D Conformal Radiation Therapy in Early Stage Supradiaphragmatic Hodgkin Lymphoma

    International Nuclear Information System (INIS)

    Purpose: Image-guided intensity modulated radiation therapy (IG-IMRT) allows for margin reduction and highly conformal dose distribution, with consistent advantages in sparing of normal tissues. The purpose of this retrospective study was to compare involved-site IG-IMRT with involved-site 3D conformal RT (3D-CRT) in the treatment of early stage Hodgkin lymphoma (HL) involving the mediastinum, with efficacy and toxicity as primary clinical endpoints. Methods and Materials: We analyzed 90 stage IIA HL patients treated with either involved-site 3D-CRT or IG-IMRT between 2005 and 2012 in 2 different institutions. Inclusion criteria were favorable or unfavorable disease (according to European Organization for Research and Treatment of Cancer criteria), complete response after 3 to 4 cycles of an adriamycin- bleomycin-vinblastine-dacarbazine (ABVD) regimen plus 30 Gy as total radiation dose. Exclusion criteria were chemotherapy other than ABVD, partial response after ABVD, total radiation dose other than 30 Gy. Clinical endpoints were relapse-free survival (RFS) and acute toxicity. Results: Forty-nine patients were treated with 3D-CRT (54.4%) and 41 with IG-IMRT (45.6%). Median follow-up time was 54.2 months for 3D-CRT and 24.1 months for IG-IMRT. No differences in RFS were observed between the 2 groups, with 1 relapse each. Three-year RFS was 98.7% for 3D-CRT and 100% for IG-IMRT. Grade 2 toxicity events, mainly mucositis, were recorded in 32.7% of 3D-CRT patients (16 of 49) and in 9.8% of IG-IMRT patients (4 of 41). IG-IMRT was significantly associated with a lower incidence of grade 2 acute toxicity (P=.043). Conclusions: RFS rates at 3 years were extremely high in both groups, albeit the median follow-up time is different. Acute tolerance profiles were better for IG-IMRT than for 3D-CRT. Our preliminary results support the clinical safety and efficacy of advanced RT planning and delivery techniques in patients affected with early stage HL, achieving complete

  2. Simulation of 3D Needle-Tissue Interaction with Application to Image Guided Prostate Brachytherapy

    Institute of Scientific and Technical Information of China (English)

    姜杉; HATA; Nobuhiko; 肖渤瀚; 安蔚瑾

    2010-01-01

    To improve global control of disease and reduce global toxicity, a complex seed distribution pattern should be achieved with great accuracy during brachytherapy. However, the interaction between the needle and prostate will cause large deformation of soft tissue. As a result, seeds will be misplaced, sharp demarcation between irradiated volume and healthy structures is unavailable, and this will cause side effects such as impotence and urinary incontinence. In this paper, a 3D nonlinear dynamic finite element s...

  3. Medical applications of fast 3D cameras in real-time image-guided radiotherapy (IGRT) of cancer

    Science.gov (United States)

    Li, Shidong; Li, Tuotuo; Geng, Jason

    2013-03-01

    Dynamic volumetric medical imaging (4DMI) has reduced motion artifacts, increased early diagnosis of small mobile tumors, and improved target definition for treatment planning. High speed cameras for video, X-ray, or other forms of sequential imaging allow live tracking of external or internal movement useful for real-time image-guided radiation therapy (IGRT). However, 4DMI alone cannot track organ motion in real time, and no camera system has been correlated with 4DMI to show volumetric changes. With a brief review of various IGRT techniques, we propose a fast 3D camera for live-video stereovision, an automatic surface-motion identifier to classify body or respiratory motion, a mechanical model for synchronizing the external surface movement with the internal target displacement by combined use of real-time stereovision and pre-treatment 4DMI, and dynamic multi-leaf collimation for adaptively aiming at the moving target. Our preliminary results demonstrate that the technique is feasible and efficient in IGRT of mobile targets. A clinical trial has been initiated to validate its spatial and temporal accuracies and dosimetric impact for intensity-modulated RT (IMRT), volumetric-modulated arc therapy (VMAT), and stereotactic body radiotherapy (SBRT) of any mobile tumors. The technique can be extended to surface-guided stereotactic needle insertion in biopsy of small lung nodules.

  4. Quantitative Assessment of Variational Surface Reconstruction from Sparse Point Clouds in Freehand 3D Ultrasound Imaging during Image-Guided Tumor Ablation

    Directory of Open Access Journals (Sweden)

    Shuangcheng Deng

    2016-04-01

    Surface reconstruction for freehand 3D ultrasound is used to provide 3D visualization of a VOI (volume of interest) during image-guided tumor ablation surgery. This is a challenge because the recorded 2D B-scans are not only sparse but also non-parallel. To solve this issue, we established a framework to reconstruct the surface of freehand 3D ultrasound imaging in 2011. The key technique for surface reconstruction in that framework is based on variational interpolation presented by Greg Turk for shape transformation and is named Variational Surface Reconstruction (VSR). The main goal of this paper is to evaluate the quality of surface reconstructions, especially when the input data are extremely sparse point clouds from freehand 3D ultrasound imaging, using four methods: Ball Pivoting, Power Crust, Poisson, and VSR. Four experiments are conducted, and quantitative metrics, such as the Hausdorff distance, are introduced for quantitative assessment. The experimental results show that the performance of the proposed VSR method is the best of the four methods at reconstructing surfaces from sparse data. The VSR method can produce a close approximation to the original surface from as few as two contours, whereas the other three methods fail to do so. The experimental results also illustrate that the reproducibility of the VSR method is the best of the four methods.
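
    The evaluation above uses the Hausdorff distance as its headline metric. As a small illustration (computed here between two point samplings of the surfaces, a simplification of a full mesh-to-mesh comparison), the symmetric Hausdorff distance can be obtained with two nearest-neighbour queries:

    ```python
    # Symmetric Hausdorff distance between two sampled surfaces.
    import numpy as np
    from scipy.spatial import cKDTree

    def hausdorff(points_a, points_b):
        """points_a: (n, 3) and points_b: (m, 3) surface point samples."""
        d_ab = cKDTree(points_b).query(points_a)[0]   # nearest-neighbour distances A -> B
        d_ba = cKDTree(points_a).query(points_b)[0]   # nearest-neighbour distances B -> A
        return float(max(d_ab.max(), d_ba.max()))
    ```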

  5. 3D Imager and Method for 3D imaging

    NARCIS (Netherlands)

    Kumar, P.; Staszewski, R.; Charbon, E.

    2013-01-01

    3D imager comprising at least one pixel, each pixel comprising a photodetector for detecting photon incidence and a time-to-digital converter system configured for referencing said photon incidence to a reference clock, and further comprising a reference clock generator provided for generating the reference clock.

  6. Advanced 3-D Ultrasound Imaging

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer

    The main purpose of the PhD project was to develop methods that increase the 3-D ultrasound imaging quality available for the medical personnel in the clinic. Acquiring a 3-D volume gives the medical doctor the freedom to investigate the measured anatomy in any slice desirable after the scan has...... beamforming. This is achieved partly because synthetic aperture imaging removes the limitation of a fixed transmit focal depth and instead enables dynamic transmit focusing. Lately, the major ultrasound companies have produced ultrasound scanners using 2-D transducer arrays with enough transducer elements...

  7. SU-E-J-55: End-To-End Effectiveness Analysis of 3D Surface Image Guided Voluntary Breath-Holding Radiotherapy for Left Breast

    International Nuclear Information System (INIS)

    Purpose: To evaluate the effectiveness of using 3D surface imaging to guide breath-holding (BH) left-side breast treatment. Methods: Two 3D surface image guided BH procedures were implemented and evaluated: normal-BH, taking BH at a comfortable level, and deep-inspiration breath-holding (DIBH). A total of 20 patients (10 normal-BH and 10 DIBH) were recruited. Patients received a BH evaluation using a commercial 3D-surface-tracking-system (VisionRT, London, UK) to quantify the reproducibility of BH positions prior to the CT scan. Tangential 3D/IMRT plans were conducted. Patients were initially set up under free-breathing (FB) conditions using the FB surface obtained from the untagged CT to ensure a correct patient position. Patients were then guided to reach the planned BH position using the BH surface obtained from the BH CT. Action levels were set at each phase of the treatment process based on the information provided by the 3D-surface-tracking-system for proper interventions (eliminate/re-setup/re-coaching). We reviewed the frequency of interventions to evaluate its effectiveness. The FB-CBCT and port film were utilized to evaluate the accuracy of 3D-surface-guided setups. Results: 25% of BH candidates with BH positioning uncertainty > 2 mm were eliminated prior to the CT scan. For >90% of fractions, based on the setup deltas from the 3D-surface-tracking-system, adjustments of patient setup were needed after the initial laser-based setup. 3D-surface-guided setup accuracy is comparable to CBCT. For the BH guidance, the frequency of interventions (re-coaching/re-setup) was 40% (normal-BH)/91% (DIBH) of treatments for the first 5 fractions and then dropped to 16% (normal-BH)/46% (DIBH). The necessity of re-setup was highly patient-specific for normal-BH but highly random among patients for DIBH. Overall, a −0.8±2.4 mm accuracy of the anterior pericardial shadow position was achieved. Conclusion: 3D-surface-image technology provides effective intervention to the treatment process and ensures

  8. SU-E-J-55: End-To-End Effectiveness Analysis of 3D Surface Image Guided Voluntary Breath-Holding Radiotherapy for Left Breast

    Energy Technology Data Exchange (ETDEWEB)

    Lin, M; Feigenberg, S [University of Maryland School of Medicine, Baltimore, MD (United States)

    2015-06-15

    Purpose: To evaluate the effectiveness of using 3D surface imaging to guide breath-holding (BH) left-side breast treatment. Methods: Two 3D surface image guided BH procedures were implemented and evaluated: normal-BH, taking BH at a comfortable level, and deep-inspiration breath-holding (DIBH). A total of 20 patients (10 normal-BH and 10 DIBH) were recruited. Patients received a BH evaluation using a commercial 3D-surface-tracking-system (VisionRT, London, UK) to quantify the reproducibility of BH positions prior to the CT scan. Tangential 3D/IMRT plans were conducted. Patients were initially set up under free-breathing (FB) conditions using the FB surface obtained from the untagged CT to ensure a correct patient position. Patients were then guided to reach the planned BH position using the BH surface obtained from the BH CT. Action levels were set at each phase of the treatment process based on the information provided by the 3D-surface-tracking-system for proper interventions (eliminate/re-setup/re-coaching). We reviewed the frequency of interventions to evaluate its effectiveness. The FB-CBCT and port film were utilized to evaluate the accuracy of 3D-surface-guided setups. Results: 25% of BH candidates with BH positioning uncertainty > 2 mm were eliminated prior to the CT scan. For >90% of fractions, based on the setup deltas from the 3D-surface-tracking-system, adjustments of patient setup were needed after the initial laser-based setup. 3D-surface-guided setup accuracy is comparable to CBCT. For the BH guidance, the frequency of interventions (re-coaching/re-setup) was 40% (normal-BH)/91% (DIBH) of treatments for the first 5 fractions and then dropped to 16% (normal-BH)/46% (DIBH). The necessity of re-setup was highly patient-specific for normal-BH but highly random among patients for DIBH. Overall, a −0.8±2.4 mm accuracy of the anterior pericardial shadow position was achieved. Conclusion: 3D-surface-image technology provides effective intervention to the treatment process and ensures

  9. 3D vector flow imaging

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes

    The main purpose of this PhD project is to develop an ultrasonic method for 3D vector flow imaging. The motivation is to advance the field of velocity estimation in ultrasound, which plays an important role in the clinic. The velocity of blood has components in all three spatial dimensions, yet...... conventional methods can estimate only the axial component. Several approaches for 3D vector velocity estimation have been suggested, but none of these methods have so far produced convincing in vivo results nor have they been adopted by commercial manufacturers. The basis for this project is the Transverse...... on the TO fields are suggested. They can be used to optimize the TO method. In the third part, a TO method for 3D vector velocity estimation is proposed. It employs a 2D phased array transducer and decouples the velocity estimation into three velocity components, which are estimated simultaneously based on 5...

  10. 3D non-rigid registration using surface and local salient features for transrectal ultrasound image-guided prostate biopsy

    Science.gov (United States)

    Yang, Xiaofeng; Akbari, Hamed; Halig, Luma; Fei, Baowei

    2011-03-01

    We present a 3D non-rigid registration algorithm for the potential use in combining PET/CT and transrectal ultrasound (TRUS) images for targeted prostate biopsy. Our registration is a hybrid approach that simultaneously optimizes the similarities from point-based registration and volume matching methods. The 3D registration is obtained by minimizing the distances of corresponding points at the surface and within the prostate and by maximizing the overlap ratio of the bladder neck on both images. The hybrid approach captures not only the deformation at the prostate surface and internal landmarks but also the deformation at the bladder neck region. The registration uses a soft assignment and deterministic annealing process. The correspondences are iteratively established in a fuzzy-to-deterministic approach. B-splines are used to generate a smooth non-rigid spatial transformation. In this study, we tested our registration with pre- and post-biopsy TRUS images of the same patients. Registration accuracy is evaluated using manually defined anatomic landmarks, i.e., calcifications. The root-mean-squared (RMS) value of the difference image between the reference and floating images was decreased by 62.6 ± 9.1% after registration. The mean target registration error (TRE) was 0.88 ± 0.16 mm, i.e., less than 3 voxels with a voxel size of 0.38 × 0.38 × 0.38 mm³ for all five patients. The experimental results demonstrate the robustness and accuracy of the 3D non-rigid registration algorithm.
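
    The two accuracy figures quoted above (the percentage drop in the RMS of the difference image, and the landmark-based TRE) are simple to formulate; the sketch below shows one plausible implementation, with array names and voxel-size handling assumed for illustration.

    ```python
    # RMS-reduction and target registration error (TRE) for evaluating registration.
    import numpy as np

    def rms_reduction_pct(reference, floating_before, floating_after):
        """Percentage decrease in the RMS of the difference image after registration."""
        rms = lambda a, b: np.sqrt(np.mean((a.astype(float) - b.astype(float)) ** 2))
        before = rms(reference, floating_before)
        after = rms(reference, floating_after)
        return 100.0 * (before - after) / before

    def tre_mm(landmarks_ref, landmarks_reg, voxel_size_mm):
        """Mean and SD of landmark distances (mm); landmark arrays are (n, 3) in voxels."""
        d = np.linalg.norm((landmarks_ref - landmarks_reg) * voxel_size_mm, axis=1)
        return float(d.mean()), float(d.std())
    ```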

  11. Needle segmentation using 3D Hough transform in 3D TRUS guided prostate transperineal therapy

    International Nuclear Information System (INIS)

    Purpose: Prostate adenocarcinoma is the most common noncutaneous malignancy in American men with over 200 000 new cases diagnosed each year. Prostate interventional therapy, such as cryotherapy and brachytherapy, is an effective treatment for prostate cancer. Its success relies on the correct needle implant position. This paper proposes a robust and efficient needle segmentation method, which acts as an aid to localize the needle in three-dimensional (3D) transrectal ultrasound (TRUS) guided prostate therapy. Methods: The procedure of locating the needle in a 3D TRUS image is a three-step process. First, the original 3D ultrasound image containing a needle is cropped; the cropped image is then converted to a binary format based on its histogram. Second, a 3D Hough transform based needle segmentation method is applied to the 3D binary image in order to locate the needle axis. The position of the needle endpoint is finally determined by an optimal threshold based analysis of the intensity probability distribution. The overall efficiency is improved through implementing a coarse-fine searching strategy. The proposed method was validated in tissue-mimicking agar phantoms, chicken breast phantoms, and 3D TRUS patient images from prostate brachytherapy and cryotherapy procedures by comparison to the manual segmentation. The robustness of the proposed approach was tested by means of varying parameters such as needle insertion angle, needle insertion length, binarization threshold level, and cropping size. Results: The validation results indicate that the proposed Hough transform based method is accurate and robust, with an achieved endpoint localization accuracy of 0.5 mm for agar phantom images, 0.7 mm for chicken breast phantom images, and 1 mm for in vivo patient cryotherapy and brachytherapy images. The mean execution time of needle segmentation algorithm was 2 s for a 3D TRUS image with size of 264 × 376 × 630 voxels. Conclusions: The proposed needle segmentation
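
    The segmentation above binarizes a cropped TRUS volume, localizes the needle axis with a 3D Hough transform accelerated by a coarse-to-fine search, and then picks the endpoint from the intensity distribution along the axis. The sketch below is a deliberately simplified surrogate: it replaces the Hough voting step with a principal-axis (least-squares) line fit on the thresholded voxels and takes the furthest voxel along that axis as the tip. It is not the authors' algorithm, and the threshold handling is an assumption.

    ```python
    # Simplified needle-axis and tip localization in a 3D ultrasound volume.
    import numpy as np

    def needle_axis(volume, threshold):
        """volume: 3D intensity array. Returns a point on the axis and a unit direction."""
        pts = np.argwhere(volume > threshold).astype(float)   # candidate needle voxels (z, y, x)
        centroid = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - centroid, full_matrices=False)
        direction = vt[0] / np.linalg.norm(vt[0])              # dominant line direction
        return centroid, direction

    def needle_tip(volume, threshold):
        """Endpoint estimate: the furthest above-threshold voxel along the fitted axis."""
        centroid, direction = needle_axis(volume, threshold)
        pts = np.argwhere(volume > threshold).astype(float)
        t = (pts - centroid) @ direction                        # signed position along the axis
        return pts[int(np.argmax(t))]
    ```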

  12. Patellar segmentation from 3D magnetic resonance images using guided recursive ray-tracing for edge pattern detection

    Science.gov (United States)

    Cheng, Ruida; Jackson, Jennifer N.; McCreedy, Evan S.; Gandler, William; Eijkenboom, J. J. F. A.; van Middelkoop, M.; McAuliffe, Matthew J.; Sheehan, Frances T.

    2016-03-01

    The paper presents an automatic segmentation methodology for the patellar bone, based on 3D gradient recalled echo and gradient recalled echo with fat suppression magnetic resonance images. Constricted search space outlines are incorporated into recursive ray-tracing to segment the outer cortical bone. A statistical analysis based on the dependence of information in adjacent slices is used to limit the search in each image to between an outer and inner search region. A section based recursive ray-tracing mechanism is used to skip inner noise regions and detect the edge boundary. The proposed method achieves higher segmentation accuracy (0.23mm) than the current state-of-the-art methods with the average dice similarity coefficient of 96.0% (SD 1.3%) agreement between the auto-segmentation and ground truth surfaces.

  13. MRI-based 3D pelvic autonomous innervation: a first step towards image-guided pelvic surgery

    International Nuclear Information System (INIS)

    To analyse pelvic autonomous innervation with magnetic resonance imaging (MRI) in comparison with anatomical macroscopic dissection on cadavers. Pelvic MRI was performed in eight adult human cadavers (five men and three women) using a total of four sequences each: T1, T1 fat saturation, T2, and diffusion weighted. Images were analysed with segmentation software in order to extract nervous tissue. Eight key points of the pelvic autonomous innervation were located in every specimen. Standardised pelvic dissections were then performed. Distances between the same key points and the three anatomical references forming a coordinate system were measured on MRIs and dissections. Concordance (Lin's concordance correlation coefficient) between MRI and dissection was calculated. MRI acquisition allowed an adequate visualization of the autonomous innervation. Comparison between 3D MRI images and dissection showed concordant pictures. The statistical analysis showed a mean difference of less than 1 cm between MRI and dissection measures and a correct concordance correlation coefficient on at least two coordinates for each point. Our acquisition and post-processing method demonstrated that MRI is suitable for detection of autonomous pelvic innervation and can offer a preoperative nerve cartography. (orig.)

  14. MRI-based 3D pelvic autonomous innervation: a first step towards image-guided pelvic surgery

    Energy Technology Data Exchange (ETDEWEB)

    Bertrand, M.M. [University Montpellier I, Laboratory of Experimental Anatomy Faculty of Medicine Montpellier-Nimes, Montpellier (France); Macri, F.; Beregi, J.P. [Nimes University Hospital, University Montpellier 1, Radiology Department, Nimes (France); Mazars, R.; Prudhomme, M. [University Montpellier I, Laboratory of Experimental Anatomy Faculty of Medicine Montpellier-Nimes, Montpellier (France); Nimes University Hospital, University Montpellier 1, Digestive Surgery Department, Nimes (France); Droupy, S. [Nimes University Hospital, University Montpellier 1, Urology-Andrology Department, Nimes (France)

    2014-08-15

    To analyse pelvic autonomous innervation with magnetic resonance imaging (MRI) in comparison with anatomical macroscopic dissection on cadavers. Pelvic MRI was performed in eight adult human cadavers (five men and three women) using a total of four sequences each: T1, T1 fat saturation, T2, and diffusion weighted. Images were analysed with segmentation software in order to extract nervous tissue. Eight key points of the pelvic autonomous innervation were located in every specimen. Standardised pelvic dissections were then performed. Distances between the same key points and the three anatomical references forming a coordinate system were measured on MRIs and dissections. Concordance (Lin's concordance correlation coefficient) between MRI and dissection was calculated. MRI acquisition allowed an adequate visualization of the autonomous innervation. Comparison between 3D MRI images and dissection showed concordant pictures. The statistical analysis showed a mean difference of less than 1 cm between MRI and dissection measures and a correct concordance correlation coefficient on at least two coordinates for each point. Our acquisition and post-processing method demonstrated that MRI is suitable for detection of autonomous pelvic innervation and can offer a preoperative nerve cartography. (orig.)

  15. Needle segmentation using 3D Hough transform in 3D TRUS guided prostate transperineal therapy

    Energy Technology Data Exchange (ETDEWEB)

    Qiu Wu [Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Imaging Research Laboratories, Robarts Research Institute, Western University, London, Ontario N6A 5K8 (Canada); Yuchi Ming; Ding Mingyue [Department of Biomedical Engineering, School of Life Science and Technology, Huazhong University of Science and Technology, Wuhan, Hubei 430074 (China); Tessier, David; Fenster, Aaron [Imaging Research Laboratories, Robarts Research Institute, University of Western Ontario, London, Ontario N6A 5K8 (Canada)

    2013-04-15

    Purpose: Prostate adenocarcinoma is the most common noncutaneous malignancy in American men with over 200 000 new cases diagnosed each year. Prostate interventional therapy, such as cryotherapy and brachytherapy, is an effective treatment for prostate cancer. Its success relies on the correct needle implant position. This paper proposes a robust and efficient needle segmentation method, which acts as an aid to localize the needle in three-dimensional (3D) transrectal ultrasound (TRUS) guided prostate therapy. Methods: The procedure of locating the needle in a 3D TRUS image is a three-step process. First, the original 3D ultrasound image containing a needle is cropped; the cropped image is then converted to a binary format based on its histogram. Second, a 3D Hough transform based needle segmentation method is applied to the 3D binary image in order to locate the needle axis. The position of the needle endpoint is finally determined by an optimal threshold based analysis of the intensity probability distribution. The overall efficiency is improved through implementing a coarse-fine searching strategy. The proposed method was validated in tissue-mimicking agar phantoms, chicken breast phantoms, and 3D TRUS patient images from prostate brachytherapy and cryotherapy procedures by comparison to the manual segmentation. The robustness of the proposed approach was tested by means of varying parameters such as needle insertion angle, needle insertion length, binarization threshold level, and cropping size. Results: The validation results indicate that the proposed Hough transform based method is accurate and robust, with an achieved endpoint localization accuracy of 0.5 mm for agar phantom images, 0.7 mm for chicken breast phantom images, and 1 mm for in vivo patient cryotherapy and brachytherapy images. The mean execution time of needle segmentation algorithm was 2 s for a 3D TRUS image with size of 264 × 376 × 630 voxels. Conclusions

  16. Improvement in toxicity in high risk prostate cancer patients treated with image-guided intensity-modulated radiotherapy compared to 3D conformal radiotherapy without daily image guidance

    International Nuclear Information System (INIS)

    Image-guided radiotherapy (IGRT) facilitates the delivery of a very precise radiation dose. In this study we compare the toxicity and biochemical progression-free survival between patients treated with daily image-guided intensity-modulated radiotherapy (IG-IMRT) and 3D conformal radiotherapy (3DCRT) without daily image guidance for high risk prostate cancer (PCa). A total of 503 high risk PCa patients treated with radiotherapy (RT) and endocrine treatment between 2000 and 2010 were retrospectively reviewed. 115 patients were treated with 3DCRT, and 388 patients were treated with IG-IMRT. 3DCRT patients were treated to 76 Gy without daily image guidance and with 1–2 cm PTV margins. IG-IMRT patients were treated to 78 Gy based on daily image guidance of fiducial markers, and the PTV margins were 5–7 mm. Furthermore, the dose-volume constraints to both the rectum and bladder were changed with the introduction of IG-IMRT. The 2-year actuarial likelihood of developing grade ≥ 2 GI toxicity following RT was 57.3% in 3DCRT patients and 5.8% in IG-IMRT patients (p < 0.001). For GU toxicity the numbers were 41.8% and 29.7%, respectively (p = 0.011). On multivariate analysis, 3DCRT was associated with a significantly increased risk of developing grade ≥ 2 GI toxicity compared to IG-IMRT (p < 0.001, HR = 11.59 [CI: 6.67-20.14]). 3DCRT was also associated with an increased risk of developing GU toxicity compared to IG-IMRT. The 3-year actuarial biochemical progression-free survival probability was 86.0% for 3DCRT and 90.3% for IG-IMRT (p = 0.386). On multivariate analysis there was no difference in biochemical progression-free survival between 3DCRT and IG-IMRT. The difference in toxicity can be attributed to the combination of the IMRT technique with reduced dose to organs-at-risk, daily image guidance and margin reduction.

  17. 3D Chaotic Functions for Image Encryption

    Directory of Open Access Journals (Sweden)

    Pawan N. Khade

    2012-05-01

    This paper proposes a chaotic encryption algorithm based on the 3D logistic map, the 3D Chebyshev map, and the 3D and 2D Arnold cat maps for color image encryption. The 2D Arnold cat map is used for image pixel scrambling and the 3D Arnold cat map is used for R, G, and B component substitution. The 3D Chebyshev map is used for key generation and the 3D logistic map is used for image scrambling. The use of 3D chaotic functions in the encryption algorithm provides more security by applying shuffling and substitution to the encrypted image. The Chebyshev map is used for public key encryption and distribution of the generated private keys.
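    As a concrete illustration of two of the ingredients named above, the sketch below scrambles a square grayscale image with the 2D Arnold cat map and then XORs it with a keystream from a 1D logistic map. The paper's 3D variants of these maps, its Chebyshev key handling, and its RGB substitution step are not reproduced; parameters such as x0 = 0.3141 and r = 3.99 are arbitrary choices for the demo.

```python
import numpy as np

def arnold_cat_scramble(img, iterations=1):
    """Scramble a square image with the 2D Arnold cat map,
    (x, y) -> ((x + y) mod N, (x + 2y) mod N), used here for pixel shuffling."""
    n = img.shape[0]
    assert img.shape[1] == n, "Arnold cat map needs a square image"
    x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    out = img.copy()
    for _ in range(iterations):
        out = out[(x + y) % n, (x + 2 * y) % n]
    return out

def logistic_keystream(length, x0=0.3141, r=3.99):
    """Keystream bytes from a 1D logistic map x -> r*x*(1-x); the paper uses a
    3D variant, a 1D map keeps this sketch short."""
    x, ks = x0, np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        ks[i] = int(x * 255) & 0xFF
    return ks

# Scramble, then XOR-substitute, a toy grayscale image
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
scrambled = arnold_cat_scramble(img, iterations=5)
cipher = scrambled ^ logistic_keystream(scrambled.size).reshape(scrambled.shape)
```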

  18. The impact of 3D image guided prostate brachytherapy on therapeutic ratio: the Quebec University Hospital experience

    International Nuclear Information System (INIS)

    Purpose: to evaluate the impact of adaptive image-guided brachytherapy on therapeutic outcome and toxicity in prostate cancer. Materials and methods: the first 1110 patients treated at the C.H.U.Q.-l'Hotel-Dieu de Quebec were divided into five groups depending on the technique used for the implantation, the latest being intra-operative treatment planning. Biochemical disease-free survival (5-b.D.F.S.), toxicities and dosimetric parameters were compared between the groups. Results: 5-b.D.F.S. (A.S.T.R.O. + Houston) were 88.5% and 90.5% for the whole cohort. The use of intra-operative treatment planning resulted in better dosimetric parameters. Clinically, this resulted in a decreased use of urethral catheterization, from 18.8% in group 1 to 5.2% in group 5, and in a reduction in severe acute urinary side effects (21.3% vs 33.3%, P = 0.01) when compared with pre-planning. There were also fewer late gastrointestinal side effects (group 5 vs group 1: 26.6% vs 43.2%, P < 0.05). Finally, when compared with pre-planning, intra-operative treatment planning was associated with a smaller reduction between the planned D90 and the dose calculated on the CT scan 1 month after the implant (38 vs 66 Gy). Conclusion: the evolution of the prostate brachytherapy technique toward intra-operative treatment planning allowed dosimetric gains which resulted in significant clinical benefits by increasing the therapeutic ratio, mainly through decreased urinary toxicity. A longer follow-up will answer the question of whether there is an impact on 5-b.D.F.S. (authors)

  19. Magnetic resonance imaging-targeted, 3D transrectal ultrasound-guided fusion biopsy for prostate cancer: Quantifying the impact of needle delivery error on diagnosis

    International Nuclear Information System (INIS)

    Purpose: Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided “fusion” prostate biopsy intends to reduce the ∼23% false negative rate of clinical two-dimensional TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsies continue to yield false negatives. Therefore, the authors propose to investigate how biopsy system needle delivery error affects the probability of sampling each tumor, by accounting for uncertainties due to guidance system error, image registration error, and irregular tumor shapes. Methods: T2-weighted, dynamic contrast-enhanced T1-weighted, and diffusion-weighted prostate MRI and 3D TRUS images were obtained from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D tumor surfaces that were registered to the 3D TRUS images using an iterative closest point prostate surface-based method to yield 3D binary images of the suspicious regions in the TRUS context. The probability P of obtaining a sample of tumor tissue in one biopsy core was calculated by integrating a 3D Gaussian distribution over each suspicious region domain. Next, the authors performed an exhaustive search to determine the maximum root mean squared error (RMSE, in mm) of a biopsy system that gives P ≥ 95% for each tumor sample, and then repeated this procedure for equal-volume spheres corresponding to each tumor sample. Finally, the authors investigated the effect of probe-axis-direction error on measured tumor burden by studying the relationship between the error and estimated percentage of core involvement. Results: Given a 3.5 mm RMSE for contemporary fusion biopsy systems, P ≥ 95% for 21 out of 81 tumors. The authors determined that for a biopsy system with 3.5 mm RMSE, one cannot expect to sample tumors of approximately 1 cm³ or smaller with 95% probability with only one biopsy core. The predicted maximum RMSE giving P ≥ 95% for each tumor was

  20. Magnetic resonance imaging-targeted, 3D transrectal ultrasound-guided fusion biopsy for prostate cancer: Quantifying the impact of needle delivery error on diagnosis

    Energy Technology Data Exchange (ETDEWEB)

    Martin, Peter R., E-mail: pmarti46@uwo.ca [Department of Medical Biophysics, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Cool, Derek W. [Department of Medical Imaging, The University of Western Ontario, London, Ontario N6A 3K7, Canada and Robarts Research Institute, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Romagnoli, Cesare [Department of Medical Imaging, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Fenster, Aaron [Department of Medical Biophysics, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Department of Medical Imaging, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Robarts Research Institute, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Ward, Aaron D. [Department of Medical Biophysics, The University of Western Ontario, London, Ontario N6A 3K7 (Canada); Department of Oncology, The University of Western Ontario, London, Ontario N6A 3K7 (Canada)

    2014-07-15

    Purpose: Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided “fusion” prostate biopsy intends to reduce the ∼23% false negative rate of clinical two-dimensional TRUS-guided sextant biopsy. Although it has been reported to double the positive yield, MRI-targeted biopsies continue to yield false negatives. Therefore, the authors propose to investigate how biopsy system needle delivery error affects the probability of sampling each tumor, by accounting for uncertainties due to guidance system error, image registration error, and irregular tumor shapes. Methods: T2-weighted, dynamic contrast-enhanced T1-weighted, and diffusion-weighted prostate MRI and 3D TRUS images were obtained from 49 patients. A radiologist and radiology resident contoured 81 suspicious regions, yielding 3D tumor surfaces that were registered to the 3D TRUS images using an iterative closest point prostate surface-based method to yield 3D binary images of the suspicious regions in the TRUS context. The probability P of obtaining a sample of tumor tissue in one biopsy core was calculated by integrating a 3D Gaussian distribution over each suspicious region domain. Next, the authors performed an exhaustive search to determine the maximum root mean squared error (RMSE, in mm) of a biopsy system that gives P ≥ 95% for each tumor sample, and then repeated this procedure for equal-volume spheres corresponding to each tumor sample. Finally, the authors investigated the effect of probe-axis-direction error on measured tumor burden by studying the relationship between the error and estimated percentage of core involvement. Results: Given a 3.5 mm RMSE for contemporary fusion biopsy systems, P ≥ 95% for 21 out of 81 tumors. The authors determined that for a biopsy system with 3.5 mm RMSE, one cannot expect to sample tumors of approximately 1 cm³ or smaller with 95% probability with only one biopsy core. The predicted maximum RMSE giving P ≥ 95% for each
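    A hedged numerical sketch of the central quantity P: instead of analytically integrating the 3D Gaussian over each tumor domain as the authors do, the code below estimates P by Monte Carlo sampling of needle placements over a binary tumor mask, treating the core as a point sample and the delivery error as isotropic. The voxel size, mask, and RMSE value are toy assumptions.

```python
import numpy as np

def sampling_probability(tumor_mask, voxel_mm, target_mm, rmse_mm, n_draws=100_000, rng=None):
    """Monte Carlo estimate of the probability P that a single biopsy core hits the
    tumor, given an isotropic 3D Gaussian needle-placement error.

    tumor_mask : 3D boolean array (True inside the suspicious region)
    voxel_mm   : voxel size in mm (isotropic, assumed)
    target_mm  : intended target point in mm (array-like, length 3)
    rmse_mm    : root-mean-squared delivery error in mm; for an isotropic
                 3D Gaussian, per-axis sigma = rmse / sqrt(3)
    """
    rng = np.random.default_rng() if rng is None else rng
    sigma = rmse_mm / np.sqrt(3.0)
    hits_pos = np.asarray(target_mm) + rng.normal(0.0, sigma, size=(n_draws, 3))
    idx = np.round(hits_pos / voxel_mm).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(tumor_mask.shape)), axis=1)
    hit = np.zeros(n_draws, dtype=bool)
    hit[inside] = tumor_mask[idx[inside, 0], idx[inside, 1], idx[inside, 2]]
    return hit.mean()

# Toy example: 10 mm spherical "tumor" in a 64^3 volume of 1 mm voxels
zz, yy, xx = np.ogrid[:64, :64, :64]
mask = (zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2 <= 5 ** 2
print(sampling_probability(mask, voxel_mm=1.0, target_mm=(32, 32, 32), rmse_mm=3.5))
```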

  1. 3D ultrafast ultrasound imaging in vivo

    International Nuclear Information System (INIS)

    Very high frame rate ultrasound imaging has recently allowed for the extension of the applications of echography to new fields of study such as the functional imaging of the brain, cardiac electrophysiology, and the quantitative imaging of the intrinsic mechanical properties of tumors, to name a few, non-invasively and in real time. In this study, we present the first implementation of Ultrafast Ultrasound Imaging in 3D based on the use of either diverging or plane waves emanating from a sparse virtual array located behind the probe. It achieves high contrast and resolution while maintaining imaging rates of thousands of volumes per second. A customized portable ultrasound system was developed to sample 1024 independent channels and to drive a 32 × 32 matrix-array probe. Its ability to track in 3D transient phenomena occurring in the millisecond range within a single ultrafast acquisition was demonstrated for 3D Shear-Wave Imaging, 3D Ultrafast Doppler Imaging, and, finally, 3D Ultrafast combined Tissue and Flow Doppler Imaging. The propagation of shear waves was tracked in a phantom and used to characterize its stiffness. 3D Ultrafast Doppler was used to obtain 3D maps of Pulsed Doppler, Color Doppler, and Power Doppler quantities in a single acquisition and revealed, at thousands of volumes per second, the complex 3D flow patterns occurring in the ventricles of the human heart during an entire cardiac cycle, as well as the 3D in vivo interaction of blood flow and wall motion during the pulse wave in the carotid at the bifurcation. This study demonstrates the potential of 3D Ultrafast Ultrasound Imaging for the 3D mapping of stiffness, tissue motion, and flow in humans in vivo and promises new clinical applications of ultrasound with reduced intra- and inter-observer variability. (fast track communication)

  2. 3D molecular imaging SIMS

    Energy Technology Data Exchange (ETDEWEB)

    Gillen, Greg [Surface and Microanalysis Science Division, National Institute of Standards and Technology, Gaithersburg, MD 20899-8371 (United States)]. E-mail: Greg.gillen@nist.gov; Fahey, Albert [Surface and Microanalysis Science Division, National Institute of Standards and Technology, Gaithersburg, MD 20899-8371 (United States); Wagner, Matt [Surface and Microanalysis Science Division, National Institute of Standards and Technology, Gaithersburg, MD 20899-8371 (United States); Mahoney, Christine [Surface and Microanalysis Science Division, National Institute of Standards and Technology, Gaithersburg, MD 20899-8371 (United States)

    2006-07-30

    Thin monolayer and bilayer films of spin cast poly(methyl methacrylate) (PMMA), poly(2-hydroxyethyl methacrylate) (PHEMA), poly(lactic) acid (PLA) and PLA doped with several pharmaceuticals have been analyzed by dynamic SIMS using SF₅⁺ polyatomic primary ion bombardment. Each of these systems exhibited minimal primary beam-induced degradation under cluster ion bombardment, allowing molecular depth profiles to be obtained through the film. By combining secondary ion imaging with depth profiling, three-dimensional molecular image depth profiles have been obtained from these systems. In another approach, bevel cross-sections are cut in the samples with the SF₅⁺ primary ion beam to produce a laterally magnified cross-section of the sample that does not contain the beam-induced damage that would be induced by conventional focussed ion beam (FIB) cross-sectioning. The bevel surface can then be examined using cluster SIMS imaging or another appropriate microanalysis technique.

  3. 3D Backscatter Imaging System

    Science.gov (United States)

    Turner, D. Clark (Inventor); Whitaker, Ross (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises both a radiation source for irradiating an object that is rotationally movable about the object, and a detector for detecting backscattered radiation from the object that can be disposed on substantially the same side of the object as the source and which can be rotationally movable about the object. The detector can be separated into multiple detector segments with each segment having a single line of sight projection through the object and so detects radiation along that line of sight. Thus, each detector segment can isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three dimensional reconstruction of the object. Other embodiments are described.

  4. 3D-printed guiding templates for improved osteosarcoma resection

    Science.gov (United States)

    Ma, Limin; Zhou, Ye; Zhu, Ye; Lin, Zefeng; Wang, Yingjun; Zhang, Yu; Xia, Hong; Mao, Chuanbin

    2016-03-01

    Osteosarcoma resection is challenging due to the variable location of tumors and their proximity to surrounding tissues. It also carries a high risk of postoperative complications. To overcome the challenge of precise osteosarcoma resection, computer-aided design (CAD) was used to design patient-specific guiding templates for osteosarcoma resection on the basis of the computed tomography (CT) scan and magnetic resonance imaging (MRI) of the osteosarcoma of human patients. A 3D printing technique was then used to fabricate the guiding templates. The guiding templates were used to guide the osteosarcoma surgery, leading to more precise resection of the tumorous bone and implantation of the bone implants, less blood loss, shorter operation time and reduced radiation exposure during the operation. Follow-up studies show that the patients recovered well, reaching a mean Musculoskeletal Tumor Society score of 27.125.

  5. Phenotypic transition maps of 3D breast acini obtained by imaging-guided agent-based modeling

    Energy Technology Data Exchange (ETDEWEB)

    Tang, Jonathan; Enderling, Heiko; Becker-Weimann, Sabine; Pham, Christopher; Polyzos, Aris; Chen, Chen-Yi; Costes, Sylvain V

    2011-02-18

    We introduce an agent-based model of epithelial cell morphogenesis to explore the complex interplay between apoptosis, proliferation, and polarization. By varying the activity levels of these mechanisms we derived phenotypic transition maps of normal and aberrant morphogenesis. These maps identify homeostatic ranges and morphologic stability conditions. The agent-based model was parameterized and validated using novel high-content image analysis of mammary acini morphogenesis in vitro, with a focus on time-dependent cell densities, proliferation and death rates, as well as acini morphologies. Model simulations reveal that apoptosis is necessary and sufficient for initiating lumen formation, but that cell polarization is the pivotal mechanism for maintaining physiological epithelium morphology and acini sphericity. Furthermore, simulations highlight that acinus growth arrest in normal acini can be achieved by controlling the fraction of proliferating cells. Interestingly, our simulations reveal a synergism between polarization and apoptosis in enhancing growth arrest. After validating the model with experimental data from a normal human breast cell line (MCF10A), the system was challenged to predict the growth of MCF10A where AKT-1 was overexpressed, leading to reduced apoptosis. As previously reported, this led to non-growth-arrested acini, with very large sizes and partially filled lumen. However, surprisingly, image analysis revealed a much lower nuclear density than observed for normal acini. The growth kinetics indicate that these acini grew faster than the cells comprising them. The in silico model could not replicate this behavior, contradicting the classic paradigm that ductal carcinoma in situ is only the result of high proliferation and low apoptosis. Our simulations suggest that overexpression of AKT-1 must also perturb cell-cell and cell-ECM communication, reminding us that extracellular context can dictate cellular behavior.
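    To make the ingredients of such a model concrete, here is a deliberately cartoonish agent-based sketch in which cells proliferate, undergo apoptosis, or polarize (and stop cycling) with fixed per-step probabilities. It is not the paper's parameterization or spatial model; sweeping the rates merely illustrates how a phenotypic transition map could be tabulated.

```python
import numpy as np

def simulate_acinus(p_prolif=0.04, p_apop=0.01, p_polarize=0.02, steps=500, rng=None):
    """Cartoon agent-based model of acinus growth: each cycling cell independently
    proliferates, dies, or polarizes (polarized cells stop cycling).
    Rates and rules are illustrative only, not the parameterization of the paper."""
    rng = np.random.default_rng() if rng is None else rng
    cycling, polarized = 10, 0                    # start from a small cell cluster
    history = []
    for _ in range(steps):
        births = rng.binomial(cycling, p_prolif)
        deaths = rng.binomial(cycling, p_apop)    # apoptosis clears the lumen
        newly_polarized = rng.binomial(cycling, p_polarize)
        cycling = max(cycling + births - deaths - newly_polarized, 0)
        polarized += newly_polarized
        history.append(cycling + polarized)
    return history

# Sweeping apoptosis/polarization rates yields a crude "phenotypic transition map"
for p_apop in (0.0, 0.01, 0.03):
    print(p_apop, simulate_acinus(p_apop=p_apop)[-1])
```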

  6. Dosimetric analysis of 3D image-guided HDR brachytherapy planning for the treatment of cervical cancer: is point A-based dose prescription still valid in image-guided brachytherapy?

    Science.gov (United States)

    Kim, Hayeon; Beriwal, Sushil; Houser, Chris; Huq, M Saiful

    2011-01-01

    The purpose of this study was to analyze the dosimetric outcome of 3D image-guided high-dose-rate (HDR) brachytherapy planning for cervical cancer treatment and to compare dose coverage of the high-risk clinical target volume (HRCTV) to the traditional Point A dose. Thirty-two patients with stage IA2-IIIB cervical cancer were treated using computed tomography/magnetic resonance imaging-based image-guided HDR brachytherapy (IGBT). The brachytherapy dose prescription was 5.0-6.0 Gy per fraction for a total of 5 fractions. The HRCTV and organs at risk (OARs) were delineated following the GYN GEC/ESTRO guidelines. Total doses for HRCTV, OARs, Point A, and Point T from external beam radiotherapy and brachytherapy were summed and normalized to a biologically equivalent dose of 2 Gy per fraction (EQD2). The total planned D90 for HRCTV was 80-85 Gy, whereas the dose to 2 mL of bladder, rectum, and sigmoid was limited to 85 Gy, 75 Gy, and 75 Gy, respectively. The mean D90 and its standard deviation for HRCTV was 83.2 ± 4.3 Gy, significantly higher (p ...) ... IGBT in HDR cervical cancer treatment needs an advanced concept of dosimetric evaluation, together with clinical outcome data on whether this approach improves local control and/or decreases toxicities. PMID:20488690
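    The EQD2 normalization mentioned above follows the standard linear-quadratic conversion; the small helper below applies it to a uniformly fractionated course. The α/β values of 10 Gy (tumor) and 3 Gy (organs at risk) are commonly quoted defaults, not values taken from this study.

```python
def eqd2(dose_per_fraction_gy, n_fractions, alpha_beta_gy):
    """Equieffective dose in 2-Gy fractions for a uniformly fractionated course,
    using the standard linear-quadratic conversion
    EQD2 = n*d * (d + alpha/beta) / (2 + alpha/beta)."""
    d, ab = dose_per_fraction_gy, alpha_beta_gy
    return n_fractions * d * (d + ab) / (2.0 + ab)

# HDR brachytherapy boost of 5 x 6 Gy, tumor alpha/beta = 10 Gy, OAR alpha/beta = 3 Gy
print(eqd2(6.0, 5, 10.0))   # tumor-weighted EQD2 of the boost = 40 Gy
print(eqd2(6.0, 5, 3.0))    # OAR-weighted EQD2 of the boost = 54 Gy
```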

  7. A 3D global-to-local deformable mesh model based registration and anatomy-constrained segmentation method for image guided prostate radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Zhou Jinghao; Kim, Sung; Jabbour, Salma; Goyal, Sharad; Haffty, Bruce; Chen, Ting; Levinson, Lydia; Metaxas, Dimitris; Yue, Ning J. [Department of Radiation Oncology, UMDNJ-Robert Wood Johnson Medical School, Cancer Institute of New Jersey, New Brunswick, New Jersey 08903 (United States); Department of Bioinformatics, UMDNJ-Robert Wood Johnson Medical School, Cancer Institute of New Jersey, New Brunswick, New Jersey 08903 (United States); Department of Radiation Oncology, UMDNJ-Robert Wood Johnson Medical School, Cancer Institute of New Jersey, New Brunswick, New Jersey 08903 (United States); Department of Computer Science, Rutgers, State University of New Jersey, Piscataway, New Jersey 08854 (United States); Department of Radiation Oncology, UMDNJ-Robert Wood Johnson Medical School, Cancer Institute of New Jersey, New Brunswick, New Jersey 08903 (United States)

    2010-03-15

    Purpose: In the external beam radiation treatment of prostate cancers, successful implementation of adaptive radiotherapy and conformal radiation dose delivery is highly dependent on precise and expeditious segmentation and registration of the prostate volume between the simulation and the treatment images. The purpose of this study is to develop a novel, fast, and accurate segmentation and registration method to increase the computational efficiency to meet the restricted clinical treatment time requirement in image guided radiotherapy. Methods: The method developed in this study used soft tissues to capture the transformation between the 3D planning CT (pCT) images and 3D cone-beam CT (CBCT) treatment images. The method incorporated a global-to-local deformable mesh model based registration framework as well as an automatic anatomy-constrained robust active shape model (ACRASM) based segmentation algorithm in the 3D CBCT images. The global registration was based on the mutual information method, and the local registration minimized the Euclidean distance of the corresponding nodal points from the global transformation of deformable mesh models, which implicitly used the information of the segmented target volume. The method was applied to six data sets of prostate cancer patients. Target volumes delineated by the same radiation oncologist on the pCT and CBCT were chosen as the benchmarks and were compared to the segmented and registered results. The distance-based and the volume-based estimators were used to quantitatively evaluate the results of segmentation and registration. Results: The ACRASM segmentation algorithm was compared to the original active shape model (ASM) algorithm by evaluating the values of the distance-based estimators. With respect to the corresponding benchmarks, the mean distance ranged from -0.85 to 0.84 mm for ACRASM and from -1.44 to 1.17 mm for ASM. The mean absolute distance ranged from 1.77 to 3.07 mm for ACRASM and from 2.45 to
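    The global step above is driven by mutual information; as a minimal, assumption-laden illustration of that similarity measure (not the authors' implementation), the snippet below estimates MI from a joint intensity histogram of two equally sized images or volumes.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information (in nats) between two intensity images/volumes of equal
    shape, estimated from their joint histogram; this is the kind of similarity
    driving the global registration step described above."""
    a = np.asarray(img_a, float).ravel()
    b = np.asarray(img_b, float).ravel()
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    nz = p_ab > 0
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

# Identical images give maximal MI; an unrelated image gives a much lower value
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
print(mutual_information(ref, ref), mutual_information(ref, rng.random((64, 64))))
```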

  8. Multiplane 3D superresolution optical fluctuation imaging

    CERN Document Server

    Geissbuehler, Stefan; Godinat, Aurélien; Bocchio, Noelia L; Dubikovskaya, Elena A; Lasser, Theo; Leutenegger, Marcel

    2013-01-01

    By switching fluorophores on and off in either a deterministic or a stochastic manner, superresolution microscopy has enabled the imaging of biological structures at resolutions well beyond the diffraction limit. Superresolution optical fluctuation imaging (SOFI) provides an elegant way of overcoming the diffraction limit in all three spatial dimensions by computing higher-order cumulants of image sequences of blinking fluorophores acquired with a conventional widefield microscope. So far, three-dimensional (3D) SOFI has only been demonstrated by sequential imaging of multiple depth positions. Here we introduce a versatile imaging scheme which allows for the simultaneous acquisition of multiple focal planes. Using 3D cross-cumulants, we show that the depth sampling can be increased. Consequently, the simultaneous acquisition of multiple focal planes reduces the acquisition time and hence the photo-bleaching of fluorescent markers. We demonstrate multiplane 3D SOFI by imaging the mitochondria network in fixed ...
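    A minimal sketch of the SOFI principle referenced above: the second-order cumulant of a blinking-fluorophore image sequence reduces, at zero lag, to the pixel-wise temporal variance. Higher orders, cross-cumulants between pixels or planes, and the multiplane optics are omitted; the toy emitters are synthetic.

```python
import numpy as np

def sofi2(image_stack, lag=0):
    """Second-order SOFI image from a stack of blinking-fluorophore frames
    (shape: time x rows x cols): the pixel-wise second-order auto-cumulant,
    i.e. the temporal (auto)covariance at the given lag."""
    stack = np.asarray(image_stack, float)
    fluct = stack - stack.mean(axis=0)            # remove the mean (DC) image
    if lag == 0:
        return (fluct ** 2).mean(axis=0)          # variance image
    return (fluct[:-lag] * fluct[lag:]).mean(axis=0)

# Toy stack: 500 frames of two blinking emitters on a noisy background
rng = np.random.default_rng(1)
frames = rng.normal(100, 1, size=(500, 32, 32))
blink = rng.random(500) > 0.5
frames[:, 16, 16] += 20 * blink
frames[:, 16, 20] += 20 * blink[::-1]
print(sofi2(frames)[16, 16], sofi2(frames)[16, 10])   # emitter pixel vs. background
```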

  9. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects of a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye-safety, (3) portability, and (4) work distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
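    For the stereo-vision route mentioned above, depth follows from disparity through the usual rectified-pair relation Z = f·B/d; the helper below simply evaluates that formula with illustrative numbers.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Depth of a scene point from a rectified stereo pair: Z = f * B / d,
    with focal length f in pixels, baseline B in metres and disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_length_px * baseline_m / disparity_px

# A point seen 25 px apart by two cameras 12 cm apart with an 800 px focal length
print(depth_from_disparity(25, 800, 0.12))   # -> 3.84 m
```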

  10. Micromachined Ultrasonic Transducers for 3-D Imaging

    DEFF Research Database (Denmark)

    Christiansen, Thomas Lehrmann

    Real-time ultrasound imaging is a widely used technique in medical diagnostics. Recently, ultrasound systems offering real-time imaging in 3-D have emerged. However, the high complexity of the transducer probes and the considerable increase in data to be processed compared to conventional 2-D ultr...

  11. 3D Stereo Visualization for Mobile Robot Tele-Guide

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    2006-01-01

    The use of 3D stereoscopic visualization may provide a user with higher comprehension of remote environments in tele-operation when compared to 2D viewing. In particular, it provides a higher perception of environment depth characteristics, spatial localization, remote ambient layout, as well as faster system learning and decision performance. Works in the literature have demonstrated how stereo vision contributes to improve perception of some depth cues, often for abstract tasks, while little can be found about the advantages of stereoscopic visualization in mobile robot tele-guide applications. This work intends to contribute to this aspect by investigating stereoscopic robot tele-guide under different conditions, including typical navigation scenarios and the use of synthetic and real images. The purpose of this work is also to investigate how user performance may vary when employing different display...

  12. Feasibility of 3D harmonic contrast imaging

    NARCIS (Netherlands)

    Voormolen, M.M.; Bouakaz, A.; Krenning, B.J.; Lancée, C.; Cate, ten F.; Jong, de N.

    2004-01-01

    Improved endocardial border delineation with the application of contrast agents should allow for less complex and faster tracing algorithms for left ventricular volume analysis. We developed a fast rotating phased array transducer for 3D imaging of the heart with harmonic capabilities making it suit

  13. Acute Toxicity After Image-Guided Intensity Modulated Radiation Therapy Compared to 3D Conformal Radiation Therapy in Prostate Cancer Patients

    Energy Technology Data Exchange (ETDEWEB)

    Wortel, Ruud C.; Incrocci, Luca [Department of Radiation Oncology, Erasmus Medical Center Cancer Institute, Rotterdam (Netherlands); Pos, Floris J.; Lebesque, Joos V.; Witte, Marnix G.; Heide, Uulke A. van der; Herk, Marcel van [Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam (Netherlands); Heemsbergen, Wilma D., E-mail: w.heemsbergen@nki.nl [Department of Radiation Oncology, Netherlands Cancer Institute, Amsterdam (Netherlands)

    2015-03-15

    Purpose: Image-guided intensity modulated radiation therapy (IG-IMRT) allows significant dose reductions to organs at risk in prostate cancer patients. However, clinical data identifying the benefits of IG-IMRT in daily practice are scarce. The purpose of this study was to compare dose distributions to organs at risk and acute gastrointestinal (GI) and genitourinary (GU) toxicity levels of patients treated to 78 Gy with either IG-IMRT or 3D-CRT. Methods and Materials: Patients treated with 3D-CRT (n=215) and IG-IMRT (n=260) receiving 78 Gy in 39 fractions within 2 randomized trials were selected. Dose surface histograms of anorectum, anal canal, and bladder were calculated. Identical toxicity questionnaires were distributed at baseline, prior to fraction 20 and 30 and at 90 days after treatment. Radiation Therapy Oncology Group (RTOG) grade ≥1, ≥2, and ≥3 endpoints were derived directly from questionnaires. Univariate and multivariate binary logistic regression analyses were applied. Results: The median volumes receiving 5 to 75 Gy were significantly lower (all P<.001) with IG-IMRT for anorectum, anal canal, and bladder. The mean dose to the anorectum was 34.4 Gy versus 47.3 Gy (P<.001), 23.6 Gy versus 44.6 Gy for the anal canal (P<.001), and 33.1 Gy versus 43.2 Gy for the bladder (P<.001). Significantly lower grade ≥2 toxicity was observed for proctitis, stool frequency ≥6/day, and urinary frequency ≥12/day. IG-IMRT resulted in significantly lower overall RTOG grade ≥2 GI toxicity (29% vs 49%, respectively, P=.002) and overall GU grade ≥2 toxicity (38% vs 48%, respectively, P=.009). Conclusions: A clinically meaningful reduction in dose to organs at risk and acute toxicity levels was observed in IG-IMRT patients, as a result of improved technique and tighter margins. Therefore reduced late toxicity levels can be expected as well; additional research is needed to quantify such reductions.

  14. 3D Membrane Imaging and Porosity Visualization

    KAUST Repository

    Sundaramoorthi, Ganesh

    2016-03-03

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore size in different layers orthogonal and parallel to the membrane surface. The 3D reconstruction additionally enabled visualization of pore interconnectivity in different parts of the membrane. The method was demonstrated for a block copolymer porous membrane and can be extended to other membranes with application in ultrafiltration, supports for forward osmosis, etc., offering a complete view of the transport paths in the membrane.
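    A sketch of what such a porosity evaluation can look like on a segmented volume (not the authors' algorithm): given a boolean pore mask, the per-layer porosity orthogonal to a chosen axis is simply the mean of the mask over the other two axes.

```python
import numpy as np

def layer_porosity(pore_mask, axis=0):
    """Porosity profile of a segmented 3D membrane volume: fraction of pore voxels
    in each slice orthogonal to the chosen axis (True = pore, False = material)."""
    mask = np.asarray(pore_mask, bool)
    other_axes = tuple(i for i in range(mask.ndim) if i != axis)
    return mask.mean(axis=other_axes)

# Toy asymmetric membrane: porosity increasing with depth
rng = np.random.default_rng(2)
depth_profile = np.linspace(0.1, 0.6, 100)                    # target porosity per layer
vol = rng.random((100, 128, 128)) < depth_profile[:, None, None]
print(layer_porosity(vol, axis=0)[:5])
```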

  15. 2D-3D image registration in diagnostic and interventional X-Ray imaging

    NARCIS (Netherlands)

    Bom, I.M.J. van der

    2010-01-01

    Clinical procedures that are conventionally guided by 2D x-ray imaging, may benefit from the additional spatial information provided by 3D image data. For instance, guidance of minimally invasive procedures with CT or MRI data provides 3D spatial information and visualization of structures that are

  16. FIRE: an open-software suite for real-time 2D/3D image registration for image guided radiotherapy research

    Science.gov (United States)

    Furtado, H.; Gendrin, C.; Spoerk, J.; Steiner, E.; Underwood, T.; Kuenzler, T.; Georg, D.; Birkfellner, W.

    2016-03-01

    Radiotherapy treatments have changed at a tremendously rapid pace. Dose delivered to the tumor has escalated while organs at risk (OARs) are better spared. The impact of moving tumors during dose delivery has become higher due to very steep dose gradients. Intra-fractional tumor motion has to be managed adequately to reduce errors in dose delivery. For tumors with large motion such as tumors in the lung, tracking is an approach that can reduce position uncertainty. Tumor tracking approaches range from purely image intensity based techniques to motion estimation based on surrogate tracking. Research efforts are often based on custom designed software platforms which take too much time and effort to develop. To address this challenge we have developed an open software platform especially focusing on tumor motion management. FLIRT is a freely available open-source software platform. The core method for tumor tracking is purely intensity based 2D/3D registration. The platform is written in C++ using the Qt framework for the user interface. The performance critical methods are implemented on the graphics processor using the CUDA extension. One registration can be as fast as 90ms (11Hz). This is suitable to track tumors moving due to respiration (~0.3Hz) or heartbeat (~1Hz). Apart from focusing on high performance, the platform is designed to be flexible and easy to use. Current use cases range from tracking feasibility studies, patient positioning and method validation. Such a framework has the potential of enabling the research community to rapidly perform patient studies or try new methods.

  17. 3D in Photoshop The Ultimate Guide for Creative Professionals

    CERN Document Server

    Gee, Zorana

    2010-01-01

    This is the first book of its kind that shows you everything you need to know to create or integrate 3D into your designs using Photoshop CS5 Extended. If you are completely new to 3D, you'll find the great tips and tricks in 3D in Photoshop invaluable as you get started. There is also a wealth of detailed technical insight for those who want more. Written by the true experts - Adobe's own 3D team - and with contributions from some of the best and brightest digital artists working today, this reference guide will help you to create a comprehensive workflow that suits your specific needs. Along

  18. SU-C-18A-04: 3D Markerless Registration of Lung Based On Coherent Point Drift: Application in Image Guided Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Nasehi Tehrani, J; Wang, J [UT Southwestern Medical Center, Dallas, TX (United States); Guo, X [University of Texas at Dallas, Richardson, TX (United States); Yang, Y [The University of New Mexico, New Mexico, NM (United States)

    2014-06-01

    Purpose: This study evaluated a new probabilistic non-rigid registration method, called coherent point drift, for real-time 3D markerless registration of lung motion during radiotherapy. Methods: The Dir-lab 4DCT image datasets (www.dir-lab.com) were used to create 3D boundary element models of the lungs. In the first step, the 3D surfaces of the lungs in respiration phases T0 and T50 were segmented and divided into a finite number of linear triangular elements. Each triangle is a two-dimensional object with three vertices (each vertex has three degrees of freedom). One of the main features of lung motion is velocity coherence, so the vertices creating the mesh of the lungs should also reflect the features and degrees of freedom of the lung structure. This means that vertices close to each other tend to move coherently. In the next step, we implemented a probabilistic non-rigid registration method called coherent point drift to calculate the nonlinear displacement of vertices between different expiratory phases. Results: The method was applied to images of the 10 patients in the Dir-lab dataset. The normal distribution of vertices to the origin for each expiratory stage was calculated. The results show that the maximum registration error between different expiratory phases is less than 0.4 mm (0.38 mm SI, 0.33 mm AP, 0.29 mm RL). This is a reliable method for calculating the displacement vector and the degrees of freedom (DOFs) of the lung structure in radiotherapy. Conclusions: We evaluated a new 3D registration method for the distribution of vertices of the lung mesh. In this technique, the velocity coherence of lung motion is incorporated as a penalty in the regularization function. The results indicate that high registration accuracy is achievable with CPD. This method is helpful for calculating the displacement vector and analyzing possible physiological and anatomical changes during treatment.
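    For readers unfamiliar with coherent point drift, the sketch below implements only its E-step: the Gaussian-mixture correspondence probabilities between two point sets (Myronenko and Song's formulation). The coherent, regularized displacement update that gives CPD its name, and anything specific to the lung meshes above, is omitted.

```python
import numpy as np

def cpd_responsibilities(X, Y, sigma2, w=0.0):
    """E-step of coherent point drift: posterior probability P[m, n] that GMM
    centroid Y[m] generated data point X[n], with uniform-outlier weight w.
    Only the correspondence estimation is shown; the full CPD M-step (coherent,
    regularized displacement of Y) is omitted for brevity."""
    N, D = X.shape
    M, _ = Y.shape
    d2 = ((X[None, :, :] - Y[:, None, :]) ** 2).sum(axis=2)      # (M, N) squared distances
    num = np.exp(-d2 / (2.0 * sigma2))
    outlier = w / (1.0 - w) * (2.0 * np.pi * sigma2) ** (D / 2.0) * M / N if w > 0 else 0.0
    denom = num.sum(axis=0, keepdims=True) + outlier
    return num / denom

# Two noisy samplings of the same surface patch
rng = np.random.default_rng(3)
X = rng.random((200, 3))
Y = X[:150] + rng.normal(0, 0.01, size=(150, 3))
P = cpd_responsibilities(X, Y, sigma2=0.01)
print(P.shape, P.sum(axis=0).max())   # each column sums to (at most) 1
```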

  19. Optical coherence tomography for ultrahigh-resolution 3D imaging of cell development and real-time guiding for photodynamic therapy

    Science.gov (United States)

    Wang, Tianshi; Zhen, Jinggao; Wang, Bo; Xue, Ping

    2009-11-01

    Optical coherence tomography is an emerging technique for cross-sectional imaging with high spatial resolution on the micrometer scale. It enables in vivo, non-invasive imaging with no need to contact the sample and is widely used in biological and clinical applications. In this paper optical coherence tomography is demonstrated for both biological and clinical applications. For the biological application, a white-light interference microscope is developed for ultrahigh-resolution full-field optical coherence tomography (full-field OCT) to implement 3D imaging of biological tissue. A spatial resolution of 0.9 μm × 1.1 μm (transverse × axial) is achieved. A system sensitivity of 85 dB is obtained at an acquisition time of 5 s per image. The development of a mouse embryo is studied layer by layer with our ultrahigh-resolution full-field OCT. For the clinical application, a handheld optical coherence tomography system is designed for real-time, in situ imaging of port wine stain (PWS) patients and to supply surgical guidance for photodynamic therapy (PDT) treatment. A light source with a center wavelength of 1310 nm, a -3 dB wavelength range of 90 nm and an optical power of 9 mW is utilized. A lateral resolution of 8 μm and an axial resolution of 7 μm, at a rate of 2 frames per second and with 102 dB sensitivity, are achieved in biological tissue. It is shown that OCT images distinguish well between normal and PWS tissues in the clinic and can serve as a valuable diagnostic tool for PDT treatment.

  20. Projective 3D-reconstruction of Uncalibrated Endoscopic Images

    Directory of Open Access Journals (Sweden)

    P. Faltin

    2010-01-01

    The most common medical diagnostic method for urinary bladder cancer is cystoscopy. This inspection of the bladder is performed by a rigid endoscope, which is usually guided close to the bladder wall. This causes a very limited field of view; difficulty of navigation is aggravated by the usage of angled endoscopes. These factors cause difficulties in orientation and visual control. To overcome this problem, the paper presents a method for extracting 3D information from uncalibrated endoscopic image sequences and for reconstructing the scene content. The method uses the SURF algorithm to extract features from the images and relates the images by advanced matching. To stabilize the matching, the epipolar geometry is extracted for each image pair using a modified RANSAC algorithm. Afterwards, these matched point pairs are used to generate point triplets over three images and to describe the trifocal geometry. The 3D scene points are determined by applying triangulation to the matched image points. Thus, these points are used to generate a projective 3D reconstruction of the scene, and provide the first step for further metric reconstructions.
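    A hedged two-view analogue of the pipeline described above, using OpenCV: ORB features stand in for the patented SURF detector, RANSAC estimates the fundamental matrix, canonical projective cameras are built from it, and matched points are triangulated. The trifocal (three-view) stage is omitted, and the input file names are placeholders.

```python
import cv2
import numpy as np

def projective_reconstruction(img1, img2):
    """Two-view projective reconstruction from uncalibrated endoscopic frames.
    The paper uses SURF features and a trifocal (three-view) stage; this sketch
    substitutes ORB (patent-free) and stops at the two-view case."""
    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    pts1 = np.float64([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float64([kp2[m.trainIdx].pt for m in matches])

    # Robust epipolar geometry (fundamental matrix) via RANSAC
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
    inl = inlier_mask.ravel().astype(bool)
    pts1, pts2 = pts1[inl], pts2[inl]

    # Canonical projective cameras: P1 = [I | 0], P2 = [[e']_x F | e']
    e2 = np.linalg.svd(F.T)[2][-1]            # epipole in image 2 (null vector of F^T)
    e2_x = np.array([[0, -e2[2], e2[1]], [e2[2], 0, -e2[0]], [-e2[1], e2[0], 0]])
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([e2_x @ F, e2.reshape(3, 1)])

    X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)   # 4 x N homogeneous points
    return (X_h[:3] / X_h[3]).T                           # N x 3, projective coordinates

# "frame_a.png"/"frame_b.png" are placeholder file names for two endoscopic frames
img_a = cv2.imread("frame_a.png", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("frame_b.png", cv2.IMREAD_GRAYSCALE)
points_3d = projective_reconstruction(img_a, img_b)
```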

  1. 3D Guided Wave Motion Analysis on Laminated Composites

    Science.gov (United States)

    Tian, Zhenhua; Leckey, Cara; Yu, Lingyu

    2013-01-01

    Ultrasonic guided waves have proved useful for structural health monitoring (SHM) and nondestructive evaluation (NDE) due to their ability to propagate long distances with less energy loss compared to bulk waves and due to their sensitivity to small defects in the structure. Analysis of actively transmitted ultrasonic signals has long been used to detect and assess damage. However, there remain many challenging tasks for guided wave based SHM due to the complexity involved with propagating guided waves, especially in the case of composite materials. The multimodal nature of the ultrasonic guided waves complicates the related damage analysis. This paper presents results from parallel 3D elastodynamic finite integration technique (EFIT) simulations used to acquire 3D wave motion in the subject laminated carbon fiber reinforced polymer composites. The acquired 3D wave motion is then analyzed by frequency-wavenumber analysis to study the wave propagation and interaction in the composite laminate. The frequency-wavenumber analysis enables the study of individual modes and visualization of mode conversion. Delamination damage has been incorporated into the EFIT model to generate "damaged" data. The potential for damage detection in laminated composites is discussed in the end.
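    The frequency-wavenumber analysis mentioned above is, at its core, a 2D Fourier transform of the wavefield sampled over time and one spatial line; the sketch below computes such an f-k magnitude spectrum for a synthetic single-mode wave. Actual EFIT output and the 3D (kx, ky, f) case are not reproduced.

```python
import numpy as np

def frequency_wavenumber(wavefield, dt, dx):
    """Frequency-wavenumber (f-k) spectrum of a guided-wave field sampled over
    time and one spatial line, wavefield[t, x]. Individual wave modes appear as
    separate ridges in |W(f, k)|."""
    nt, nx = wavefield.shape
    spectrum = np.fft.fftshift(np.fft.fft2(wavefield))
    freqs = np.fft.fftshift(np.fft.fftfreq(nt, d=dt))         # Hz
    wavenumbers = np.fft.fftshift(np.fft.fftfreq(nx, d=dx))   # 1/m
    return np.abs(spectrum), freqs, wavenumbers

# Synthetic single-mode wave: 100 kHz tone propagating with 200 rad/m wavenumber
dt, dx = 1e-7, 1e-3
t = np.arange(1024) * dt
x = np.arange(256) * dx
field = np.sin(2 * np.pi * 100e3 * t[:, None] - 200.0 * x[None, :])
mag, f, k = frequency_wavenumber(field, dt, dx)
print(f[np.unravel_index(mag.argmax(), mag.shape)[0]])   # peak near +/- 100 kHz
```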

  2. Handbook of 3D machine vision optical metrology and imaging

    CERN Document Server

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  3. Progress in 3D imaging and display by integral imaging

    Science.gov (United States)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As their main value, 3D monitors should provide observers with different perspectives of a 3D scene by simply varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass-market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to the advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since Integral Imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been the realization of a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have been aimed at overcoming some of the classical limitations of InI systems, like the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, or the limited range of viewing angles of InI monitors.

  4. Vision-Guided Robot Control for 3D Object Recognition and Manipulation

    OpenAIRE

    S. Q. Xie; Haemmerle, E.; Cheng, Y; Gamage, P

    2008-01-01

    Research into a fully automated vision-guided robot for identifying, visualising and manipulating 3D objects with complicated shapes is still undergoing major development worldwide. The current trend is toward the development of more robust, intelligent and flexible vision-guided robot systems to operate in highly dynamic environments. The theoretical basis of image plane dynamics and robust image-based robot systems capable of manipulating moving objects still need further research. Researc...

  5. Perception of detail in 3D images

    NARCIS (Netherlands)

    Heyndrickx, I.; Kaptein, R.

    2009-01-01

    A lot of current 3D displays suffer from the fact that their spatial resolution is lower compared to their 2D counterparts. One reason for this is that the multiple views needed to generate 3D are often spatially multiplexed. Besides this, imperfect separation of the left- and right-eye view leads t

  6. Super deep 3D images from a 3D omnifocus video camera.

    Science.gov (United States)

    Iizuka, Keigo

    2012-02-20

    When using stereographic image pairs to create three-dimensional (3D) images, a deep depth of field in the original scene enhances the depth perception in the 3D image. The omnifocus video camera has no depth of field limitations and produces images that are in focus throughout. By installing an attachment on the omnifocus video camera, real-time super deep stereoscopic pairs of video images were obtained. The deeper depth of field creates a larger perspective image shift, which makes greater demands on the binocular fusion of human vision. A means of reducing the perspective shift without harming the depth of field was found.

  7. 3D Image Synthesis for B-Reps Objects

    Institute of Scientific and Technical Information of China (English)

    黄正东; 彭群生; et al.

    1991-01-01

    This paper presents a new algorithm for generating 3D images of B-reps objects with trimmed surface boundaries. The 3D image is a discrete voxel-map representation within a Cubic Frame Buffer (CFB). The definitions of 3D images for curves, surfaces and solid objects are introduced, which imply the connectivity and fidelity requirements. An Adaptive Forward Differencing matrix (AFD-matrix) for 1D-3D manifolds in 3D space is developed. By setting rules to update the AFD-matrix, the forward difference direction and step size can be adjusted. Finally, an efficient algorithm is presented based on the AFD-matrix concept for converting an object in 3D space to a 3D image in 3D discrete space.

  8. IMAGE SELECTION FOR 3D MEASUREMENT BASED ON NETWORK DESIGN

    Directory of Open Access Journals (Sweden)

    T. Fuse

    2015-05-01

    3D models have been widely used thanks to the spread of freely available software. At the same time, enormous numbers of images can be easily acquired and are increasingly utilized for creating 3D models. However, the creation of 3D models from a huge number of images takes a lot of time and effort, so efficient 3D measurement is required. Alongside efficiency, measurement accuracy is also required. This paper develops an image selection method based on network design, in the sense of surveying network construction. The proposed method uses an image connectivity graph. By this, the image selection problem is regarded as a combinatorial optimization problem and the graph cuts technique can be applied. Additionally, in the process of 3D reconstruction, low-quality and highly similar images are extracted and removed. Through the experiments, the significance of the proposed method is confirmed, and its potential for efficient and accurate 3D measurement is implied.

  9. 3D spatial resolution and spectral resolution of interferometric 3D imaging spectrometry.

    Science.gov (United States)

    Obara, Masaki; Yoshimori, Kyu

    2016-04-01

    Recently developed interferometric 3D imaging spectrometry (J. Opt. Soc. Am. A 18, 765 (2001), doi:10.1364/JOSAA.18.000765) enables the spectral information and 3D spatial information of an incoherently illuminated or self-luminous object to be obtained simultaneously. Using this method, we can obtain multispectral components of complex holograms, which correspond directly to the phase distribution of the wavefronts propagated from the polychromatic object. This paper focuses on the analysis of spectral resolution and 3D spatial resolution in interferometric 3D imaging spectrometry. Our analysis is based on a novel analytical impulse response function defined over four-dimensional space. We found that the experimental results agree well with the theoretical prediction. This work also suggests a new criterion and estimation method for the 3D spatial resolution of digital holography. PMID:27139648

  10. Automatic 2D-to-3D image conversion using 3D examples from the internet

    Science.gov (United States)

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. This, to a degree, emulates the results one would expect from the use of an extremely large 3D
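    Two pieces of the described pipeline lend themselves to a short sketch: the pixel-wise median used to combine the retrieved disparity fields, and a naive disparity-based warp to synthesize the right view. Occlusion and hole handling, which the paper treats "in the usual way", are deliberately ignored here, and all inputs are synthetic.

```python
import numpy as np

def combine_disparities(disparity_maps):
    """Combine per-match disparity fields from several retrieved stereopairs by a
    pixel-wise median, as a robust consensus estimate for the 2D query."""
    return np.median(np.stack(disparity_maps, axis=0), axis=0)

def render_right_view(left, disparity):
    """Naive depth-image-based rendering: shift each pixel of the left image by its
    (integer) disparity to synthesize a right view. Occlusions/holes are ignored."""
    h, w = left.shape[:2]
    right = np.zeros_like(left)
    cols = np.arange(w)
    for y in range(h):
        new_cols = np.clip(cols - disparity[y].astype(int), 0, w - 1)
        right[y, new_cols] = left[y, cols]
    return right

# Three noisy candidate disparity fields for a 2D query image
rng = np.random.default_rng(4)
true_disp = np.tile(np.linspace(2, 12, 64), (48, 1))
candidates = [true_disp + rng.normal(0, 2, true_disp.shape) for _ in range(3)]
left = rng.integers(0, 255, size=(48, 64), dtype=np.uint8)
right = render_right_view(left, combine_disparities(candidates))
```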

  11. Multiple 2D video/3D medical image registration algorithm

    Science.gov (United States)

    Clarkson, Matthew J.; Rueckert, Daniel; Hill, Derek L.; Hawkes, David J.

    2000-06-01

    In this paper we propose a novel method to register at least two video images to a 3D surface model. The potential applications of such a registration method could be in image guided surgery, high precision radiotherapy, robotics or computer vision. Registration is performed by optimizing a similarity measure with respect to the pose parameters. The similarity measure is based on 'photo-consistency' and computes, for each surface point, how consistent the corresponding video image information in each view is with a lighting model. We took four video views of a volunteer's face, and used an independent method to reconstruct a surface that was intrinsically registered to the four views. In addition, we extracted a skin surface from the volunteer's MR scan. The surfaces were misregistered from a gold standard pose and our algorithm was used to register both types of surfaces to the video images. For the reconstructed surface, the mean 3D error was 1.53 mm. For the MR surface, the standard deviation of the pose parameters after registration ranged from 0.12 to 0.70 mm and degrees. The performance of the algorithm is accurate, precise and robust.

  12. iClone 4.31 3D Animation Beginner's Guide

    CERN Document Server

    McCallum, MD

    2011-01-01

    This book is a part of the Beginner's guide series, wherein you will quickly start doing tasks with precise instructions. Then the tasks will be followed by explanation and then a challenging task or a multiple choice question about the topic just covered. Do you have a story to tell or an idea to illustrate? This book is aimed at film makers, video producers/compositors, vxf artists or 3D artists/designers like you who have no previous experience with iClone. If you have that drive inside you to entertain people via the internet on sites like YouTube or Vimeo, create a superb presentation vid

  13. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Morimoto, A.K.; Bow, W.J.; Strong, D.S. [and others

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  14. Computer-based image analysis in radiological diagnostics and image-guided therapy: 3D reconstruction, contrast medium dynamics, surface analysis, radiation therapy and multi-modal image fusion

    CERN Document Server

    Beier, J

    2001-01-01

    This book deals with substantial subjects of postprocessing and analysis of radiological image data, with a particular emphasis on pulmonary themes. For a multitude of purposes the developed methods and procedures can be directly transferred to other, non-pulmonary applications. The work presented here is structured in 14 chapters, each describing a selected research complex. The chapter order reflects the sequence of the processing steps, starting from artefact reduction, segmentation, visualization, analysis, therapy planning and image fusion up to multimedia archiving. In particular, this includes virtual endoscopy with three different scene viewers (Chap. 6), visualizations of the lung disease bronchiectasis (Chap. 7), surface structure analysis of pulmonary tumors (Chap. 8), quantification of contrast medium dynamics from temporal 2D and 3D image sequences (Chap. 9) as well as multimodality image fusion of arbitrary tomographical data using several visualization techniques (Chap. 12). Thus, the softw...

  15. A 3D image analysis tool for SPECT imaging

    Science.gov (United States)

    Kontos, Despina; Wang, Qiang; Megalooikonomou, Vasileios; Maurer, Alan H.; Knight, Linda C.; Kantor, Steve; Fisher, Robert S.; Simonian, Hrair P.; Parkman, Henry P.

    2005-04-01

    We have developed semi-automated and fully-automated tools for the analysis of 3D single-photon emission computed tomography (SPECT) images. The focus is on the efficient boundary delineation of complex 3D structures that enables accurate measurement of their structural and physiologic properties. We employ intensity-based thresholding algorithms for interactive and semi-automated analysis. We also explore fuzzy-connectedness concepts for fully automating the segmentation process. We apply the proposed tools to SPECT image data capturing variation of gastric accommodation and emptying. These image analysis tools were developed within the framework of a noninvasive scintigraphic test to measure simultaneously both gastric emptying and gastric volume after ingestion of a solid or a liquid meal. The clinical focus of the particular analysis was to probe associations between gastric accommodation/emptying and functional dyspepsia. Employing the proposed tools, we effectively outline the complex three-dimensional gastric boundaries shown in the 3D SPECT images. We also perform accurate volume calculations in order to quantitatively assess the gastric mass variation. This analysis was performed both with the semi-automated and fully-automated tools. The results were validated against manual segmentation performed by a human expert. We believe that the development of an automated segmentation tool for SPECT imaging of the gastric volume variability will allow for other new applications of SPECT imaging where there is a need to evaluate complex organ function or tumor masses.
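    As a stand-in for the interactive thresholding tools described above (the fuzzy-connectedness variant is not attempted), the snippet below segments a toy SPECT-like volume by a fraction-of-maximum threshold and converts the voxel count to a volume. The voxel volume of 0.064 mL corresponds to an assumed 4 mm isotropic grid.

```python
import numpy as np

def segment_and_measure(volume, threshold_fraction=0.4, voxel_volume_ml=0.064):
    """Intensity-threshold segmentation of a SPECT volume and volume measurement:
    voxels above a fraction of the maximum count are kept, and the structure
    volume is the voxel count times the voxel volume."""
    vol = np.asarray(volume, float)
    mask = vol >= threshold_fraction * vol.max()
    return mask, mask.sum() * voxel_volume_ml

# Toy "stomach": bright ellipsoid in a noisy background volume
rng = np.random.default_rng(5)
zz, yy, xx = np.ogrid[:64, :64, :64]
vol = rng.normal(5, 1, (64, 64, 64))
vol[((zz - 32) / 20) ** 2 + ((yy - 32) / 12) ** 2 + ((xx - 32) / 10) ** 2 <= 1] += 50
mask, ml = segment_and_measure(vol, threshold_fraction=0.4, voxel_volume_ml=0.064)
print(round(ml, 1), "mL")
```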

  16. 3-D Reconstruction From Satellite Images

    DEFF Research Database (Denmark)

    Denver, Troelz

    1999-01-01

    ...of planetary surfaces, but other purposes are considered as well. The system performance is measured with respect to precision and time consumption. The reconstruction process is divided into four major areas: acquisition, calibration, matching/reconstruction and presentation. Each of these areas..., where various methods have been tested in order to optimize the performance. The match results are used in the reconstruction part to establish a 3-D digital representation and finally, different presentation forms are discussed....

  17. Light field display and 3D image reconstruction

    Science.gov (United States)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular these days. With light field optics, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after taking a picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is then characterized by a few features: the reconstructed image is of finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate to the flat display on which the light field data is displayed.
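
    The "refocusing" operation mentioned above is commonly implemented as a shift-and-add over sub-aperture images extracted from the light field. The sketch below is a generic NumPy illustration of that idea, not the author's real-domain algorithm; the (U, V, H, W) array layout and the slope parameter are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(subaperture, slope):
    """Shift-and-add synthetic refocusing.
    subaperture: array of shape (U, V, H, W) holding the sub-aperture images.
    slope: refocus parameter (pixels of shift per unit lens offset)."""
    U, V, H, W = subaperture.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros((H, W), dtype=np.float64)
    for u in range(U):
        for v in range(V):
            # shift each view in proportion to its offset from the central lens
            dy, dx = slope * (u - cu), slope * (v - cv)
            acc += nd_shift(subaperture[u, v].astype(np.float64),
                            (dy, dx), order=1, mode='nearest')
    return acc / (U * V)
```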

  18. Dynamic contrast-enhanced 3D photoacoustic imaging

    Science.gov (United States)

    Wong, Philip; Kosik, Ivan; Carson, Jeffrey J. L.

    2013-03-01

    Photoacoustic imaging (PAI) is a hybrid imaging modality that integrates the strengths from both optical imaging and acoustic imaging while simultaneously overcoming many of their respective weaknesses. In previous work, we reported on a real-time 3D PAI system comprised of a 32-element hemispherical array of transducers. Using the system, we demonstrated the ability to capture photoacoustic data, reconstruct a 3D photoacoustic image, and display select slices of the 3D image every 1.4 s, where each 3D image resulted from a single laser pulse. The present study aimed to exploit the rapid imaging speed of an upgraded 3D PAI system by evaluating its ability to perform dynamic contrast-enhanced imaging. The contrast dynamics can provide rich datasets that contain insight into perfusion, pharmacokinetics and physiology. We captured a series of 3D PA images of a flow phantom before and during injection of piglet and rabbit blood. Principal component analysis was utilized to classify the data according to its spatiotemporal information. The results suggested that this technique can be used to separate a sequence of 3D PA images into a series of images representative of main features according to spatiotemporal flow dynamics.
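
    A minimal sketch of the principal component analysis step described above, assuming the image sequence is already reconstructed as a (time, x, y, z) array; this illustrates temporal PCA via the SVD and is not the authors' exact processing chain.

```python
import numpy as np

def temporal_pca(frames, n_components=3):
    """frames: (T, X, Y, Z) time series of reconstructed 3D photoacoustic images.
    Returns spatial component maps and their temporal weight curves."""
    T = frames.shape[0]
    X = frames.reshape(T, -1).astype(np.float64)
    X -= X.mean(axis=0, keepdims=True)                 # remove the static background
    U, S, Vt = np.linalg.svd(X, full_matrices=False)   # rows = time points, cols = voxels
    maps = Vt[:n_components].reshape((n_components,) + frames.shape[1:])
    weights = U[:, :n_components] * S[:n_components]   # temporal dynamics of each component
    return maps, weights
```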

  19. Full Parallax Integral 3D Display and Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Byung-Gook Lee

    2015-02-01

    Purpose – Full parallax integral 3D display is one of the promising future displays that provide different perspectives according to viewing direction. In this paper, the authors review the recent integral 3D display and image processing techniques for improving the performance, such as viewing resolution, viewing angle, etc. Design/methodology/approach – Firstly, to improve the viewing resolution of 3D images in the integral imaging display with lenslet array, the authors present 3D integral imaging display with focused mode using the time-multiplexed display. Compared with the original integral imaging with focused mode, the authors use the electrical masks and the corresponding elemental image set. In this system, the authors can generate the resolution-improved 3D images with the n×n pixels from each lenslet by using n×n time-multiplexed display. Secondly, a new image processing technique related to the elemental image generation for 3D scenes is presented. With the information provided by the Kinect device, the array of elemental images for an integral imaging display is generated. Findings – From their first work, the authors improved the resolution of 3D images by using the time-multiplexing technique through the demonstration of the 24 inch integral imaging system. Authors’ method can be applied to a practical application. Next, the proposed method with the Kinect device can gain a competitive advantage over other methods for the capture of integral images of big 3D scenes. The main advantage of fusing the Kinect and the integral imaging concepts is the acquisition speed, and the small amount of handled data. Originality / Value – In this paper, the authors review their recent methods related to integral 3D display and image processing technique. Research type – general review.

  20. 3D Imaging with Structured Illumination for Advanced Security Applications

    Energy Technology Data Exchange (ETDEWEB)

    Birch, Gabriel Carisle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dagel, Amber Lynn [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kast, Brian A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Collin S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fails to provide target distance and three-dimensional motion vectors, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and possible software modifications to maximize information-gathering capability are discussed.

  1. 3D augmented reality with integral imaging display

    Science.gov (United States)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  2. Calibration of Images with 3D range scanner data

    OpenAIRE

    Adalid López, Víctor Javier

    2009-01-01

    Project carried out in collaboration with EPFL. 3D laser range scanners are used to extract the 3D data of a scene. The main application areas are architecture, archeology and city planning. Though the raw scanner data has only gray scale values, the 3D data can be merged with colour camera image values to obtain a textured 3D model of the scene. These devices are also able to make a reliable 3D copy of objects with a high level of accuracy. Therefore, the scanned scenes can be use...

  3. De la manipulation des images 3D

    Directory of Open Access Journals (Sweden)

    Geneviève Pinçon

    2012-04-01

    While 3D technologies provide an accurate and relevant recording of rock art, they also offer several interesting applications for its analysis. Through point cloud processing and simulations, they permit a wide range of manipulations concerning both the observation and the study of parietal works. In particular, they allow a refined perception of their volumetry and become efficient tools for comparing forms, which is very useful in reconstructing parietal chronologies and in identifying analogies between sites. These analytical tools are illustrated by the original work done on the parietal sculptures of the Roc-aux-Sorciers (Angles-sur-l’Anglin, Vienne) and Chaire-à-Calvin (Mouthiers-sur-Boëme, Charente) rock-shelters.

  4. 3D Interpolation Method for CT Images of the Lung

    Directory of Open Access Journals (Sweden)

    Noriaki Asada

    2003-06-01

    A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross-section images, which are collected during pulsation of the heart. Thus the motion of the heart is a special factor that must be taken into consideration during reconstruction. The lung, as an elastic body, exhibits a repeating transformation synchronized to the beating of the heart. There are discontinuities among neighboring CT images due to the beating of the heart if no special techniques are used in taking the CT images. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung are captured. Although the outline shape of the reconstructed 3-D heart is quite unnatural, the envelope of this unnatural 3-D heart is fitted to the shape of the standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best-fitting standard heart are located at the same positions as the CT images. Thus the CT images are geometrically transformed into the optimal CT images fitting best to the standard heart. Since correct transformation of images is required, an area-oriented interpolation method proposed by us is used for interpolation of the transformed images. An attempt to reconstruct a 3-D lung image without discontinuity by a series of such operations is shown. Additionally, applying the same geometrical transformation to the original projection images is proposed as a more advanced method.

  5. Visualizing Vertebrate Embryos with Episcopic 3D Imaging Techniques

    Directory of Open Access Journals (Sweden)

    Stefan H. Geyer

    2009-01-01

    The creation of highly detailed, three-dimensional (3D computer models is essential in order to understand the evolution and development of vertebrate embryos, and the pathogenesis of hereditary diseases. A still-increasing number of methods allow for generating digital volume data sets as the basis of virtual 3D computer models. This work aims to provide a brief overview about modern volume data–generation techniques, focusing on episcopic 3D imaging methods. The technical principles, advantages, and problems of episcopic 3D imaging are described. The strengths and weaknesses in its ability to visualize embryo anatomy and labeled gene product patterns, specifically, are discussed.

  6. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    OpenAIRE

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Ferin, Guillaume; Dufait, Remi; Jensen, Jørgen Arendt

    2012-01-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32x32 element prototype transducer. The transducer mimicked is a dense matrix phased array with a pitch of 300 μm, made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation directions and 150 mm in depth. ...

  7. Highway 3D model from image and lidar data

    Science.gov (United States)

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3-D model construction based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant roadside objects (such as signs and building fronts) for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  8. Prospective comparison of T2w-MRI and dynamic-contrast-enhanced MRI, 3D-MR spectroscopic imaging or diffusion-weighted MRI in repeat TRUS-guided biopsies

    International Nuclear Information System (INIS)

    To compare T2-weighted MRI and functional MRI techniques in guiding repeat prostate biopsies. Sixty-eight patients with a history of negative biopsies, negative digital rectal examination and elevated PSA were imaged before repeat biopsies. Dichotomous criteria were used with visual validation of T2-weighted MRI, dynamic contrast-enhanced MRI and literature-derived cut-offs for 3D-spectroscopy MRI (choline-creatine-to-citrate ratio >0.86) and diffusion-weighted imaging (ADC × 10³ mm²/s < 1.24). For each segment and MRI technique, results were rendered as being suspicious/non-suspicious for malignancy. Sextant biopsies, transition zone biopsies and at least two additional biopsies of suspicious areas were taken. In the peripheral zones, 105/408 segments and in the transition zones 19/136 segments were suspicious according to at least one MRI technique. A total of 28/68 (41.2%) patients were found to have cancer. Diffusion-weighted imaging exhibited the highest positive predictive value (0.52) compared with T2-weighted MRI (0.29), dynamic contrast-enhanced MRI (0.33) and 3D-spectroscopy MRI (0.25). Logistic regression showed the probability of cancer in a segment increasing 12-fold when T2-weighted and diffusion-weighted imaging MRI were both suspicious (63.4%) compared with both being non-suspicious (5.2%). The proposed system of analysis and reporting could prove clinically relevant in the decision whether to repeat targeted biopsies. (orig.)

  9. Prospective comparison of T2w-MRI and dynamic-contrast-enhanced MRI, 3D-MR spectroscopic imaging or diffusion-weighted MRI in repeat TRUS-guided biopsies

    Energy Technology Data Exchange (ETDEWEB)

    Portalez, Daniel [Clinique Pasteur, 45, Department of Radiology, Toulouse (France); Rollin, Gautier; Mouly, Patrick; Jonca, Frederic; Malavaud, Bernard [Hopital de Rangueil, Department of Urology, Toulouse Cedex 9 (France); Leandri, Pierre [Clinique Saint Jean, 20, Department of Urology, Toulouse (France); Elman, Benjamin [Clinique Pasteur, 45, Department of Urology, Toulouse (France)

    2010-12-15

    To compare T2-weighted MRI and functional MRI techniques in guiding repeat prostate biopsies. Sixty-eight patients with a history of negative biopsies, negative digital rectal examination and elevated PSA were imaged before repeat biopsies. Dichotomous criteria were used with visual validation of T2-weighted MRI, dynamic contrast-enhanced MRI and literature-derived cut-offs for 3D-spectroscopy MRI (choline-creatine-to-citrate ratio >0.86) and diffusion-weighted imaging (ADC × 10³ mm²/s < 1.24). For each segment and MRI technique, results were rendered as being suspicious/non-suspicious for malignancy. Sextant biopsies, transition zone biopsies and at least two additional biopsies of suspicious areas were taken. In the peripheral zones, 105/408 segments and in the transition zones 19/136 segments were suspicious according to at least one MRI technique. A total of 28/68 (41.2%) patients were found to have cancer. Diffusion-weighted imaging exhibited the highest positive predictive value (0.52) compared with T2-weighted MRI (0.29), dynamic contrast-enhanced MRI (0.33) and 3D-spectroscopy MRI (0.25). Logistic regression showed the probability of cancer in a segment increasing 12-fold when T2-weighted and diffusion-weighted imaging MRI were both suspicious (63.4%) compared with both being non-suspicious (5.2%). The proposed system of analysis and reporting could prove clinically relevant in the decision whether to repeat targeted biopsies. (orig.)

  10. Diffractive optical element for creating visual 3D images.

    Science.gov (United States)

    Goncharsky, Alexander; Goncharsky, Anton; Durlevich, Svyatoslav

    2016-05-01

    A method is proposed to compute and synthesize the microrelief of a diffractive optical element to produce a new visual security feature - the vertical 3D/3D switch effect. The security feature consists in the alternation of two 3D color images when the diffractive element is tilted up/down. Optical security elements that produce the new security feature are synthesized using electron-beam technology. Sample optical security elements are manufactured that produce 3D to 3D visual switch effect when illuminated by white light. Photos and video records of the vertical 3D/3D switch effect of real optical elements are presented. The optical elements developed can be replicated using standard equipment employed for manufacturing security holograms. The new optical security feature is easy to control visually, safely protected against counterfeit, and designed to protect banknotes, documents, ID cards, etc. PMID:27137530

  11. 3-D capacitance density imaging system

    Science.gov (United States)

    Fasching, G.E.

    1988-03-18

    A three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns generated in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved. 7 figs.

  12. 3D-LSI technology for image sensor

    International Nuclear Information System (INIS)

    Recently, the development of three-dimensional large-scale integration (3D-LSI) technologies has accelerated and has advanced from the research level or the limited production level to the investigation level, which might lead to mass production. By separating 3D-LSI technology into elementary technologies such as (1) through silicon via (TSV) formation, (2) bump formation, (3) wafer thinning, (4) chip/wafer alignment, and (5) chip/wafer stacking and reconstructing the entire process and structure, many methods to realize 3D-LSI devices can be developed. However, by considering a specific application, the supply chain of base wafers, and the purpose of 3D integration, a few suitable combinations can be identified. In this paper, we focus on the application of 3D-LSI technologies to image sensors. We describe the process and structure of the chip size package (CSP), developed on the basis of current and advanced 3D-LSI technologies, to be used in CMOS image sensors. Using the current LSI technologies, CSPs for 1.3 M, 2 M, and 5 M pixel CMOS image sensors were successfully fabricated without any performance degradation. 3D-LSI devices can be potentially employed in high-performance focal-plane-array image sensors. We propose a high-speed image sensor with an optical fill factor of 100% to be developed using next-generation 3D-LSI technology and fabricated using micro(μ)-bumps and micro(μ)-TSVs.

  13. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    Energy Technology Data Exchange (ETDEWEB)

    Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van [Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Punt, Mark; Aben, Jean-Paul [Pie Medical Imaging, 6227 AJ Maastricht (Netherlands); Schultz, Carl [Department of Cardiology, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Niessen, Wiro [Quantitative Imaging Group, Faculty of Applied Sciences, Delft University of Technology, 2628 CJ Delft, The Netherlands and Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands)

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
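
    The core of the registration step described above, rigid alignment of two vascular point sets under a Gaussian-mixture likelihood, can be sketched as follows. This simplified version uses isotropic kernels and a generic optimizer, omits the orientation terms and bifurcation weighting of the authors' extended approach, and assumes the point sets are already roughly pre-aligned (e.g. by the temporal/ECG step); the kernel width sigma is an assumed parameter.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def register_gmm(moving, fixed, sigma=2.0):
    """Rigidly align 'moving' centreline points (N x 3) to 'fixed' points (M x 3)
    by maximising a Gaussian-mixture likelihood with isotropic kernels of width sigma."""
    def neg_log_likelihood(params):
        R = Rotation.from_rotvec(params[:3]).as_matrix()
        t = params[3:]
        pts = moving @ R.T + t
        # squared distances between every transformed moving point and every fixed point
        d2 = ((pts[:, None, :] - fixed[None, :, :]) ** 2).sum(axis=-1)
        mix = np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1) + 1e-12
        return -np.log(mix).sum()

    res = minimize(neg_log_likelihood, np.zeros(6), method='Powell')
    return Rotation.from_rotvec(res.x[:3]).as_matrix(), res.x[3:]
```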

  14. Acoustic 3D imaging of dental structures

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, D.K. [Lawrence Livermore National Lab., CA (United States); Hume, W.R. [California Univ., Los Angeles, CA (United States); Douglass, G.D. [California Univ., San Francisco, CA (United States)

    1997-02-01

    Our goals for the first year of this three-dimensional electrodynamic imaging project were to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  15. Reconstruction of High Resolution 3D Objects from Incomplete Images and 3D Information

    Directory of Open Access Journals (Sweden)

    Alexander Pacheco

    2014-05-01

    To this day, digital object reconstruction is a quite complex area that requires many techniques and novel approaches, in which high-resolution 3D objects present one of the biggest challenges. There are mainly two different methods that can be used to reconstruct high-resolution objects and images: passive methods and active methods. These methods depend on the type of information available as input for modeling 3D objects. Passive methods use the information contained in the images, while active methods make use of controlled light sources, such as lasers. The reconstruction of 3D objects is quite complex and there is no unique solution; the use of specific methodologies for the reconstruction of certain kinds of objects, such as human faces or molecular structures, is also very common. This paper proposes a novel hybrid methodology, composed of 10 phases that combine active and passive methods, using images and a laser in order to supplement the missing information and obtain better results in 3D object reconstruction. Finally, the proposed methodology proved its efficiency on two topologically complex objects.

  16. A 3D Model Reconstruction Method Using Slice Images

    Institute of Scientific and Technical Information of China (English)

    LI Hong-an; KANG Bao-sheng

    2013-01-01

    Aiming at achieving a high-accuracy 3D model from slice images, a new model reconstruction method using slice images is proposed. To extract the outermost contours from the slice images, an improved GVF-Snake model with an optimized force field, combined with a ray method, is employed. The 3D model is then reconstructed by contour connection using an improved shortest-diagonal method and a judgment function for contour fracture. The results show that the accuracy of the reconstructed 3D model is improved.
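
    A generic sketch of the shortest-diagonal contour-connection idea mentioned above (omitting the contour-fracture judgment function): given two ordered closed contours from adjacent slices, triangles are built by repeatedly advancing along whichever diagonal is shorter.

```python
import numpy as np

def stitch_contours(A, B):
    """Triangulate the band between two ordered closed contours A (n x 3) and
    B (m x 3) from adjacent slices, always advancing along the shorter diagonal.
    Returns a list of triangles as ((contour, index), ...) tuples."""
    n, m = len(A), len(B)
    i = j = 0
    triangles = []
    while i < n or j < m:
        can_a, can_b = i < n, j < m
        if can_a and can_b:
            # compare the two candidate diagonals
            diag_a = np.linalg.norm(A[(i + 1) % n] - B[j % m])  # advance on A
            diag_b = np.linalg.norm(A[i % n] - B[(j + 1) % m])  # advance on B
            advance_a = diag_a <= diag_b
        else:
            advance_a = can_a
        if advance_a:
            triangles.append((('A', i % n), ('B', j % m), ('A', (i + 1) % n)))
            i += 1
        else:
            triangles.append((('A', i % n), ('B', j % m), ('B', (j + 1) % m)))
            j += 1
    return triangles
```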

  17. 3D Motion Parameters Determination Based on Binocular Sequence Images

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Accurately capturing three-dimensional (3D) motion information of an object is an essential and important task in computer vision, and is also one of the most difficult problems. In this paper, a binocular vision system and a method for determining the 3D motion parameters of an object from binocular image sequences are introduced. The main steps include camera calibration, the matching of motion and stereo images, 3D feature point correspondence, and resolving the motion parameters. Finally, experimental results obtained with the described method on real binocular image sequences are presented for objects moving in a straight line with uniform velocity and with uniform acceleration.

  18. 3D Shape Indexing and Retrieval Using Characteristics level images

    Directory of Open Access Journals (Sweden)

    Abdelghni Lakehal

    2012-05-01

    In this paper, we propose an improved version of the descriptor that we proposed previously. The descriptor is based on a set of binary images extracted from the 3D model, called level images and denoted LI. Since the set LI is often bulky, we introduce the X-means technique to reduce its size, instead of the K-means used in the old version. A 2D binary image descriptor is used to extract the descriptor vectors of the 3D model. For a comparative study of the two versions of the descriptor, we used the National Taiwan University (NTU) database of 3D objects.

  19. Morphometrics, 3D Imaging, and Craniofacial Development.

    Science.gov (United States)

    Hallgrimsson, Benedikt; Percival, Christopher J; Green, Rebecca; Young, Nathan M; Mio, Washington; Marcucio, Ralph

    2015-01-01

    Recent studies have shown how volumetric imaging and morphometrics can add significantly to our understanding of morphogenesis, the developmental basis for variation, and the etiology of structural birth defects. On the other hand, the complex questions and diverse imaging data in developmental biology present morphometrics with more complex challenges than applications in virtually any other field. Meeting these challenges is necessary in order to understand the mechanistic basis for variation in complex morphologies. This chapter reviews the methods and theory that enable the application of modern landmark-based morphometrics to developmental biology and craniofacial development, in particular. We discuss the theoretical foundations of morphometrics as applied to development and review the basic approaches to the quantification of morphology. Focusing on geometric morphometrics, we discuss the principal statistical methods for quantifying and comparing morphological variation and covariation structure within and among groups. Finally, we discuss the future directions for morphometrics in developmental biology that will be required for approaches that enable quantitative integration across the genotype-phenotype map. PMID:26589938

  20. Robot-assisted 3D-TRUS guided prostate brachytherapy: System integration and validation

    International Nuclear Information System (INIS)

    Current transperineal prostate brachytherapy uses transrectal ultrasound (TRUS) guidance and a template at a fixed position to guide needles along parallel trajectories. However, pubic arch interference (PAI) with the implant path obstructs part of the prostate from being targeted by the brachytherapy needles along parallel trajectories. To solve the PAI problem, some investigators have explored insertion trajectories other than parallel, i.e., oblique. However, parallel trajectory constraints in the current brachytherapy procedure do not allow oblique insertion. In this paper, we describe a robot-assisted, three-dimensional (3D) TRUS guided approach to solve this problem. Our prototype consists of a commercial robot, and a 3D TRUS imaging system including an ultrasound machine, image acquisition apparatus and 3D TRUS image reconstruction, and display software. In our approach, we use the robot as a movable needle guide, i.e., the robot positions the needle before insertion, but the physician inserts the needle into the patient's prostate. In a later phase of our work, we will include robot insertion. By unifying the robot, ultrasound transducer, and 3D TRUS image coordinate systems, the position of the template hole can be accurately related to the 3D TRUS image coordinate system, allowing accurate and consistent insertion of the needle via the template hole into the targeted position in the prostate. The unification of the various coordinate systems includes two steps, i.e., 3D image calibration and robot calibration. Our testing of the system showed that the needle placement accuracy of the robot system at the 'patient's' skin position was 0.15 mm ± 0.06 mm, and the mean needle angulation error was 0.07 deg. The fiducial localization error (FLE) in localizing the intersections of the nylon strings for image calibration was 0.13 mm, and the FLE in localizing the divots for robot calibration was 0.37 mm. The fiducial registration error for image calibration was 0
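
    Unifying the robot, transducer and 3D TRUS image coordinate systems as described above is, at its core, a paired-point rigid registration problem. The sketch below shows a generic SVD-based (Kabsch) least-squares solution with a fiducial registration error readout; it illustrates the principle and is not the authors' calibration software.

```python
import numpy as np

def rigid_landmark_registration(P, Q):
    """Least-squares rigid transform (R, t) mapping landmark set P onto Q
    (both N x 3 arrays with corresponding rows), via the SVD/Kabsch method.
    Also returns the fiducial registration error (FRE, RMS of residuals)."""
    Pc, Qc = P.mean(axis=0), Q.mean(axis=0)
    H = (P - Pc).T @ (Q - Qc)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = Qc - R @ Pc
    residuals = Q - (P @ R.T + t)
    fre = float(np.sqrt((residuals ** 2).sum(axis=1).mean()))
    return R, t, fre
```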

  1. The Essential Guide to 3D in Flash

    CERN Document Server

    Olsson, Ronald A

    2010-01-01

    If you are an ActionScript developer or designer and you would like to work with 3D in Flash, this book is for you. You will learn the core Flash 3D concepts, using the open source Away3D engine as a primary tool. Once you have mastered these skills, you will be able to realize the possibilities that the available Flash 3D engines, languages, and technologies have to offer you with Flash and 3D.* Describes 3D concepts in theory and their implementation using Away3D* Dives right in to show readers how to quickly create an interactive, animated 3D scene, and builds on that experience throughout

  2. 3D-guided CT reconstruction using time-of-flight camera

    Science.gov (United States)

    Ismail, Mahmoud; Taguchi, Katsuyuki; Xu, Jingyan; Tsui, Benjamin M. W.; Boctor, Emad M.

    2011-03-01

    We propose the use of a time-of-flight (TOF) camera to obtain the patient's body contour in a 3D-guided image reconstruction scheme for CT and C-arm imaging systems with truncated projections. In addition to pixel intensity, a TOF camera provides the 3D coordinates of each point in the captured scene with respect to the camera coordinates. Information from the TOF camera was used to obtain a digitized surface of the patient's body. The digitization points are transformed to X-ray detector coordinates by registering the two coordinate systems. A set of points corresponding to the slice of interest are segmented to form a 2D contour of the body surface. The Radon transform is applied to the contour to generate the 'trust region' for the projection data. The generated 'trust region' is integrated as an input to augment the projection data, and is used to estimate the truncated, unmeasured projections using linear interpolation. Finally the image is reconstructed using the combination of the estimated and the measured projection data. The proposed method is evaluated using a physical phantom. Projection data for the phantom were obtained using a C-arm system. Significant improvement in the reconstructed image quality near the truncation edges was observed using the proposed method as compared to that without truncation correction. This work shows that the proposed 3D guided CT image reconstruction using a TOF camera represents a feasible solution to the projection data truncation problem.
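
    The 'trust region' idea above can be illustrated with a small sketch: for each view, the forward projection of the body contour (e.g. via skimage.transform.radon) indicates how far the body extends on the detector, and the unmeasured, truncated bins inside that support are filled by a linear ramp from the last measured value down to zero. The (views x detector bins) mask layout and the ramp-to-zero rule are simplifying assumptions for the example, not the authors' exact scheme.

```python
import numpy as np

def complete_truncated_projections(sino, measured_mask, support_mask):
    """sino: (n_views, n_bins) truncated sinogram.
    measured_mask: True where the detector actually measured data.
    support_mask: True where the body contour projects onto the detector.
    Unmeasured bins inside the support are filled by a linear ramp to zero."""
    out = sino.copy()
    for v in range(sino.shape[0]):
        meas = np.where(measured_mask[v])[0]
        supp = np.where(support_mask[v])[0]
        if meas.size == 0 or supp.size == 0:
            continue
        m_lo, m_hi = meas[0], meas[-1]
        s_lo, s_hi = supp[0], supp[-1]
        if s_lo < m_lo:                       # truncated region on the left
            bins = np.arange(s_lo, m_lo)
            out[v, bins] = np.interp(bins, [s_lo, m_lo], [0.0, sino[v, m_lo]])
        if s_hi > m_hi:                       # truncated region on the right
            bins = np.arange(m_hi + 1, s_hi + 1)
            out[v, bins] = np.interp(bins, [m_hi, s_hi], [sino[v, m_hi], 0.0])
        # bins outside the body support are left untouched
    return out
```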

  3. Data Processing for 3D Mass Spectrometry Imaging

    Science.gov (United States)

    Xiong, Xingchuang; Xu, Wei; Eberlin, Livia S.; Wiseman, Justin M.; Fang, Xiang; Jiang, You; Huang, Zejian; Zhang, Yukui; Cooks, R. Graham; Ouyang, Zheng

    2012-06-01

    Data processing for three dimensional mass spectrometry (3D-MS) imaging was investigated, starting with a consideration of the challenges in its practical implementation using a series of sections of a tissue volume. The technical issues related to data reduction, 2D imaging data alignment, 3D visualization, and statistical data analysis were identified. Software solutions for these tasks were developed using functions in MATLAB. Peak detection and peak alignment were applied to reduce the data size, while retaining the mass accuracy. The main morphologic features of tissue sections were extracted using a classification method for data alignment. Data insertion was performed to construct a 3D data set with spectral information that can be used for generating 3D views and for data analysis. The imaging data previously obtained for a mouse brain using desorption electrospray ionization mass spectrometry (DESI-MS) imaging have been used to test and demonstrate the new methodology.
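
    The data-reduction step described above (peak detection followed by alignment onto a common m/z axis, so that 2D sections can be stacked into a 3D data set) might look like the following sketch. It uses SciPy/NumPy rather than the MATLAB functions mentioned by the authors, and the bin width and prominence threshold are assumed parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def reduce_spectrum(mz, intensity, mz_min, mz_max, bin_width=0.1, prominence=None):
    """Detect peaks in one spectrum and re-bin them onto a common m/z axis so
    that spectra from different pixels and tissue sections can be aligned and stacked."""
    idx, _ = find_peaks(intensity, prominence=prominence)        # local maxima
    edges = np.arange(mz_min, mz_max + bin_width, bin_width)     # axis shared by the whole data set
    binned, _ = np.histogram(mz[idx], bins=edges, weights=intensity[idx])
    return edges[:-1], binned
```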

  4. Image based 3D city modeling : Comparative study

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural grammar-based modeling, close-range photogrammetry-based modeling, and modeling based mainly on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively, and they offer different methods suitable for image-based 3D city modeling. A literature study shows that, to date, no comprehensive comparative study is available on creating complete 3D city models from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and working experiences, gives a brief introduction to the four image-based techniques with their strengths and weaknesses, and offers some personal comments on what can and cannot be done with each software package. The study concludes that every package has advantages and limitations, and the choice of software depends on the requirements of the 3D project. For a normal visualization project, SketchUp is a good option; for 3D documentation records, Photomodeler gives good results. For large city...

  5. 3D Image Modelling and Specific Treatments in Orthodontics Domain

    Directory of Open Access Journals (Sweden)

    Dionysis Goularas

    2007-01-01

    In this article, we present a 3D dental plaster treatment system specific to orthodontics. From computed tomography scanner images, we first propose a 3D image modelling and reconstruction method for the mandible and maxilla, based on an adaptive triangulation that can handle contours with complex topologies. Secondly, we present two specific treatments performed directly on the obtained 3D model: automatic correction of the occlusion between the mandible and the maxilla, and tooth segmentation allowing more specific dental examinations. Finally, these treatments are delivered via a client/server application with the aim of enabling telediagnosis and treatment.

  6. Imaging fault zones using 3D seismic image processing techniques

    Science.gov (United States)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural and significant uncertainty still exists as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors where concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve the signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we will show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating the short-range anomalies in the intensity of reflector amplitudes

  7. Optical 3D watermark based digital image watermarking for telemedicine

    Science.gov (United States)

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    The region of interest (ROI) of a medical image is an area containing important diagnostic information and must be stored without any distortion. We propose an algorithm that applies a watermarking technique to the non-ROI of the medical image while preserving the ROI. The paper presents a 3D-watermark-based medical image watermarking scheme. A 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and then the 2D elemental image array data are embedded into the host image. The watermark extraction process is the inverse of embedding. From the extracted EIA, the 3D watermark can be reconstructed through the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data are badly damaged. Furthermore, using CAT with various rule-number parameters, it is possible to obtain many channels for embedding, so our method overcomes the weak point of having only one transform plane in traditional watermarking methods. The effectiveness of the proposed watermarking scheme is demonstrated with experimental results.

  8. Fully Automatic 3D Reconstruction of Histological Images

    CERN Document Server

    Bagci, Ulas

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.
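
    The reference-slice idea above can be illustrated with a simple entropy criterion; the authors additionally iterate this together with the registration mean-square error, which is omitted here. The histogram bin count is an assumed parameter.

```python
import numpy as np

def slice_entropy(img, bins=256):
    """Shannon entropy of a slice's grey-level histogram (bits)."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0].astype(np.float64)
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())

def pick_reference_slice(stack):
    """stack: (N, H, W) array of intensity-standardized histological slices.
    Return the index of the most information-rich slice as a candidate reference."""
    return int(np.argmax([slice_entropy(s) for s in stack]))
```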

  9. A 3D surface imaging system for assessing human obesity

    Science.gov (United States)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.
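
    As a generic illustration of the stereo-vision step (not the customized matching and surface-reconstruction algorithms developed by the authors), the snippet below computes a block-matching disparity map with OpenCV and converts it to depth; the image file names, focal length and baseline are placeholder assumptions.

```python
import cv2
import numpy as np

# Placeholder inputs: rectified left/right views of the subject
left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)
right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

# Classic block matching; numDisparities must be a multiple of 16
stereo = cv2.StereoBM_create(numDisparities=96, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

focal_px, baseline_m = 1200.0, 0.12          # assumed camera parameters
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]   # depth = f * B / d
```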

  10. 3D Medical Image Segmentation Based on Rough Set Theory

    Institute of Scientific and Technical Information of China (English)

    CHEN Shi-hao; TIAN Yun; WANG Yi; HAO Chong-yang

    2007-01-01

    This paper presents a method which uses multiple types of expert knowledge together in 3D medical image segmentation based on rough set theory. The focus of the paper is how to approximate a ROI (region of interest) when there are multiple types of expert knowledge. Based on rough set theory, the image can be split into three regions: positive regions, negative regions, and boundary regions. With multiple types of knowledge, we refine the ROI as the intersection of all the shapes expected under each single type of knowledge. Finally, we show the results of implementing a rough 3D image segmentation and visualization system.
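
    When each knowledge source is available as a binary mask, the three rough-set regions described above reduce to simple set operations, as in this sketch (an illustration of the idea, not the authors' implementation):

```python
import numpy as np

def rough_regions(expert_masks):
    """expert_masks: boolean array (K, X, Y, Z), one ROI per expert / knowledge source.
    Lower approximation        -> positive region (all sources agree it is object)
    Complement of the upper    -> negative region (no source marks it as object)
    Upper minus lower          -> boundary region (sources disagree)."""
    masks = np.asarray(expert_masks, dtype=bool)
    lower = masks.all(axis=0)          # positive region
    upper = masks.any(axis=0)
    negative = ~upper
    boundary = upper & ~lower
    return lower, negative, boundary
```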

  11. A near field 3D radar imaging technique

    OpenAIRE

    Broquetas Ibars, Antoni

    1993-01-01

    The paper presents an algorithm which recovers a 3D reflectivity image of a target from near-field scattering measurements. Spherical wave nearfield illumination is used, in order to avoid a costly compact range installation to produce a plane wave illumination. The system is described and some simulated 3D reconstructions are included. The paper also presents a first experimental validation of this technique.

  12. A Texture Analysis of 3D Radar Images

    NARCIS (Netherlands)

    Deiana, D.; Yarovoy, A.

    2009-01-01

    In this paper a texture feature coding method to be applied to high-resolution 3D radar images in order to improve target detection is developed. An automatic method for image segmentation based on texture features is proposed. The method has been able to automatically detect weak targets which fail

  13. Hybrid segmentation framework for 3D medical image analysis

    Science.gov (United States)

    Chen, Ting; Metaxas, Dimitri N.

    2003-05-01

    Medical image segmentation is the process that defines the region of interest in the image volume. Classical segmentation methods, such as region-based and boundary-based methods, cannot make full use of the information provided by the image. In this paper we propose a general hybrid framework for 3D medical image segmentation that combines the Gibbs prior model and the deformable model. First, Gibbs prior models are applied to each slice in a 3D medical image volume and the segmentation results are combined into a 3D binary mask of the object. We then create a deformable mesh based on this 3D binary mask. The deformable model is driven to the edge features in the volume with the help of image-derived external forces, and its segmentation result can in turn be used to update the parameters of the Gibbs prior models. The two methods then work recursively to reach a global segmentation solution. The hybrid segmentation framework has been applied to images of the lung, heart, colon, jaw, tumors, and brain. The experimental data include MRI (T1, T2, PD), CT, X-ray, and ultrasound images. High-quality results are achieved at relatively low computational cost. We also performed validation using expert manual segmentation as the ground truth; the results show that the hybrid segmentation may have further clinical use.

  14. Development of a 3D ultrasound-guided prostate biopsy system

    Science.gov (United States)

    Cool, Derek; Sherebrin, Shi; Izawa, Jonathan; Fenster, Aaron

    2007-03-01

    Biopsy of the prostate using ultrasound guidance is the clinical gold standard for diagnosis of prostate adenocarcinoma. However, because early-stage tumors are rarely visible under US, the procedure carries high false-negative rates and often patients require multiple biopsies before cancer is detected. To improve cancer detection, it is imperative that throughout the biopsy procedure, physicians know where they are within the prostate and where they have sampled during prior biopsies. The current biopsy procedure is limited to using only 2D ultrasound images to find and record target biopsy core sample sites. This information leaves ambiguity as the physician tries to interpret the 2D information and apply it to the 3D workspace. We have developed a 3D ultrasound-guided prostate biopsy system that provides 3D intra-biopsy information to physicians for needle guidance and biopsy location recording. The system is designed to conform to the workflow of the current prostate biopsy procedure, making it easier for clinical integration. In this paper, we describe the system design and validate its accuracy by performing an in vitro biopsy procedure on US/CT multi-modal patient-specific prostate phantoms. A clinical sextant biopsy was performed by a urologist on the phantoms and the 3D models of the prostates were generated with volume errors less than 4% and mean boundary errors of less than 1 mm. Using the 3D biopsy system, needles were guided to within 1.36 +/- 0.83 mm of 3D targets and the positions of the biopsy sites were accurately localized to 1.06 +/- 0.89 mm for the two prostates.

  15. The European Society of Therapeutic Radiology and Oncology-European Institute of Radiotherapy (ESTRO-EIR) report on 3D CT-based in-room image guidance systems: a practical and technical review and guide.

    Science.gov (United States)

    Korreman, Stine; Rasch, Coen; McNair, Helen; Verellen, Dirk; Oelfke, Uwe; Maingon, Philippe; Mijnheer, Ben; Khoo, Vincent

    2010-02-01

    The past decade has provided many technological advances in radiotherapy. The European Institute of Radiotherapy (EIR) was established by the European Society of Therapeutic Radiology and Oncology (ESTRO) to provide current consensus statement with evidence-based and pragmatic guidelines on topics of practical relevance for radiation oncology. This report focuses primarily on 3D CT-based in-room image guidance (3DCT-IGRT) systems. It will provide an overview and current standing of 3DCT-IGRT systems addressing the rationale, objectives, principles, applications, and process pathways, both clinical and technical for treatment delivery and quality assurance. These are reviewed for four categories of solutions; kV CT and kV CBCT (cone-beam CT) as well as MV CT and MV CBCT. It will also provide a framework and checklist to consider the capability and functionality of these systems as well as the resources needed for implementation. Two different but typical clinical cases (tonsillar and prostate cancer) using 3DCT-IGRT are illustrated with workflow processes via feedback questionnaires from several large clinical centres currently utilizing these systems. The feedback from these clinical centres demonstrates a wide variability based on local practices. This report whilst comprehensive is not exhaustive as this area of development remains a very active field for research and development. However, it should serve as a practical guide and framework for all professional groups within the field, focussed on clinicians, physicists and radiation therapy technologists interested in IGRT.

  16. 3-D ultrasound-guided robotic needle steering in biological tissue.

    Science.gov (United States)

    Adebar, Troy K; Fletcher, Ashley E; Okamura, Allison M

    2014-12-01

    Robotic needle steering systems have the potential to greatly improve medical interventions, but they require new methods for medical image guidance. Three-dimensional (3-D) ultrasound is a widely available, low-cost imaging modality that may be used to provide real-time feedback to needle steering robots. Unfortunately, the poor visibility of steerable needles in standard grayscale ultrasound makes automatic segmentation of the needles impractical. A new imaging approach is proposed, in which high-frequency vibration of a steerable needle makes it visible in ultrasound Doppler images. Experiments demonstrate that segmentation from this Doppler data is accurate to within 1-2 mm. An image-guided control algorithm that incorporates the segmentation data as feedback is also described. In experimental tests in ex vivo bovine liver tissue, a robotic needle steering system implementing this control scheme was able to consistently steer a needle tip to a simulated target with an average error of 1.57 mm. Implementation of 3-D ultrasound-guided needle steering in biological tissue represents a significant step toward the clinical application of robotic needle steering.

  17. DATA PROCESSING TECHNOLOGY OF AIRBORNE 3D IMAGE

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    An airborne 3D imaging system that integrates GPS, an attitude measurement unit (AMU), a scanning laser rangefinder (SLR) and a spectral scanner has been developed successfully. The spectral scanner and the SLR use the same optical system, which ensures that laser points match pixels seamlessly. The distinctive advantage of the 3D imaging system is that it can produce geo-referenced images and digital surface model (DSM) images without any ground control points (GCPs). It is no longer necessary to survey GCPs, and with suitable software the data can be processed to produce DSMs and geo-referenced images in quasi-real time; therefore, the efficiency of the 3D imaging system is 10-100 times higher than that of traditional approaches. The processing procedure involves decomposing and checking the raw data, processing the GPS data, calculating the positions of the laser sample points, producing geo-referenced images, producing DSMs and mosaicing strips. The principle of the 3D imaging system is first introduced in this paper, and then we focus on the fast processing technique and algorithms. Flight tests and processed results show that the processing technique is feasible and can meet the requirements of quasi-real-time applications.
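
    The central geometric step, computing the position of each laser sample point from the GPS position, the attitude angles and the measured range, can be sketched as below. Lever-arm and boresight offsets, and the exact rotation convention of the AMU, are omitted here and would matter in a real system; the 'xyz' Euler order is an assumption for the example.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def laser_point_position(gps_xyz, roll, pitch, yaw, range_m, beam_dir_body):
    """Place one laser sample in the mapping frame: rotate the ranging vector
    from the body frame into the mapping frame using the attitude angles
    (degrees), then add the GPS antenna position."""
    R = Rotation.from_euler('xyz', [roll, pitch, yaw], degrees=True).as_matrix()
    beam = np.asarray(beam_dir_body, dtype=float)
    beam /= np.linalg.norm(beam)                 # unit pointing vector of the laser
    return np.asarray(gps_xyz, dtype=float) + R @ (range_m * beam)
```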

  18. 3D image analysis of abdominal aortic aneurysm

    Science.gov (United States)

    Subasic, Marko; Loncaric, Sven; Sorantin, Erich

    2002-05-01

    This paper presents a method for 3-D segmentation of abdominal aortic aneurysm from computed tomography angiography images. The proposed method is automatic and requires minimal user assistance. Segmentation is performed in two steps. First inner and then outer aortic border is segmented. Those two steps are different due to different image conditions on two aortic borders. Outputs of these two segmentations give a complete 3-D model of abdominal aorta. Such a 3-D model is used in measurements of aneurysm area. The deformable model is implemented using the level-set algorithm due to its ability to describe complex shapes in natural manner which frequently occur in pathology. In segmentation of outer aortic boundary we introduced some knowledge based preprocessing to enhance and reconstruct low contrast aortic boundary. The method has been implemented in IDL and C languages. Experiments have been performed using real patient CTA images and have shown good results.

  19. Impulse Turbine with 3D Guide Vanes for Wave Energy Conversion

    Institute of Scientific and Technical Information of China (English)

    Manabu TAKAO; Toshiaki SETOGUCHI; Kenji KANEKO; Shuichi NAGATA

    2006-01-01

    In this study, in order to further improve the performance of an impulse turbine with fixed guide vanes for wave energy conversion, the effect of guide vane shape on the performance was investigated by experiment. The investigation was performed by model testing under steady flow conditions. As a result, it was found that the efficiency of the turbine with 3D guide vanes is slightly superior to that of the turbine with 2D guide vanes because of the increased torque provided by the 3D guide vanes, although the pressure drop across the turbine in the 3D case is slightly higher than in the 2D case.

  20. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    Directory of Open Access Journals (Sweden)

    M. Rafiei

    2013-09-01

    Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close-range photogrammetry is widely utilized in many fields such as structural measurement, topographic surveying, and architectural and archeological surveying. Non-contact photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both the 3D geometry (structure) and the camera pose (motion); it is commonly known as structure from motion (SfM). In this research a step-by-step approach to generating the 3D point cloud of a scene is considered, as sketched in the example below. After taking images with a camera, corresponding points must be detected in each pair of views; here the SIFT method is used for image matching across large baselines. The camera motion and the 3D positions of the matched feature points are then retrieved up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene, parallel lines appear non-parallel, and the results of the SfM computation are much more useful if a metric reconstruction is obtained; therefore multi-view Euclidean reconstruction is applied and discussed. To refine the 3D points and achieve precise results, a more general approach, namely bundle adjustment, is used. Finally, two real cases (an excavation and a tower) are reconstructed.
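
    A minimal two-view version of the pipeline described above (SIFT matching, essential-matrix estimation, relative pose recovery and triangulation) is sketched below with OpenCV. The multi-view Euclidean upgrade and the bundle adjustment stage are not shown, and the calibration matrix K is assumed known.

```python
import cv2
import numpy as np

def two_view_reconstruction(img1, img2, K):
    """Sparse two-view structure from motion: SIFT matching, essential matrix,
    relative pose, and triangulation (metric only up to a global scale)."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe's ratio test
    p1 = np.float32([k1[m.queryIdx].pt for m in good])
    p2 = np.float32([k2[m.trainIdx].pt for m in good])
    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # first camera at the origin
    P2 = K @ np.hstack([R, t])                          # second camera from recovered pose
    X = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
    return (X[:3] / X[3]).T, R, t                       # Nx3 points, rotation, translation
```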

  1. Laboratory 3D Micro-XRF/Micro-CT Imaging System

    Science.gov (United States)

    Bruyndonckx, P.; Sasov, A.; Liu, X.

    2011-09-01

    A prototype micro-XRF laboratory system based on pinhole imaging was developed to produce 3D elemental maps. The fluorescence x-rays are detected by a deep-depleted CCD camera operating in photon-counting mode. A charge-clustering algorithm, together with dynamically adjusted exposure times, ensures a correct energy measurement. The XRF component has a spatial resolution of 70 μm and an energy resolution of 180 eV at 6.4 keV. The system is augmented by a micro-CT imaging modality. This is used for attenuation correction of the XRF images and to co-register features in the 3D XRF images with morphological structures visible in the volumetric CT images of the object.

  2. A compact mechatronic system for 3D ultrasound guided prostate interventions

    International Nuclear Information System (INIS)

    Purpose: Ultrasound imaging has improved the treatment of prostate cancer by producing increasingly higher quality images and enabling sophisticated targeting procedures for the insertion of radioactive seeds during brachytherapy. However, it is critical that the needles be placed accurately within the prostate to deliver the therapy to the planned location and avoid complications from damaging surrounding tissues. Methods: The authors have developed a compact mechatronic system, as well as an effective method, for guiding and controlling the insertion of transperineal needles into the prostate. This system has been designed to allow guidance of a needle obliquely in 3D space into the prostate, thereby reducing pubic arch interference. The choice of needle trajectory and location in the prostate can be adjusted manually or with computer control. Results: To validate the system, a series of experiments were performed on phantoms. The 3D scan of the string phantom produced minimal geometric error, which was less than 0.4 mm. Needle guidance accuracy tests in agar prostate phantoms showed that the mean error of bead placement was less than 1.6 mm along parallel needle paths that were within 1.2 mm of the intended target and 1 deg. from the preplanned trajectory. At oblique angles of up to 15 deg. relative to the probe axis, beads were placed to within 3.0 mm along trajectories that were within 2.0 mm of the target, with an angular error of less than 2 deg. Conclusions: By combining a 3D TRUS imaging system with a needle-tracking linkage, this system should improve the physician's ability to target and accurately guide a needle to selected targets without the need for the computer to directly manipulate and insert the needle. This would be beneficial as the physician has complete control of the system and can safely maneuver the needle guide around obstacles such as previously placed needles.

  3. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system

    CERN Document Server

    Baumann, Michael; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer-guided diagnosis and therapy. We propose a 3D transrectal-US-based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Because the patient is not immobilized, the prostate is mobile, and probe movements are constrained only by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space a...

  4. 3D- VISUALIZATION BY RAYTRACING IMAGE SYNTHESIS ON GPU

    Directory of Open Access Journals (Sweden)

    Al-Oraiqat Anas M.

    2016-06-01

    This paper presents an implementation of an approach to spatial 3D stereo visualization of 3D images using a parallel graphics processing unit (GPU). Experiments on synthesizing images of a 3D scene by ray tracing on a GPU with the Compute Unified Device Architecture (CUDA) showed that approximately 60% of the time is spent on the computation itself, while the remaining 40% is spent transferring data between the central processing unit and the GPU and organizing the visualization process. A study of how increasing the size of the GPU grid affects computation speed showed the importance of correctly specifying the structure of the parallel computing grid and the overall parallelization scheme.

  5. 2D-3D rigid registration to compensate for prostate motion during 3D TRUS-guided biopsy

    Science.gov (United States)

    De Silva, Tharindu; Fenster, Aaron; Bax, Jeffrey; Gardi, Lori; Romagnoli, Cesare; Samarabandu, Jagath; Ward, Aaron D.

    2012-02-01

    Prostate biopsy is the clinical standard for prostate cancer diagnosis. To improve the accuracy of targeting suspicious locations, systems have been developed that can plan and record biopsy locations in a 3D TRUS image acquired at the beginning of the procedure. Some systems are designed for maximum compatibility with existing ultrasound equipment and are thus designed around the use of a conventional 2D TRUS probe, using controlled axial rotation of this probe to acquire a 3D TRUS reference image at the start of the biopsy procedure. Prostate motion during the biopsy procedure causes misalignments between the prostate in the live 2D TRUS images and the pre-acquired 3D TRUS image. We present an image-based rigid registration technique that aligns live 2D TRUS images, acquired immediately prior to biopsy needle insertion, with the pre-acquired 3D TRUS image to compensate for this motion. Our method was validated using 33 manually identified intrinsic fiducials in eight subjects and the target registration error was found to be 1.89 mm. We analysed the suitability of two image similarity metrics (normalized cross correlation and mutual information) for this task by plotting these metrics as a function of varying parameters in the six degree-of-freedom transformation space, with the ground truth plane obtained from registration as the starting point for the parameter exploration. We observed a generally convex behaviour of the similarity metrics. This encourages their use for this registration problem, and could assist in the design of a tool for the detection of misalignment, which could trigger the execution of a non-real-time registration, when needed during the procedure.
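    The two similarity metrics examined above are simple to state; the sketch below computes normalized cross correlation and histogram-based mutual information for a live 2D image and the corresponding slice resampled from the 3D reference. The 64-bin histogram is an illustrative choice, not the authors' setting.

```python
# Sketch of the two similarity metrics: normalized cross correlation (NCC)
# and mutual information (MI) between two equally sized 2-D images.
import numpy as np

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def mutual_information(a, b, bins=64):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px, py = pxy.sum(axis=1), pxy.sum(axis=0) # marginals
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))
```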

  6. Practical pseudo-3D registration for large tomographic images

    Science.gov (United States)

    Liu, Xuan; Laperre, Kjell; Sasov, Alexander

    2014-09-01

    Image registration is a powerful tool in various tomographic applications. Our main focus is on microCT applications in which samples/animals can be scanned multiple times under different conditions or at different time points. For this purpose, a registration tool capable of handling fairly large volumes has been developed, using a novel pseudo-3D method to achieve fast and interactive registration with simultaneous 3D visualization. To reduce computation complexity in 3D registration, we decompose it into several 2D registrations, which are applied to the orthogonal views (transaxial, sagittal and coronal) sequentially and iteratively. After registration in each view, the next view is retrieved with the new transformation matrix for registration. This reduces the computation complexity significantly. For rigid transform, we only need to search for 3 parameters (2 shifts, 1 rotation) in each of the 3 orthogonal views instead of 6 (3 shifts, 3 rotations) for full 3D volume. In addition, the amount of voxels involved is also significantly reduced. For the proposed pseudo-3D method, image-based registration is employed, with Sum of Square Difference (SSD) as the similarity measure. The searching engine is Powell's conjugate direction method. In this paper, only rigid transform is used. However, it can be extended to affine transform by adding scaling and possibly shearing to the transform model. We have noticed that more information can be used in the 2D registration if Maximum Intensity Projections (MIP) or Parallel Projections (PP) is used instead of the orthogonal views. Also, other similarity measures, such as covariance or mutual information, can be easily incorporated. The initial evaluation on microCT data shows very promising results. Two application examples are shown: dental samples before and after treatment and structural changes in materials before and after compression. Evaluation on registration accuracy between pseudo-3D method and true 3D method has
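    One 2-D step of the pseudo-3D scheme described above can be sketched with SciPy: a rigid in-plane transform (two shifts and one rotation) is optimized against the Sum of Square Difference using Powell's method. The interpolation order and the zero starting point are assumptions for the example, not details from the paper.

```python
# Sketch of a single 2-D rigid registration (one orthogonal view) driven by
# SSD and Powell's conjugate direction method, as in the pseudo-3D scheme.
import numpy as np
from scipy.ndimage import rotate, shift
from scipy.optimize import minimize

def register_view_2d(fixed, moving, x0=(0.0, 0.0, 0.0)):
    """Return (dy, dx, angle_deg) minimizing SSD between the two views."""
    def ssd(params):
        dy, dx, angle = params
        warped = rotate(moving, angle, reshape=False, order=1)
        warped = shift(warped, (dy, dx), order=1)
        return float(np.sum((fixed - warped) ** 2))

    res = minimize(ssd, x0, method="Powell")
    return res.x
```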

  7. 3D wavefront image formation for NIITEK GPR

    Science.gov (United States)

    Soumekh, Mehrdad; Ton, Tuan; Howard, Pete

    2009-05-01

    The U.S. Department of Defense Humanitarian Demining (HD) Research and Development Program focuses on developing, testing, demonstrating, and validating new technology for immediate use in humanitarian demining operations around the globe. Beginning in the late 1990's, the U.S. Army Countermine Division funded the development of the NIITEK ground penetrating radar (GPR) for detection of anti-tank (AT) landmines. This work is concerned with signal processing algorithms to suppress sources of artifacts in the NIITEK GPR, and formation of three-dimensional (3D) imagery from the resultant data. We first show that the NIITEK GPR data correspond to a 3D Synthetic Aperture Radar (SAR) database. An adaptive filtering method is utilized to suppress ground return and self-induced resonance (SIR) signals that are generated by the interaction of the radar-carrying platform and the transmitted radar signal. We examine signal processing methods to improve the fidelity of imagery for this 3D SAR system using pre-processing methods that suppress Doppler aliasing as well as other side lobe leakage artifacts that are introduced by the radar radiation pattern. The algorithm, known as digital spotlighting, imposes a filtering scheme on the azimuth-compressed SAR data, and manipulates the resultant spectral data to achieve a higher PRF to suppress the Doppler aliasing. We also present the 3D version of the Fourier-based wavefront reconstruction, a computationally-efficient and approximation-free SAR imaging method, for image formation with the NIITEK 3D SAR database.

  8. Efficient reconfigurable architectures for 3D medical image compression

    OpenAIRE

    Afandi, Ahmad

    2010-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Recently, the more widespread use of three-dimensional (3-D) imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US) have generated a massive amount of volumetric data. These have provided an impetus to the development of other applications, in particular telemedicine and teleradiology. In thes...

  9. Holoscopic 3D image depth estimation and segmentation techniques

    OpenAIRE

    Alazawi, Eman

    2015-01-01

    This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London Today’s 3D imaging techniques offer significant benefits over conventional 2D imaging techniques. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems....

  10. Extracting 3D Layout From a Single Image Using Global Image Structures

    NARCIS (Netherlands)

    Z. Lou; T. Gevers; N. Hu

    2015-01-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization, image, and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very b

  11. An automated 3D reconstruction method of UAV images

    Science.gov (United States)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel, fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle (UAV) system is presented, which requires neither prior camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM), and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV; the image topology map significantly reduces the running time of feature matching by limiting which image pairs are combined. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangular irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
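    The image-topology idea can be illustrated with the flight-control positions alone: only image pairs whose camera positions lie within some baseline are passed on to feature matching. The 60 m threshold below is a placeholder assumption, not a value from the paper.

```python
# Sketch: build the list of candidate image pairs for feature matching from
# the UAV flight-control (GPS) positions, so distant pairs are never matched.
import itertools
import numpy as np

def candidate_pairs(camera_positions_m, max_baseline_m=60.0):
    """camera_positions_m: (N, 3) array of per-image camera positions in metres."""
    pos = np.asarray(camera_positions_m, dtype=float)
    pairs = []
    for i, j in itertools.combinations(range(len(pos)), 2):
        if np.linalg.norm(pos[i] - pos[j]) <= max_baseline_m:
            pairs.append((i, j))
    return pairs  # only these pairs go through SIFT matching
```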

  12. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    Science.gov (United States)

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Férin, Guillaume; Dufait, Rémi; Jensen, Jørgen Arendt

    2012-03-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32×32 element prototype transducer, a dense matrix phased array with a pitch of 300 μm made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation directions and 150 mm in depth, which for both techniques results in a frame rate of 18 Hz. The implemented synthetic aperture technique reduces the number of transmit channels from 1024 to 256 compared to Explososcan. In terms of FWHM, Explososcan and synthetic aperture imaging were found to perform similarly; at 90 mm depth, Explososcan's FWHM is 7% better than that of synthetic aperture. Synthetic aperture improved the cystic resolution, which expresses the ability to detect anechoic cysts in a uniformly scattering medium, at all depths except at Explososcan's focal point, and reduced the cyst radius, R20dB, at 90 mm depth by 48%. Synthetic aperture imaging was thus shown to reduce the number of transmit channels by a factor of four and still, generally, improve the imaging quality.
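    The FWHM figure of merit quoted above is simply the width of the point spread function at half its maximum; a sketch of how it can be read off a simulated beam profile follows. Linear interpolation of the half-maximum crossings is an implementation choice for the example, not taken from the paper.

```python
# Sketch: full width at half maximum of a 1-D point-spread-function profile.
import numpy as np

def fwhm(profile, dx):
    """dx is the sample spacing of the profile (e.g. in mm)."""
    p = np.asarray(profile, dtype=float)
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    left, right = above[0], above[-1]
    # Linear interpolation of the two half-maximum crossings.
    lo = left - (p[left] - half) / (p[left] - p[left - 1]) if left > 0 else float(left)
    hi = right + (p[right] - half) / (p[right] - p[right + 1]) if right < p.size - 1 else float(right)
    return (hi - lo) * dx
```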

  13. Deformable Surface 3D Reconstruction from Monocular Images

    CERN Document Server

    Salzmann, Matthieu

    2010-01-01

    Being able to recover the shape of 3D deformable surfaces from a single video stream would make it possible to field reconstruction systems that run on widely available hardware without requiring specialized devices. However, because many different 3D shapes can have virtually the same projection, such monocular shape recovery is inherently ambiguous. In this survey, we will review the two main classes of techniques that have proved most effective so far: The template-based methods that rely on establishing correspondences with a reference image in which the shape is already known, and non-rig

  14. Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery.

    Science.gov (United States)

    Wang, Junchen; Suenaga, Hideyuki; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro; Liao, Hongen

    2014-04-01

    Computer-assisted oral and maxillofacial surgery (OMS) has been rapidly evolving since the last decade. State-of-the-art surgical navigation in OMS still suffers from bulky tracking sensors, troublesome image registration procedures, patient movement, loss of depth perception in visual guidance, and low navigation accuracy. We present an augmented reality navigation system with automatic marker-free image registration using 3-D image overlay and stereo tracking for dental surgery. A customized stereo camera is designed to track both the patient and instrument. Image registration is performed by patient tracking and real-time 3-D contour matching, without requiring any fiducial and reference markers. Real-time autostereoscopic 3-D imaging is implemented with the help of a consumer-level graphics processing unit. The resulting 3-D image of the patient's anatomy is overlaid on the surgical site by a half-silvered mirror using image registration and IP-camera registration to guide the surgeon by exposing hidden critical structures. The 3-D image of the surgical instrument is also overlaid over the real one for an augmented display. The 3-D images present both stereo and motion parallax from which depth perception can be obtained. Experiments were performed to evaluate various aspects of the system; the overall image overlay error of the proposed system was 0.71 mm.

  15. Irrlicht 1.7 Realtime 3D Engine Beginner's Guide

    CERN Document Server

    Stein, Johannes

    2011-01-01

    A beginner's guide with plenty of screenshots and explained code. If you have C++ skills and are interested in learning Irrlicht, this book is for you. Absolutely no knowledge of Irrlicht is necessary for you to follow this book!

  16. Image quality of a cone beam O-arm 3D imaging system

    Science.gov (United States)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery and is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. High resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low and high contrast resolution, and noise power spectrum. The MTF was measured from the point spread function. The results show that the O-arm image is uniform but exhibits a noise pattern that cannot be removed simply by increasing the mAs. The high contrast resolution of the O-arm system was approximately 9 lp/cm. The system has a 10% MTF at 0.45 mm. The low-contrast resolution could not be determined because of the noise pattern. For surgery, where the location of a structure is emphasized over a survey of all image details, the image quality of the O-arm is clinically well accepted.
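    The MTF measurement mentioned above can be sketched as the normalized magnitude of the Fourier transform of a profile through the measured point spread function; the pixel-size argument and the 10% read-off illustrate the procedure only and are not the authors' code.

```python
# Sketch: modulation transfer function from a 1-D point-spread-function profile.
import numpy as np

def mtf_from_psf(psf_profile, pixel_size_mm):
    p = np.asarray(psf_profile, dtype=float)
    p = p / p.sum()                                    # unit area
    mtf = np.abs(np.fft.rfft(p))
    mtf = mtf / mtf[0]                                 # MTF(0) = 1
    freqs = np.fft.rfftfreq(p.size, d=pixel_size_mm)   # cycles per mm
    return freqs, mtf

# The spatial frequency at which the MTF first drops below 10% gives the kind
# of limiting-resolution figure quoted in the abstract:
# freqs, mtf = mtf_from_psf(profile, pixel_size_mm)
# f10 = freqs[np.argmax(mtf < 0.1)]
```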

  17. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    For display, manipulation, and analysis, biomedical image data usually need to be converted to an isotropic discretization through interpolation, and cubic convolution interpolation is widely used because of its good trade-off between computational cost and accuracy. This paper presents an overall framework for 3D medical image interpolation based on cubic convolution and formulates in detail six methods, each with a different sharpness control parameter. An objective comparison of these methods is given using data sets with different slice spacings: each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, recommendations for 3D medical images under different conditions are given.
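    For reference, the 1-D kernel underlying cubic convolution interpolation is short enough to write out; the sharpness control parameter a (classically a = -0.5) is what distinguishes the compared variants. The snippet below is the generic textbook form, not the six formulations of the paper.

```python
# Keys' cubic convolution kernel and a 1-D interpolation step; applying it
# separably along x, y and z gives the 3D interpolation discussed above.
import numpy as np

def cubic_kernel(s, a=-0.5):
    s = np.abs(np.asarray(s, dtype=float))
    out = np.zeros_like(s)
    near, far = s <= 1, (s > 1) & (s < 2)
    out[near] = (a + 2) * s[near] ** 3 - (a + 3) * s[near] ** 2 + 1
    out[far] = a * s[far] ** 3 - 5 * a * s[far] ** 2 + 8 * a * s[far] - 4 * a
    return out

def interp_1d(samples, x, a=-0.5):
    """Interpolate a uniformly sampled signal at fractional position x."""
    samples = np.asarray(samples, dtype=float)
    i = int(np.floor(x))
    taps = np.arange(i - 1, i + 3)                # the 4 neighbouring samples
    idx = np.clip(taps, 0, samples.size - 1)      # clamp at the borders
    return float(np.dot(samples[idx], cubic_kernel(x - taps, a)))
```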

  18. Optical-CT imaging of complex 3D dose distributions

    Science.gov (United States)

    Oldham, Mark; Kim, Leonard; Hugo, Geoffrey

    2005-04-01

    The limitations of conventional dosimeters restrict the comprehensiveness of verification that can be performed for advanced radiation treatments presenting an immediate and substantial problem for clinics attempting to implement these techniques. In essence, the rapid advances in the technology of radiation delivery have not been paralleled by corresponding advances in the ability to verify these treatments. Optical-CT gel-dosimetry is a relatively new technique with potential to address this imbalance by providing high resolution 3D dose maps in polymer and radiochromic gel dosimeters. We have constructed a 1st generation optical-CT scanner capable of high resolution 3D dosimetry and applied it to a number of simple and increasingly complex dose distributions including intensity-modulated-radiation-therapy (IMRT). Prior to application to IMRT, the robustness of optical-CT gel dosimetry was investigated on geometry and variable attenuation phantoms. Physical techniques and image processing methods were developed to minimize deleterious effects of refraction, reflection, and scattered laser light. Here we present results of investigations into achieving accurate high-resolution 3D dosimetry with optical-CT, and show clinical examples of 3D IMRT dosimetry verification. In conclusion, optical-CT gel dosimetry can provide high resolution 3D dose maps that greatly facilitate comprehensive verification of complex 3D radiation treatments. Good agreement was observed at high dose levels (>50%) between planned and measured dose distributions. Some systematic discrepancies were observed however (rms discrepancy 3% at high dose levels) indicating further work is required to eliminate confounding factors presently compromising the accuracy of optical-CT 3D gel-dosimetry.

  19. 3D acoustic imaging applied to the Baikal neutrino telescope

    International Nuclear Information System (INIS)

    A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 m square; acoustic pulses were 'linear sweep-spread signals'-multiple-modulated wide-band signals (10→22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ∼0.2 m (along the beam) and ∼1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km3-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.

  20. 3D acoustic imaging applied to the Baikal neutrino telescope

    Energy Technology Data Exchange (ETDEWEB)

    Kebkal, K.G. [EvoLogics GmbH, Blumenstrasse 49, 10243 Berlin (Germany)], E-mail: kebkal@evologics.de; Bannasch, R.; Kebkal, O.G. [EvoLogics GmbH, Blumenstrasse 49, 10243 Berlin (Germany); Panfilov, A.I. [Institute for Nuclear Research, 60th October Anniversary pr. 7a, Moscow 117312 (Russian Federation); Wischnewski, R. [DESY, Platanenallee 6, 15735 Zeuthen (Germany)

    2009-04-11

    A hydro-acoustic imaging system was tested in a pilot study on distant localization of elements of the Baikal underwater neutrino telescope. For this innovative approach, based on broad band acoustic echo signals and strictly avoiding any active acoustic elements on the telescope, the imaging system was temporarily installed just below the ice surface, while the telescope stayed in its standard position at 1100 m depth. The system comprised an antenna with four acoustic projectors positioned at the corners of a 50 m square; acoustic pulses were 'linear sweep-spread signals'-multiple-modulated wide-band signals (10→22 kHz) of 51.2 s duration. Three large objects (two string buoys and the central electronics module) were localized by the 3D acoustic imaging, with an accuracy of ∼0.2 m (along the beam) and ∼1.0 m (transverse). We discuss signal forms and parameters necessary for improved 3D acoustic imaging of the telescope, and suggest a layout of a possible stationary bottom based 3D imaging setup. The presented technique may be of interest for neutrino telescopes of km³-scale and beyond, as a flexible temporary or as a stationary tool to localize basic telescope elements, while these are completely passive.

  1. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    Science.gov (United States)

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions we factorize 2D observations in camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a-priorly trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement.

  3. 3D Image Reconstruction from Compton camera data

    CERN Document Server

    Kuchment, Peter

    2016-01-01

    In this paper, we address analytically and numerically the inversion of the integral transform (cone or Compton transform) that maps a function on ℝ³ to its integrals over conical surfaces. It arises in a variety of imaging techniques, e.g. in astronomy, optical imaging, and homeland security imaging, especially when the so called Compton cameras are involved. Several inversion formulas are developed and implemented numerically in 3D (the much simpler 2D case was considered in a previous publication).
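    For readers unfamiliar with the transform, one common parametrization (chosen here for concreteness rather than quoted from the paper) writes the cone/Compton transform of a function f on ℝ³ as a surface integral over the cone with apex u, unit axis β, and half-opening angle ψ:

```latex
% Cone (Compton) transform of f over the cone with apex u, axis \beta, angle \psi
Cf(u,\beta,\psi) \;=\; \int_{\{x \in \mathbb{R}^{3} \,:\, (x-u)\cdot\beta \,=\, |x-u|\cos\psi\}} f(x)\,\mathrm{d}S(x)
```

Compton-camera reconstruction then amounts to recovering f from measurements of Cf over the available apexes, axes, and angles, which is what the inversion formulas in this record address.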

  4. Preliminary examples of 3D vector flow imaging

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes; Stuart, Matthias Bo; Tomov, Borislav Gueorguiev;

    2013-01-01

    This paper presents 3D vector flow images obtained using the 3D Transverse Oscillation (TO) method. The method employs a 2D transducer and estimates the three velocity components simultaneously, which is important for visualizing complex flow patterns. Data are acquired using the experimental ultrasound scanner SARUS on a flow rig system with steady flow. The vessel of the flow-rig is centered at a depth of 30 mm, and the flow has an expected 2D circular-symmetric parabolic profile with a peak velocity of 1 m/s. Ten frames of 3D vector flow images are acquired in a cross-sectional plane orthogonal... to the velocity magnitude this yields standard deviations of (9.1, 6.4, 0.88) %, respectively. Volumetric flow rates were estimated for all ten frames yielding 57.9 ± 2.0 mL/s in comparison with 56.2 mL/s measured by a commercial magnetic flow meter. One frame of the obtained 3D vector flow data is presented...

  5. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    Science.gov (United States)

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

    The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to an unphysiological kinematic of the knee implant. To get an idea of the postoperative kinematics of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. We therefore developed a registration method for fitting 3D CAD models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. First, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Second, an initial preconfiguration of the implants by the user is necessary: the user performs a rough preconfiguration of both prosthesis models so that the fine matching process has a reasonable starting point. An automated gradient-based fine matching process then determines the best absolute position and orientation; this iterative process changes each of the 6 parameters (3 rotational and 3 translational) of a model by a minimal amount until a maximum of the matching function is reached. To examine the spread of the final solutions of the registration, the interobserver variability was measured in a group of testers. This variability, calculated as the relative standard deviation, improved from about 50% (pure manual registration) to 0.5% (rough manual preconfiguration and subsequent fine registration with the automatic fine matching process).

  6. Combining Different Modalities for 3D Imaging of Biological Objects

    CERN Document Server

    Tsyganov, E; Kulkarni, P; Mason, R; Parkey, R; Seliuonine, S; Shay, J; Soesbe, T; Zhezher, V; Zinchenko, A I

    2005-01-01

    A resolution enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a $^{57}$Co source and $^{98m}$Tc-MDP injected in mice body. To further enhance the investigating power of the tomographic imaging different imaging modalities can be combined. In particular, as proposed and shown in this paper, the optical imaging permits a 3D reconstruction of the animal's skin surface thus improving visualization and making possible depth-dependent corrections, necessary for bioluminescence 3D reconstruction in biological objects. ...

  7. Linear tracking for 3-D medical ultrasound imaging.

    Science.gov (United States)

    Huang, Qing-Hua; Yang, Zhao; Hu, Wei; Jin, Lian-Wen; Wei, Gang; Li, Xuelong

    2013-12-01

    As the clinical application grows, there is a rapid technical development of 3-D ultrasound imaging. Compared with 2-D ultrasound imaging, 3-D ultrasound imaging can provide improved qualitative and quantitative information for various clinical applications. In this paper, we proposed a novel tracking method for a freehand 3-D ultrasound imaging system with improved portability, reduced degree of freedom, and cost. We designed a sliding track with a linear position sensor attached, and it transmitted positional data via a wireless communication module based on Bluetooth, resulting in a wireless spatial tracking modality. A traditional 2-D ultrasound probe fixed to the position sensor on the sliding track was used to obtain real-time B-scans, and the positions of the B-scans were simultaneously acquired when moving the probe along the track in a freehand manner. In the experiments, the proposed method was applied to ultrasound phantoms and real human tissues. The results demonstrated that the new system outperformed a previously developed freehand system based on a traditional six-degree-of-freedom spatial sensor in phantom and in vivo studies, indicating its merit in clinical applications for human tissues and organs. PMID:23757592
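    The core of such a freehand reconstruction is easy to sketch: each B-scan is dropped into a regular voxel grid at the slice position reported by the linear sensor, and co-located scans are averaged. The grid spacing and averaging scheme below are assumptions for illustration, not the authors' implementation.

```python
# Sketch: compound tracked 2-D B-scans into a 3-D volume using the positions
# reported by a linear (single-axis) position sensor.
import numpy as np

def compound_volume(b_scans, positions_mm, voxel_mm):
    """b_scans: list of (H, W) images; positions_mm: probe position of each scan."""
    h, w = b_scans[0].shape
    z0 = float(np.min(positions_mm))
    depth = int(round(np.ptp(positions_mm) / voxel_mm)) + 1
    vol = np.zeros((depth, h, w), dtype=float)
    hits = np.zeros(depth, dtype=int)
    for img, z_mm in zip(b_scans, positions_mm):
        z = int(round((z_mm - z0) / voxel_mm))   # slot along the sweep direction
        vol[z] += img
        hits[z] += 1
    hits[hits == 0] = 1
    return vol / hits[:, None, None]             # average scans sharing a slot
```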

  8. Combining different modalities for 3D imaging of biological objects

    International Nuclear Information System (INIS)

    A resolution enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a 57Co source and 98mTc-MDP injected in mice body. To further enhance the investigating power of the tomographic imaging different imaging modalities can be combined. In particular, as proposed and shown, the optical imaging permits a 3D reconstruction of the animal's skin surface thus improving visualization and making possible depth-dependent corrections, necessary for bioluminescence 3D reconstruction in biological objects. This structural information can provide even more detail if the x-ray tomography is used as presented in the paper

  9. 3D surface topology guides stem cell adhesion and differentiation.

    Science.gov (United States)

    Viswanathan, Priyalakshmi; Ondeck, Matthew G; Chirasatitsin, Somyot; Ngamkham, Kamolchanok; Reilly, Gwendolen C; Engler, Adam J; Battaglia, Giuseppe

    2015-06-01

    Polymerized high internal phase emulsion (polyHIPE) foams are extremely versatile materials for investigating cell-substrate interactions in vitro. Foam morphologies can be controlled by polymerization conditions to result in either open or closed pore structures with different levels of connectivity, consequently enabling the comparison between 2D and 3D matrices using the same substrate with identical surface chemistry conditions. Additionally, here we achieve the control of pore surface topology (i.e. how different ligands are clustered together) using amphiphilic block copolymers as emulsion stabilizers. We demonstrate that adhesion of human mesenchymal progenitor (hES-MP) cells cultured on polyHIPE foams is dependent on foam surface topology and chemistry but is independent of porosity and interconnectivity. We also demonstrate that the interconnectivity, architecture and surface topology of the foams has an effect on the osteogenic differentiation potential of hES-MP cells. Together these data demonstrate that the adhesive heterogeneity of a 3D scaffold could regulate not only mesenchymal stem cell attachment but also cell behavior in the absence of soluble growth factors.

  10. Robust extraction of the aorta and pulmonary artery from 3D MDCT image data

    Science.gov (United States)

    Taeprasartsit, Pinyo; Higgins, William E.

    2010-03-01

    Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast enhanced and no-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. During the occasional event that more precise vascular extraction is desired or the method fails, we also have an alternate semi-automatic fail-safe method. The semi-automatic method extracts the vasculature by extending the medial axes into a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.

  11. 3D transrectal ultrasound prostate biopsy using a mechanical imaging and needle-guidance system

    Science.gov (United States)

    Bax, Jeffrey; Cool, Derek; Gardi, Lori; Montreuil, Jacques; Gil, Elena; Bluvol, Jeremy; Knight, Kerry; Smith, David; Romagnoli, Cesare; Fenster, Aaron

    2008-03-01

    Prostate biopsy procedures are generally limited to 2D transrectal ultrasound (TRUS) imaging for biopsy needle guidance. This limitation results in needle position ambiguity and an insufficient record of biopsy core locations in cases of prostate re-biopsy. We have developed a multi-jointed mechanical device that supports a commercially available TRUS probe with an integrated needle guide for precision prostate biopsy. The device is fixed at the base, allowing the joints to be manually manipulated while fully supporting its weight throughout its full range of motion. Means are provided to track the needle trajectory and display this trajectory on a corresponding TRUS image. This allows the physician to aim the needle-guide at predefined targets within the prostate, providing true 3D navigation. The tracker has been designed for use with several end-fired transducers that can be rotated about the longitudinal axis of the probe to generate 3D images. The tracker reduces the variability associated with conventional hand-held probes, while preserving user familiarity and procedural workflow. In a prostate phantom, biopsy needles were guided to within 2 mm of their targets, and the 3D location of the biopsy core was accurate to within 3 mm. The 3D navigation system is validated in the presence of prostate motion in a preliminary patient study.

  12. 3D reconstruction of concave surfaces using polarisation imaging

    Science.gov (United States)

    Sohaib, A.; Farooq, A. R.; Ahmed, J.; Smith, L. N.; Smith, M. L.

    2015-06-01

    This paper presents a novel algorithm for improved shape recovery using polarisation-based photometric stereo. The majority of previous research using photometric stereo involves 3D reconstruction using both the diffuse and specular components of light; however, this paper suggests the use of the specular component only as it is the only form of light that comes directly off the surface without subsurface scattering or interreflections. Experiments were carried out on both real and synthetic surfaces. Real images were obtained using a polarisation-based photometric stereo device while synthetic images were generated using PovRay® software. The results clearly demonstrate that the proposed method can extract three-dimensional (3D) surface information effectively even for concave surfaces with complex texture and surface reflectance.
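    For context, the classical photometric-stereo step that the method builds on can be written as one least-squares solve per image stack; the sketch below assumes Lambertian shading and known light directions, whereas the paper itself works with the polarisation-separated specular component.

```python
# Sketch of classical photometric stereo: recover per-pixel surface normals
# (and albedo) from K images taken under K known light directions.
import numpy as np

def photometric_stereo(images, light_dirs):
    """images: (K, H, W) intensities; light_dirs: (K, 3) unit light vectors."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                      # (K, H*W) pixel intensities
    L = np.asarray(light_dirs, dtype=float)        # (K, 3)
    G, *_ = np.linalg.lstsq(L, I, rcond=None)      # solves L @ G ≈ I, G is (3, H*W)
    albedo = np.linalg.norm(G, axis=0)
    normals = G / (albedo + 1e-9)
    return normals.reshape(3, H, W), albedo.reshape(H, W)
```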

  13. Vertebral Stenting and Vertebroplasty Guided by an Angiographic 3D Rotational Unit

    Directory of Open Access Journals (Sweden)

    Escobar-de la Garma Víctor Hugo

    2015-01-01

    Introduction. The use of interventional imaging systems in minimally invasive procedures such as kyphoplasty and vertebroplasty offers the advantages of high-resolution images, various zoom levels, different working angles, and intraprocedural image processing such as three-dimensional reconstructions to minimize the complication rate. Owing to recent technological improvements in rotational angiographic units (RAU) with flat-panel detectors, the useful interventional features of CT have been combined with high-quality fluoroscopy in a single machine. Intraprocedural 3D images offer an alternative way to guide needle insertion and the safe injection of cement to avoid leakages. Case Report. We present the case of a 72-year-old female patient with insidious lumbar pain. Computed tomography revealed a wedge-shaped osteoporotic compression fracture of the T10 vertebra, which was treated successfully with the installation of a vertebral stenting system and vertebroplasty with methacrylate, guided by a rotational interventional imaging system. Conclusion. Rotational angiographic technology may provide a suitable platform for performing high-quality minimally invasive spinal procedures such as kyphoplasty, vertebroplasty, and vertebral stenting. Software now available offers the option of producing three-dimensional reconstructions with the same degree of specificity without the need for CT scans.

  14. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    Science.gov (United States)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric applications, and remote sensing applications. The experiment uses multi-source data fusion for 3D scene reconstruction based on 3D laser scanning: the laser point cloud data serve as the basis, a digital orthophoto map serves as an auxiliary source, and 3ds Max is used as the basic tool for building the three-dimensional scene. The article covers data acquisition, data preprocessing, and 3D scene construction. The results show that the reconstructed 3D scene is more faithful to reality and that its accuracy meets the needs of 3D scene construction.

  15. Automated Recognition of 3D Features in GPIR Images

    Science.gov (United States)

    Park, Han; Stough, Timothy; Fijany, Amir

    2007-01-01

    A method of automated recognition of three-dimensional (3D) features in images generated by ground-penetrating imaging radar (GPIR) is undergoing development. GPIR 3D images can be analyzed to detect and identify such subsurface features as pipes and other utility conduits. Until now, much of the analysis of GPIR images has been performed manually by expert operators who must visually identify and track each feature. The present method is intended to satisfy a need for more efficient and accurate analysis by means of algorithms that can automatically identify and track subsurface features, with minimal supervision by human operators. In this method, data from multiple sources (for example, data on different features extracted by different algorithms) are fused together for identifying subsurface objects. The algorithms of this method can be classified in several different ways. In one classification, the algorithms fall into three classes: (1) image-processing algorithms, (2) feature- extraction algorithms, and (3) a multiaxis data-fusion/pattern-recognition algorithm that includes a combination of machine-learning, pattern-recognition, and object-linking algorithms. The image-processing class includes preprocessing algorithms for reducing noise and enhancing target features for pattern recognition. The feature-extraction algorithms operate on preprocessed data to extract such specific features in images as two-dimensional (2D) slices of a pipe. Then the multiaxis data-fusion/ pattern-recognition algorithm identifies, classifies, and reconstructs 3D objects from the extracted features. In this process, multiple 2D features extracted by use of different algorithms and representing views along different directions are used to identify and reconstruct 3D objects. In object linking, which is an essential part of this process, features identified in successive 2D slices and located within a threshold radius of identical features in adjacent slices are linked in a
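    The object-linking step can be illustrated with a simple nearest-neighbour rule between adjacent slices; the 3-pixel radius and the centroid representation below are placeholder assumptions, not the thresholds used by the developers.

```python
# Sketch: link features detected in two adjacent 2-D slices when their
# centroids lie within a threshold radius, as in the object-linking step.
import numpy as np

def link_adjacent_slices(curr_pts, next_pts, radius=3.0):
    """Return index pairs (i, j) linking features in two adjacent slices."""
    links = []
    taken = set()
    next_arr = np.asarray(next_pts, dtype=float)
    for i, p in enumerate(np.asarray(curr_pts, dtype=float)):
        if next_arr.size == 0:
            break
        d = np.linalg.norm(next_arr - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= radius and j not in taken:
            links.append((i, j))
            taken.add(j)
    return links

# Chaining these pairwise links across all slices yields 3-D objects
# (e.g. a pipe traced through the volume).
```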

  16. Dynamic 3D computed tomography scanner for vascular imaging

    Science.gov (United States)

    Lee, Mark K.; Holdsworth, David W.; Fenster, Aaron

    2000-04-01

    A 3D dynamic computed-tomography (CT) scanner was developed for imaging objects undergoing periodic motion. The scanner system has high spatial and sufficient temporal resolution to produce quantitative tomographic/volume images of objects such as excised arterial samples perfused under physiological pressure conditions and enables the measurements of the local dynamic elastic modulus (Edyn) of the arteries in the axial and longitudinal directions. The system was comprised of a high resolution modified x-ray image intensifier (XRII) based computed tomographic system and a computer-controlled cardiac flow simulator. A standard NTSC CCD camera with a macro lens was coupled to the electro-optically zoomed XRII to acquire dynamic volumetric images. Through prospective cardiac gating and computer synchronized control, a time-resolved sequence of 20 mm thick high resolution volume images of porcine aortic specimens during one simulated cardiac cycle were obtained. Performance evaluation of the scanners illustrated that tomographic images can be obtained with resolution as high as 3.2 mm-1 with only a 9% decrease in the resolution for objects moving at velocities of 1 cm/s in 2D mode and static spatial resolution of 3.55 mm-1 with only a 14% decrease in the resolution in 3D mode for objects moving at a velocity of 10 cm/s. Application of the system for imaging of intact excised arterial specimens under simulated physiological flow/pressure conditions enabled measurements of the Edyn of the arteries with a precision of +/- kPa for the 3D scanner. Evaluation of the Edyn in the axial and longitudinal direction produced values of 428 +/- 35 kPa and 728 +/- 71 kPa, demonstrating the isotropic and homogeneous viscoelastic nature of the vascular specimens. These values obtained from the Dynamic CT systems were not statistically different (p less than 0.05) from the values obtained by standard uniaxial tensile testing and volumetric measurements.

  17. 3D VSP imaging in the Deepwater GOM

    Science.gov (United States)

    Hornby, B. E.

    2005-05-01

    Seismic imaging challenges in the Deepwater GOM include surface and sediment related multiples and issues arising from complicated salt bodies. Frequently, wells encounter geologic complexity not resolved on conventional surface seismic section. To help address these challenges BP has been acquiring 3D VSP (Vertical Seismic Profile) surveys in the Deepwater GOM. The procedure involves placing an array of seismic sensors in the borehole and acquiring a 3D seismic dataset with a surface seismic gunboat that fires airguns in a spiral pattern around the wellbore. Placing the seismic geophones in the borehole provides a higher resolution and more accurate image near the borehole, as well as other advantages relating to the unique position of the sensors relative to complex structures. Technical objectives are to complement surface seismic with improved resolution (~2X seismic), better high dip structure definition (e.g. salt flanks) and to fill in "imaging holes" in complex sub-salt plays where surface seismic is blind. Business drivers for this effort are to reduce risk in well placement, improved reserve calculation and understanding compartmentalization and stratigraphic variation. To date, BP has acquired 3D VSP surveys in ten wells in the DW GOM. The initial results are encouraging and show both improved resolution and structural images in complex sub-salt plays where the surface seismic is blind. In conjunction with this effort BP has influenced both contractor borehole seismic tool design and developed methods to enable the 3D VSP surveys to be conducted offline thereby avoiding the high daily rig costs associated with a Deepwater drilling rig.

  18. Improvements in quality and quantification of 3D PET images

    OpenAIRE

    Rapisarda,

    2012-01-01

    The spatial resolution of Positron Emission Tomography is conditioned by several physical factors, which can be taken into account by using a global Point Spread Function (PSF). In this thesis a spatially variant (radially asymmetric) PSF implementation in the image space of a 3D Ordered Subsets Expectation Maximization (OSEM) algorithm is proposed. Two different scanners were considered, without and with Time Of Flight (TOF) capability. The PSF was derived by fitting some experimental...

  19. Utilization of multiple frequencies in 3D nonlinear microwave imaging

    DEFF Research Database (Denmark)

    Jensen, Peter Damsgaard; Rubæk, Tonny; Mohr, Johan Jacob

    2012-01-01

    The use of multiple frequencies in a nonlinear microwave algorithm is considered. Using multiple frequencies allows for obtaining the improved resolution available at the higher frequencies while retaining the regularizing effects of the lower frequencies. However, a number of different challenges...... at lower frequencies are used as starting guesses for reconstructions at higher frequencies. The performance is illustrated using simulated 2-D data and data obtained with the 3-D DTU microwave imaging system....

  20. 3D printed guides for controlled alignment in biomechanics tests.

    Science.gov (United States)

    Verstraete, Matthias A; Willemot, Laurent; Van Onsem, Stefaan; Stevens, Cyriëlle; Arnout, Nele; Victor, Jan

    2016-02-01

    The bone-machine interface is a vital first step for biomechanical testing. It remains challenging to restore the original alignment of the specimen with respect to the test setup. To overcome this issue, we developed a methodology based on virtual planning and 3D printing. In this paper, the methodology is outlined and a proof of concept is presented based on a series of cadaveric tests performed on our knee simulator. The tests described in this paper reached an accuracy within 3-4° and 3-4mm with respect to the virtual planning. It is however the authors' belief that the method has the potential to achieve an accuracy within one degree and one millimeter. Therefore, this approach can aid in reducing the imprecisions in biomechanical tests (e.g. knee simulator tests for evaluating knee kinematics) and improve the consistency of the bone-machine interface. PMID:26810696

  1. 3D reconstruction of multiple stained histology images

    Directory of Open Access Journals (Sweden)

    Yi Song

    2013-01-01

    Context: Three-dimensional (3D) tissue reconstruction from histology images with different stains allows the spatial alignment of structural and functional elements highlighted by the different stains for quantitative study of many physiological and pathological phenomena. This has significant potential to improve the understanding of the growth patterns and spatial arrangement of diseased cells and to enhance the study of the biomechanical behavior of tissue structures towards better treatments (e.g. tissue-engineering applications). Methods: This paper evaluates three strategies for 3D reconstruction from sets of two-dimensional (2D) histological sections with different stains, by combining methods of 2D multi-stain registration and 3D volumetric reconstruction from same-stain sections. Setting and Design: The different strategies have been evaluated on two liver specimens (80 sections in total) stained with Hematoxylin and Eosin (H&E), Sirius Red, and Cytokeratin (CK) 7. Results and Conclusion: A strategy of using multi-stain registration to align images of a second stain to a volume reconstructed by same-stain registration results in the lowest overall error, although an interlaced image registration approach may be more robust to poor section quality.

  2. Discrete Method of Images for 3D Radio Propagation Modeling

    Science.gov (United States)

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  3. 3D stereotaxis for epileptic foci through integrating MR imaging with neurological electrophysiology data

    International Nuclear Information System (INIS)

    Objective: To improve the accuracy of epilepsy diagnosis by integrating MR images from PACS with neurological electrophysiology data. The integration is also important for transmitting diagnostic information to the 3D treatment planning system (TPS) used in radiotherapy. Methods: The electroencephalogram was redisplayed on an EEG workstation, while the MR images were reconstructed with BrainVoyager software. A 3D model of each patient's brain was built in Base 2000 by combining the reconstructed images with the electroencephalogram data. Thirty epileptic patients (18 males and 12 females), aged 12 to 54 years, were confirmed using the integrated MR images, the neurological electrophysiology data, and their 3D stereotactic localization. Results: The combined data in the 3D model showed the actual state of the patients' brains and visually located the precise position of the focus. The success rate of 3D-guided operations was greatly improved and the number of epileptic episodes was markedly decreased; seizures stopped for 6 months in 8 of the 30 patients. Conclusion: The integration of MR images and neurological electrophysiology information can improve the diagnostic level for epilepsy, and it is crucial for improving the success rate of the procedures and the analysis of epilepsy. (authors)

  4. 3D thermal medical image visualization tool: Integration between MRI and thermographic images.

    Science.gov (United States)

    Abreu de Souza, Mauren; Chagas Paz, André Augusto; Sanches, Ionildo Jóse; Nohama, Percy; Gamba, Humberto Remigio

    2014-01-01

    Three-dimensional medical image reconstruction using different image modalities requires registration techniques that are, in general, based on the stacking of 2D MRI/CT image slices. The integration of two different imaging modalities, anatomical (MRI/CT) and physiological information (infrared images), to generate a 3D thermal model is a new methodology still under development. This paper presents a 3D THERMO interface that provides flexibility for 3D visualization: it incorporates the DICOM parameters; different color scale palettes for the final 3D model; 3D visualization at different section planes; and a filtering option that provides better image visualization. In summary, 3D thermographic medical image visualization provides a realistic and precise medical tool. The merging of two different imaging modalities allows better quality and more fidelity, especially for medical applications in which temperature changes are clinically significant.

  5. Investigation of the feasibility for 3D synthetic aperture imaging

    DEFF Research Database (Denmark)

    Nikolov, Svetoslav; Jensen, Jørgen Arendt

    2003-01-01

    This paper investigates the feasibility of implementing real-time synthetic aperture 3D imaging on the experimental system developed at the Center for Fast Ultrasound Imaging using a 2D transducer array. The target array is a fully populated 32 × 32 3 MHz array with a half wavelength pitch....... The elements of the array are grouped in blocks of 16 × 8, which can simultaneously be accessed by the 128 channels of the scanner. Using 8-to-1 high-voltage analog multiplexors, any group of 16 × 8 elements can be accessed. Simulations are done using Field II using parameters from a 32 x 32 elements...

  6. 3D object-oriented image analysis in 3D geophysical modelling

    DEFF Research Database (Denmark)

    Fadel, I.; van der Meijde, M.; Kerle, N.;

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects......) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...... interactive modelling environment IGMAS+, and their density contrast values were calculated using an object-based inversion technique to calculate the forward signal of the objects and compare it with the measured satellite gravity. Thus, a new object-based approach was implemented to interpret and extract...

  7. Automatic structural matching of 3D image data

    Science.gov (United States)

    Ponomarev, Svjatoslav; Lutsiv, Vadim; Malyshev, Igor

    2015-10-01

    A new image matching technique is described. It is implemented as an object-independent hierarchical structural juxtaposition algorithm based on an alphabet of simple object-independent contour structural elements. The structural matching applied implements an optimized method of walking through a truncated tree of all possible juxtapositions of two sets of structural elements. The algorithm was initially developed for dealing with 2D images such as the aerospace photographs, and it turned out to be sufficiently robust and reliable for matching successfully the pictures of natural landscapes taken in differing seasons from differing aspect angles by differing sensors (the visible optical, IR, and SAR pictures, as well as the depth maps and geographical vector-type maps). At present (in the reported version), the algorithm is enhanced based on additional use of information on third spatial coordinates of observed points of object surfaces. Thus, it is now capable of matching the images of 3D scenes in the tasks of automatic navigation of extremely low flying unmanned vehicles or autonomous terrestrial robots. The basic principles of 3D structural description and matching of images are described, and the examples of image matching are presented.

  8. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    Science.gov (United States)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combination of effects from the manual editing of images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5m, 10m and 14m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models and c) the assessment of results by comparing the visual and the geometric quality of improved models versus the reference one. Finally, the selected technique is tested on two other data sets in order to examine its appropriateness for different depths (at 10m and 14m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.
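
    The two preprocessing steps discussed in this record, radiometric enhancement of the photographs and point cloud filtering before meshing, can be approximated with off-the-shelf tools. The sketch below is not the authors' workflow; the file names, CLAHE clip limit and outlier-removal parameters are assumptions.

```python
# Illustrative sketch: contrast-limited adaptive histogram equalisation as a
# simple radiometric enhancement, then statistical outlier removal on a
# photogrammetric point cloud before mesh building.
import open3d as o3d
from skimage import io, exposure, img_as_ubyte

# 1) Radiometric enhancement of an underwater photo (hypothetical file name)
img = io.imread("seafloor_0001.jpg")
enhanced = img_as_ubyte(exposure.equalize_adapthist(img, clip_limit=0.02))
io.imsave("seafloor_0001_enhanced.jpg", enhanced)

# 2) Point cloud filtering before meshing (hypothetical file name)
pcd = o3d.io.read_point_cloud("seafloor_dense.ply")
filtered, kept_idx = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
o3d.io.write_point_cloud("seafloor_dense_filtered.ply", filtered)
```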

  9. Towards magnetic 3D x-ray imaging

    Science.gov (United States)

    Fischer, Peter; Streubel, R.; Im, M.-Y.; Parkinson, D.; Hong, J.-I.; Schmidt, O. G.; Makarov, D.

    2014-03-01

    Mesoscale phenomena in magnetism will add essential parameters to improve speed, size and energy efficiency of spin driven devices. Multidimensional visualization techniques will be crucial to achieve mesoscience goals. Magnetic tomography is of large interest to understand e.g. interfaces in magnetic multilayers, the inner structure of magnetic nanocrystals, nanowires or the functionality of artificial 3D magnetic nanostructures. We have developed tomographic capabilities with magnetic full-field soft X-ray microscopy combining X-MCD as element specific magnetic contrast mechanism, high spatial and temporal resolution due to the Fresnel zone plate optics. At beamline 6.1.2 at the ALS (Berkeley CA) a new rotation stage allows recording an angular series (up to 360 deg) of high precision 2D projection images. Applying state-of-the-art reconstruction algorithms it is possible to retrieve the full 3D structure. We will present results on prototypic rolled-up Ni and Co/Pt tubes and glass capillaries coated with magnetic films and compare to other 3D imaging approaches e.g. in electron microscopy. Supported by BES MSD DOE Contract No. DE-AC02-05-CH11231 and ERC under the EU FP7 program (grant agreement No. 306277).
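
    For readers unfamiliar with the reconstruction step, the sketch below shows a generic slice-by-slice filtered back-projection of an angular projection series; the actual X-MCD tomography uses more elaborate state-of-the-art algorithms, and the array layout, angles and filter choice here are assumptions.

```python
# Generic textbook stand-in for reconstructing a 3D volume from an angular
# series of 2D projections by filtered back-projection, slice by slice.
import numpy as np
from skimage.transform import iradon

def reconstruct_volume(projections, angles_deg):
    """projections: (n_angles, n_rows, n_cols) rotation series."""
    n_angles, n_rows, n_cols = projections.shape
    slices = []
    for r in range(n_rows):
        sinogram = projections[:, r, :].T        # iradon expects (detector, angle)
        slices.append(iradon(sinogram, theta=angles_deg, filter_name="ramp"))
    return np.stack(slices, axis=0)
```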

  10. Preliminary Investigation: 2D-3D Registration of MR and X-ray Cardiac Images Using Catheter Constraints

    OpenAIRE

    Truong, Michael V.N.; Aslam, Abdullah; Rinaldi, Christopher Aldo; Razavi, Reza; Penney, Graeme P.; Rhode, Kawal

    2009-01-01

    Cardiac catheterization procedures are routinely guided by X-ray fluoroscopy but suffer from poor soft-tissue contrast and a lack of depth information. These procedures often employ pre-operative magnetic resonance or computed tomography imaging for treatment planning due to their excellent soft-tissue contrast and 3D imaging capabilities. We developed a 2D-3D image registration method to consolidate the advantages of both modalities by overlaying the 3D images onto the X-ray. Our method uses...

  11. Large Scale 3D Image Reconstruction in Optical Interferometry

    CERN Document Server

    Schutz, Antony; Mary, David; Thiébaut, Eric; Soulez, Ferréol

    2015-01-01

    Astronomical optical interferometers (OI) sample the Fourier transform of the intensity distribution of a source at the observation wavelength. Because of rapid atmospheric perturbations, the phases of the complex Fourier samples (visibilities) cannot be directly exploited, and instead linear relationships between the phases are used (phase closures and differential phases). Consequently, specific image reconstruction methods have been devised in the last few decades. Modern polychromatic OI instruments are now paving the way to multiwavelength imaging. This paper presents the derivation of a spatio-spectral ("3D") image reconstruction algorithm called PAINTER (Polychromatic opticAl INTErferometric Reconstruction software). The algorithm is able to solve large scale problems. It relies on an iterative process, which alternates estimation of polychromatic images and of complex visibilities. The complex visibilities are not only estimated from squared moduli and closure phases, but also from differential phase...

  12. Autonomous Planetary 3-D Reconstruction From Satellite Images

    DEFF Research Database (Denmark)

    Denver, Troelz

    1999-01-01

    A common task for many deep space missions is the autonomous generation of 3-D representations of planetary surfaces onboard unmanned spacecraft. The basic problem for this class of missions is that the closed loop time is far too long. The closed loop time is defined as the time from when a human...... of seconds to a few minutes, the closed loop time effectively precludes active human control. The only way to circumvent this problem is to build an artificial feature extractor operating autonomously onboard the spacecraft. Different artificial feature extractors are presented and their efficiency...... is discussed. Based on such features, 3-D representations may be compiled from two or more 2-D satellite images. The main purposes of such a mapping system are extraction of landing sites, objects of scientific interest and general planetary surveying. All data processing is performed autonomously onboard...
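
    A minimal sketch of the kind of feature-based 3-D compilation from two 2-D images alluded to above is given below, using ORB matching and linear triangulation. The camera projection matrices P1 and P2 are assumed to be known, and nothing here reproduces the onboard extractors discussed in the thesis.

```python
# Hedged sketch: detect and match features between two images of the same
# surface, then triangulate sparse 3-D points from known projection matrices.
import cv2
import numpy as np

def sparse_3d_points(img1, img2, P1, P2):
    orb = cv2.ORB_create(nfeatures=2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
    pts1 = np.float32([k1[m.queryIdx].pt for m in matches]).T   # 2xN
    pts2 = np.float32([k2[m.trainIdx].pt for m in matches]).T
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)              # 4xN homogeneous
    return (X_h[:3] / X_h[3]).T                                  # Nx3 points
```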

  13. Thoracic Pedicle Screw Placement Guide Plate Produced by Three-Dimensional (3-D) Laser Printing.

    Science.gov (United States)

    Chen, Hongliang; Guo, Kaijing; Yang, Huilin; Wu, Dongying; Yuan, Feng

    2016-01-01

    BACKGROUND The aim of this study was to evaluate the accuracy and feasibility of an individualized thoracic pedicle screw placement guide plate produced by 3-D laser printing. MATERIAL AND METHODS Thoracic pedicle samples of 3 adult cadavers were randomly assigned for 3-D CT scans. The 3-D thoracic models were established by using medical Mimics software, and a screw path was designed with scanned data. Then the individualized thoracic pedicle screw placement guide plate models, matched to the backside of thoracic vertebral plates, were produced with a 3-D laser printer. Screws were placed with assistance of a guide plate. Then, the placement was assessed. RESULTS With the data provided by CT scans, 27 individualized guide plates were produced by 3-D printing. There was no significant difference in sex and relevant parameters of left and right sides among individuals (P>0.05). Screws were placed with assistance of guide plates, and all screws were in the correct positions without penetration of pedicles, under direct observation and anatomic evaluation post-operatively. CONCLUSIONS A thoracic pedicle screw placement guide plate can be produced by 3-D printing. With a high accuracy in placement and convenient operation, it provides a new method for accurate placement of thoracic pedicle screws. PMID:27194139

  14. UNDERWATER 3D MODELING: IMAGE ENHANCEMENT AND POINT CLOUD FILTERING

    Directory of Open Access Journals (Sweden)

    I. Sarakinou

    2016-06-01

    Full Text Available This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combination of effects from the manual editing of images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5m, 10m and 14m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models and c) the assessment of results by comparing the visual and the geometric quality of improved models versus the reference one. Finally, the selected technique is tested on two other data sets in order to examine its appropriateness for different depths (at 10m and 14m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  15. Ice shelf melt rates and 3D imaging

    Science.gov (United States)

    Lewis, Cameron Scott

    Ice shelves are sensitive indicators of climate change and play a critical role in the stability of ice sheets and oceanic currents. Basal melting of ice shelves plays an important role in both the mass balance of the ice sheet and the global climate system. Airborne- and satellite based remote sensing systems can perform thickness measurements of ice shelves. Time separated repeat flight tracks over ice shelves of interest generate data sets that can be used to derive basal melt rates using traditional glaciological techniques. Many previous melt rate studies have relied on surface elevation data gathered by airborne- and satellite based altimeters. These systems infer melt rates by assuming hydrostatic equilibrium, an assumption that may not be accurate, especially near an ice shelf's grounding line. Moderate bandwidth, VHF, ice penetrating radar has been used to measure ice shelf profiles with relatively coarse resolution. This study presents the application of an ultra wide bandwidth (UWB), UHF, ice penetrating radar to obtain finer resolution data on the ice shelves. These data reveal significant details about the basal interface, including the locations and depth of bottom crevasses and deviations from hydrostatic equilibrium. While our single channel radar provides new insight into ice shelf structure, it only images a small swath of the shelf, which is assumed to be an average of the total shelf behavior. This study takes an additional step by investigating the application of a 3D imaging technique to a data set collected using a ground based multi channel version of the UWB radar. The intent is to show that the UWB radar could be capable of providing a wider swath 3D image of an ice shelf. The 3D images can then be used to obtain a more complete estimate of the bottom melt rates of ice shelves.

  16. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    Science.gov (United States)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a
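
    The mean-subtraction idea can be sketched in a few lines: decompose the hyperspectral cube with a 3D wavelet transform, remove the per-plane means from the spatially low-pass coefficients, and keep the means as side information for the decoder. The snippet below treats only the final approximation subband and is not NASA's codec; the wavelet, decomposition level and variable names are assumptions.

```python
# Minimal sketch of the "mean subtraction" idea for 3-D wavelet-based
# compression of a hyperspectral cube (bands, rows, cols).
import numpy as np
import pywt

def mean_subtracted_decomposition(cube, wavelet="bior4.4", level=3):
    coeffs = pywt.wavedecn(cube, wavelet, level=level, axes=(0, 1, 2))
    approx = coeffs[0]                                   # low-pass subband
    plane_means = approx.mean(axis=(1, 2), keepdims=True)
    coeffs[0] = approx - plane_means                     # zero-mean planes
    return coeffs, plane_means.squeeze()                 # means sent as side info

# The decoder adds the means back before the inverse transform (pywt.waverecn).
```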

  17. 3D IMAGING OF INDIVIDUAL PARTICLES: A REVIEW

    Directory of Open Access Journals (Sweden)

    Eric Pirard

    2012-06-01

    Full Text Available In recent years, impressive progress has been made in digital imaging and in particular in three dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three dimensional imaging with special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear distinction between the surfometric and volumetric principles. The literature review is as broad as possible covering materials science as well as biology while keeping an eye on emerging technologies in optics and physics. The paper should be of interest to any scientist trying to picture particles in 3D with the best possible resolution for accurate size and shape estimation. Though techniques are adequate for nanoscopic and microscopic particles, no special size limit has been considered while compiling the review.

  19. Development of 3D microwave imaging reflectometry in LHD (invited).

    Science.gov (United States)

    Nagayama, Y; Kuwahara, D; Yoshinaga, T; Hamada, Y; Kogi, Y; Mase, A; Tsuchiya, H; Tsuji-Iio, S; Yamaguchi, S

    2012-10-01

    Three-dimensional (3D) microwave imaging reflectometry has been developed in the large helical device to visualize fluctuating reflection surface which is caused by the density fluctuations. The plasma is illuminated by the probe wave with four frequencies, which correspond to four radial positions. The imaging optics makes the image of cut-off surface onto the 2D (7 × 7 channels) horn antenna mixer arrays. Multi-channel receivers have been also developed using micro-strip-line technology to handle many channels at reasonable cost. This system is first applied to observe the edge harmonic oscillation (EHO), which is an MHD mode with many harmonics that appears in the edge plasma. A narrow structure along field lines is observed during EHO. PMID:23126965

  20. Image-Based 3D Face Modeling System

    Directory of Open Access Journals (Sweden)

    Vladimir Vezhnevets

    2005-08-01

    Full Text Available This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a bunch of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation including all the optional manual corrections takes only 2∼3 minutes.

  1. Ultra-realistic 3-D imaging based on colour holography

    Science.gov (United States)

    Bjelkhagen, H. I.

    2013-02-01

    A review of recent progress in colour holography is provided, with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue colour holograms, mainly of the Denisyuk type, and digitally-printed colour holograms are described, along with their recent improvements. An alternative to silver-halide materials is the panchromatic photopolymers, such as the DuPont and Bayer photopolymers, which are also covered. The light sources used to illuminate the recorded holograms are very important for obtaining ultra-realistic 3-D images. In particular, new light sources based on RGB LEDs are described; they show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering depends highly on the correct recording technique using the optimal recording laser wavelengths, the availability of improved panchromatic recording materials, and new display light sources.

  2. 3D imaging of neutron tracks using confocal microscopy

    Science.gov (United States)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (or SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses through the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. Post etch the plastics had been treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm2). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. The range of track diameter observed was between 4

  3. Extracting 3D layout from a single image using global image structures.

    Science.gov (United States)

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization, image, and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting pixel-level 3D layout since it implies the way how pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure, and then we use the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image-level. Using latent variables, we implicitly model the sublevel semantics of the image, which enrich the expressiveness of our model. After the image-level structure is obtained, it is used as the prior knowledge to infer pixel-wise 3D layout. Experiments show that the results of our model outperform the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior information yields accurate 3D scene layout segmentation.

  4. Recent progress in 3-D imaging of sea freight containers

    Science.gov (United States)

    Fuchs, Theobald; Schön, Tobias; Dittmann, Jonas; Sukowski, Frank; Hanke, Randolf

    2015-03-01

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to apply it to a large number of containers. However, the benefits of a 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications or high time consumption and risks for the security personnel during a manual inspection. Recently, distinct progress has been made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms provides the potential to reduce the number of projection angles approximately by a factor of 10. The main drawback of these advanced iterative methods is their high demand for numerical processing. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution, depending on the number of projections.
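
    A toy experiment illustrating why iterative reconstruction helps with few projection angles (not the authors' algorithm) can be run with scikit-image: reconstruct a phantom from about 50 views with filtered back-projection and with a few SART iterations and compare the artefact levels. The phantom and view count are assumptions chosen for demonstration.

```python
# Few-view reconstruction comparison: filtered back-projection vs. SART.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, iradon_sart

image = shepp_logan_phantom()
theta = np.linspace(0.0, 180.0, 50, endpoint=False)     # few-view acquisition
sinogram = radon(image, theta=theta)

fbp = iradon(sinogram, theta=theta)                      # shows streak artefacts
sart = iradon_sart(sinogram, theta=theta)
for _ in range(4):                                       # a few extra iterations
    sart = iradon_sart(sinogram, theta=theta, image=sart)
```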

  5. Recent progress in 3-D imaging of sea freight containers

    Energy Technology Data Exchange (ETDEWEB)

    Fuchs, Theobald, E-mail: theobold.fuchs@iis.fraunhofer.de; Schön, Tobias, E-mail: theobold.fuchs@iis.fraunhofer.de; Sukowski, Frank [Fraunhofer Development Center X-ray Technology EZRT, Flugplatzstr. 75, 90768 Fürth (Germany); Dittmann, Jonas; Hanke, Randolf [Chair of X-ray Microscopy, Institute of Physics and Astronomy, Julius-Maximilian-University Würzburg, Josef-Martin-Weg 63, 97074 Würzburg (Germany)

    2015-03-31

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea-freight container takes several hours. Of course, this is too slow to apply it to a large number of containers. However, the benefits of a 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications or high time consumption and risks for the security personnel during a manual inspection. Recently, distinct progress has been made in the field of reconstruction from projections with only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms provides the potential to reduce the number of projection angles approximately by a factor of 10. The main drawback of these advanced iterative methods is their high demand for numerical processing. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects scanning a sea-freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution, depending on the number of projections.

  6. 3D Reconstruction of virtual colon structures from colonoscopy images.

    Science.gov (United States)

    Hong, DongHo; Tavanapong, Wallapak; Wong, Johnny; Oh, JungHwan; de Groen, Piet C

    2014-01-01

    This paper presents the first fully automated reconstruction technique of 3D virtual colon segments from individual colonoscopy images. It is the basis of new software applications that may offer great benefits for improving quality of care for colonoscopy patients. For example, a 3D map of the areas inspected and uninspected during colonoscopy can be shown on request of the endoscopist during the procedure. The endoscopist may revisit the suggested uninspected areas to reduce the chance of missing polyps that reside in these areas. The percentage of the colon surface seen by the endoscopist can be used as a coarse objective indicator of the quality of the procedure. The derived virtual colon models can be stored for post-procedure training of new endoscopists to teach navigation techniques that result in a higher level of procedure quality. Our technique does not require a prior CT scan of the colon or any global positioning device. Our experiments on endoscopy images of an Olympus synthetic colon model reveal encouraging results with small average reconstruction errors (4.1 mm for the fold depths and 12.1 mm for the fold circumferences). PMID:24225230

  7. 3D electrical tomographic imaging using vertical arrays of electrodes

    Science.gov (United States)

    Murphy, S. C.; Stanley, S. J.; Rhodes, D.; York, T. A.

    2006-11-01

    Linear arrays of electrodes in conjunction with electrical impedance tomography have been used to spatially interrogate industrial processes that have only limited access for sensor placement. This paper explores the compromises that are to be expected when using a small number of vertically positioned linear arrays to facilitate 3D imaging using electrical tomography. A configuration with three arrays is found to give reasonable results when compared with a 'conventional' arrangement of circumferential electrodes. A single array yields highly localized sensitivity that struggles to image the whole space. Strategies have been tested on a small-scale version of a sludge settling application that is of relevance to the industrial sponsor. A new electrode excitation strategy, referred to here as 'planar cross drive', is found to give superior results to an extended version of the adjacent electrodes technique due to the improved uniformity of the sensitivity across the domain. Recommendations are suggested for parameters to inform the scale-up to industrial vessels.

  8. High Resolution 3D Radar Imaging of Comet Interiors

    Science.gov (United States)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D

  9. Integral Imaging Based 3-D Image Encryption Algorithm Combined with Cellular Automata

    OpenAIRE

    Li, X. W.; Kim, D. H.; Cho, S. J.; Kim, S. T.

    2013-01-01

    A novel optical encryption method is proposed in this paper to achieve 3-D image encryption. The proposed encryption algorithm combines the use of computational integral imaging (CII) and linear-complemented maximum-length cellular automata (LC-MLCA) to encrypt a 3D image. In the encryption process, the 2-D elemental image array (EIA) recorded by light rays of the 3-D image is mapped inversely through the lenslet array according to ray tracing theory. Next, the 2-D EIA is encrypted by LC-...

  10. Fast 3-d tomographic microwave imaging for breast cancer detection.

    Science.gov (United States)

    Grzegorczyk, Tomasz M; Meaney, Paul M; Kaufman, Peter A; diFlorio-Alexander, Roberta M; Paulsen, Keith D

    2012-08-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to measure signals down to levels compatible with sub-centimeter image resolution while keeping an exam time under 2 min. Second, the software overcomes the enormous time burden and produces similarly accurate images in less than 20 min. The combination of the new hardware and software allows us to produce and report here the first clinical 3-D microwave tomographic images of the breast. Two clinical examples are selected out of 400+ exams conducted at the Dartmouth Hitchcock Medical Center (Lebanon, NH). The first example demonstrates the potential usefulness of our system for breast cancer screening while the second example focuses on therapy monitoring. PMID:22562726

  11. Brain surface maps from 3-D medical images

    Science.gov (United States)

    Lu, Jiuhuai; Hansen, Eric W.; Gazzaniga, Michael S.

    1991-06-01

    The anatomic and functional localization of brain lesions for neurologic diagnosis and brain surgery is facilitated by labeling the cortical surface in 3D images. This paper presents a method which extracts cortical contours from magnetic resonance (MR) image series and then produces a planar surface map which preserves important anatomic features. The resultant map may be used for manual anatomic localization as well as for further automatic labeling. Outer contours are determined on MR cross-sectional images by following the clear boundaries between gray matter and cerebral-spinal fluid, skipping over sulci. Carrying this contour below the surface by shrinking it along its normal produces an inner contour that alternately intercepts gray matter (sulci) and white matter along its length. This procedure is applied to every section in the set, and the image (grayscale) values along the inner contours are radially projected and interpolated onto a semi-cylindrical surface with axis normal to the slices and large enough to cover the whole brain. A planar map of the cortical surface results by flattening this cylindrical surface. The projection from inner contour to cylindrical surface is unique in the sense that different points on the inner contour correspond to different points on the cylindrical surface. As the outer contours are readily obtained by automatic segmentation, cortical maps can be made directly from an MR series.
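
    The final projection-and-flattening step can be sketched as follows: sample grayscale values along each inner contour, convert every contour point to an angle about the cylinder axis, and bin the samples into an (angle, slice) map. The contour data format, map width and nearest-bin assignment below are assumptions, not the authors' implementation.

```python
# Sketch: unroll per-slice cortical contours onto a planar (slice, angle) map.
import numpy as np

def flatten_contours(contours, values, axis_xy, width=720):
    """contours: list (one per slice) of (N_i, 2) arrays of (x, y) points.
    values: matching list of grayscale samples along each contour.
    axis_xy: (x, y) of the cylinder axis in the slice plane."""
    n_slices = len(contours)
    flat_map = np.zeros((n_slices, width))
    for z, (pts, val) in enumerate(zip(contours, values)):
        ang = np.arctan2(pts[:, 1] - axis_xy[1], pts[:, 0] - axis_xy[0])
        cols = ((ang + np.pi) / (2 * np.pi) * (width - 1)).astype(int)
        flat_map[z, cols] = val   # nearest-bin projection; gaps need interpolation
    return flat_map
```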

  12. Fast 3D subsurface imaging with stepped-frequency GPR

    Science.gov (United States)

    Masarik, Matthew P.; Burns, Joseph; Thelen, Brian T.; Sutter, Lena

    2015-05-01

    This paper investigates an algorithm for forming 3D images of the subsurface using stepped-frequency GPR data. The algorithm is specifically designed for a handheld GPR and therefore accounts for the irregular sampling pattern in the data and the spatially-variant air-ground interface by estimating an effective "ground-plane" and then registering the data to the plane. The algorithm efficiently solves the 4th-order polynomial for the Snell reflection points using a fully vectorized iterative scheme. The forward operator is implemented efficiently using an accelerated nonuniform FFT (Greengard and Lee, 2004); the adjoint operator is implemented efficiently using an interpolation step coupled with an upsampled FFT. The imaging is done as a linearized version of the full inverse problem, which is regularized using a sparsity constraint to reduce sidelobes and therefore improve image localization. Applying an appropriate sparsity constraint, the algorithm is able to eliminate most of the surrounding clutter and sidelobes, while still rendering valuable image properties such as shape and size. The algorithm is applied to simulated data, controlled experimental data (made available by Dr. Waymond Scott, Georgia Institute of Technology), and government-provided data with irregular sampling and air-ground interface.
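
    The sparsity-regularised linearised inversion mentioned above can be illustrated with a generic iterative soft-thresholding (ISTA) solver for min ||Ax - y||^2 + lambda*||x||_1. The real system builds its operator from NUFFT-based forward and adjoint steps, whereas the dense matrix, step size and iteration count below are assumptions.

```python
# Generic ISTA sketch for a sparsity-constrained linear inverse problem.
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x
```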

  13. Research of Fast 3D Imaging Based on Multiple Mode

    Science.gov (United States)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is now widely used. Much effort has been put into three-dimensional imaging methods and systems in order to meet the requirements of speed and high accuracy. In this article, we realize a fast, high-quality stereo matching algorithm on a field programmable gate array (FPGA) using the combination of a time-of-flight (TOF) camera and a binocular camera. Images captured by the two cameras have the same spatial resolution, which lets us use the depth maps taken by the TOF camera to derive an initial disparity. With the depth map constraining the stereo matching of the stereo pairs, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using FPGA (Altera Cyclone IV series) concurrent computing, we can configure a multi-core image matching system and thus perform stereo matching on an embedded system. The simulation results demonstrate that this approach speeds up stereo matching, increases matching reliability and stability, realizes embedded computation, and expands the application range.
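
    The core idea, restricting the disparity search to a narrow window around the TOF-derived value, can be sketched on the CPU as below. The window size, search margin and SAD cost are assumptions; the paper's contribution is the concurrent FPGA realisation rather than this reference loop.

```python
# Reference sketch: block matching with a per-pixel disparity search range
# constrained by a TOF-derived initial disparity map.
import numpy as np

def constrained_block_match(left, right, tof_disp, margin=3, win=3):
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(win, h - win):
        for x in range(win, w - win):
            ref = left[y-win:y+win+1, x-win:x+win+1].astype(np.int32)
            d0 = int(tof_disp[y, x])
            best, best_cost = d0, np.inf
            for d in range(max(0, d0 - margin), d0 + margin + 1):
                if x - d - win < 0:
                    continue
                cand = right[y-win:y+win+1, x-d-win:x-d+win+1].astype(np.int32)
                cost = np.abs(ref - cand).sum()          # SAD cost
                if cost < best_cost:
                    best, best_cost = d, cost
            disp[y, x] = best
    return disp
```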

  14. Myocardial strains from 3D displacement encoded magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Kindberg Katarina

    2012-04-01

    Full Text Available Abstract Background The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE, make detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and to resolve transmural strain variations. Conclusions Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strains values agree with previously reported myocardial strains in normal human hearts.
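
    A minimal sketch of the strain computation is shown below: fit an affine (first-order polynomial) model to the DENSE displacements in a local neighbourhood and form the Green-Lagrange strain from the fitted displacement gradient. The paper uses higher polynomial orders within a full 3D pipeline; the neighbourhood arrays here are assumptions.

```python
# Sketch: local least-squares displacement fit -> Green-Lagrange strain tensor.
import numpy as np

def local_strain(ref_points, displacements):
    """ref_points, displacements: (N, 3) arrays for one local neighbourhood."""
    X = np.hstack([ref_points, np.ones((ref_points.shape[0], 1))])   # [x y z 1]
    # Least-squares fit u(X) = G @ X + c for each displacement component
    coeffs, *_ = np.linalg.lstsq(X, displacements, rcond=None)       # (4, 3)
    grad_u = coeffs[:3, :].T                # du_i/dX_j
    F = np.eye(3) + grad_u                  # deformation gradient
    E = 0.5 * (F.T @ F - np.eye(3))         # Green-Lagrange strain tensor
    return E
```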

  15. 3D imaging of semiconductor components by discrete laminography

    Energy Technology Data Exchange (ETDEWEB)

    Batenburg, K. J. [Centrum Wiskunde and Informatica, P.O. Box 94079, NL-1090 GB Amsterdam, The Netherlands and iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, W. J.; Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  16. 3-D MR imaging of ectopia vasa deferentia

    Energy Technology Data Exchange (ETDEWEB)

    Goenka, Ajit Harishkumar; Parihar, Mohan; Sharma, Raju; Gupta, Arun Kumar [All India Institute of Medical Sciences (AIIMS), Department of Radiology, New Delhi (India); Bhatnagar, Veereshwar [All India Institute of Medical Sciences (AIIMS), Department of Paediatric Surgery, New Delhi (India)

    2009-11-15

    Ectopia vasa deferentia is a complex anomaly characterized by abnormal termination of the urethral end of the vas deferens into the urinary tract due to an incompletely understood developmental error of the distal Wolffian duct. Associated anomalies of the lower gastrointestinal tract and upper urinary tract are also commonly present due to closely related embryological development. Although around 32 cases have been reported in the literature, the MR appearance of this condition has not been previously described. We report a child with high anorectal malformation who was found to have ectopia vasa deferentia, crossed fused renal ectopia and type II caudal regression syndrome on MR examination. In addition to the salient features of this entity on reconstructed MR images, the important role of 3-D MRI in establishing an unequivocal diagnosis and its potential in facilitating individually tailored management is also highlighted. (orig.)

  17. An Efficient 3D Imaging using Structured Light Systems

    Science.gov (United States)

    Lee, Deokwoo

    Structured light 3D surface imaging has been crucial in the fields of image processing and computer vision, particularly in reconstruction, recognition and others. In this dissertation, we propose the approaches to development of an efficient 3D surface imaging system using structured light patterns including reconstruction, recognition and sampling criterion. To achieve an efficient reconstruction system, we address the problem in its many dimensions. In the first, we extract geometric 3D coordinates of an object which is illuminated by a set of concentric circular patterns and reflected to a 2D image plane. The relationship between the original and the deformed shape of the light patterns due to a surface shape provides sufficient 3D coordinates information. In the second, we consider system efficiency. The efficiency, which can be quantified by the size of data, is improved by reducing the number of circular patterns to be projected onto an object of interest. Akin to the Shannon-Nyquist Sampling Theorem, we derive the minimum number of circular patterns which sufficiently represents the target object with no considerable information loss. Specific geometric information (e.g. the highest curvature) of an object is key to deriving the minimum sampling density. In the third, the object, represented using the minimum number of patterns, has incomplete color information (i.e. color information is given a priori along with the curves). An interpolation is carried out to complete the photometric reconstruction. The results can be approximately reconstructed because the minimum number of the patterns may not exactly reconstruct the original object. But the result does not show considerable information loss, and the performance of an approximate reconstruction is evaluated by performing recognition or classification. In an object recognition, we use facial curves which are deformed circular curves (patterns) on a target object. We simply carry out comparison between the

  18. CBCT-based 3D MRA and angiographic image fusion and MRA image navigation for neuro interventions.

    Science.gov (United States)

    Zhang, Qiang; Zhang, Zhiqiang; Yang, Jiakang; Sun, Qi; Luo, Yongchun; Shan, Tonghui; Zhang, Hao; Han, Jingfeng; Liang, Chunyang; Pan, Wenlong; Gu, Chuanqi; Mao, Gengsheng; Xu, Ruxiang

    2016-08-01

    Digital subtracted angiography (DSA) remains the gold standard for diagnosis of cerebral vascular diseases and provides intraprocedural guidance. This practice involves extensive usage of x-ray and iodinated contrast medium, which can induce side effects. In this study, we examined the accuracy of 3-dimensional (3D) registration of magnetic resonance angiography (MRA) and DSA imaging for cerebral vessels, and tested the feasibility of using preprocedural MRA for real-time guidance during endovascular procedures.Twenty-three patients with suspected intracranial arterial lesions were enrolled. The contrast medium-enhanced 3D DSA of target vessels were acquired in 19 patients during endovascular procedures, and the images were registered with preprocedural MRA for fusion accuracy evaluation. Low-dose noncontrasted 3D angiography of the skull was performed in the other 4 patients, and registered with the MRA. The MRA was overlaid afterwards with 2D live fluoroscopy to guide endovascular procedures.The 3D registration of the MRA and angiography demonstrated a high accuracy for vessel lesion visualization in all 19 patients examined. Moreover, MRA of the intracranial vessels, registered to the noncontrasted 3D angiography in the 4 patients, provided real-time 3D roadmap to successfully guide the endovascular procedures. Radiation dose to patients and contrast medium usage were shown to be significantly reduced.Three-dimensional MRA and angiography fusion can accurately generate cerebral vasculature images to guide endovascular procedures. The use of the fusion technology could enhance clinical workflow while minimizing contrast medium usage and radiation dose, and hence lowering procedure risks and increasing treatment safety. PMID:27512846

  19. A beginner's guide to 3D printing 14 simple toy designs to get you started

    CERN Document Server

    Rigsby, Mike

    2014-01-01

    A Beginner's Guide to 3D Printing is the perfect resource for those who would like to experiment with 3D design and manufacturing, but have little or no technical experience with the standard software. Author Mike Rigsby leads readers step-by-step through 15 simple toy projects, each illustrated with screen caps of Autodesk 123D Design, the most common free 3D software available. The projects are later described using Sketchup, another free popular software package. Beginning with basic projects that will take longer to print than design, readers are then given instruction on more advanced t

  20. The European Society of Therapeutic Radiology and Oncology-European Institute of Radiotherapy (ESTRO-EIR) report on 3D CT-based in-room image guidance systems: a practical and technical review and guide

    DEFF Research Database (Denmark)

    Korreman, Stine; Rasch, Coen; McNair, Helen;

    2010-01-01

    of practical relevance for radiation oncology. This report focuses primarily on 3D CT-based in-room image guidance (3DCT-IGRT) systems. It will provide an overview and current standing of 3DCT-IGRT systems addressing the rationale, objectives, principles, applications, and process pathways, both clinical...... and technical for treatment delivery and quality assurance. These are reviewed for four categories of solutions; kV CT and kV CBCT (cone-beam CT) as well as MV CT and MV CBCT. It will also provide a framework and checklist to consider the capability and functionality of these systems as well as the resources...... needed for implementation. Two different but typical clinical cases (tonsillar and prostate cancer) using 3DCT-IGRT are illustrated with workflow processes via feedback questionnaires from several large clinical centres currently utilizing these systems. The feedback from these clinical centres...

  1. Spectral ladar: towards active 3D multispectral imaging

    Science.gov (United States)

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions and up to hundreds of meters in distance. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects when not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.

  2. GPU-accelerated denoising of 3D magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
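
    A CPU-side sketch of the parameter evaluation described above (bilateral denoising of a 2D slice scored against a reference with MSE and mean SSIM) is given below using scikit-image rather than the GPU kernels of the study; the parameter grids and the assumption of float images in [0, 1] are illustrative.

```python
# Grid-evaluate bilateral-filter parameters against a reference image.
import itertools
from skimage.restoration import denoise_bilateral
from skimage.metrics import mean_squared_error, structural_similarity

def evaluate(noisy, reference, sigma_colors=(0.05, 0.1), sigma_spatials=(1, 2, 3)):
    results = []
    for sc, ss in itertools.product(sigma_colors, sigma_spatials):
        den = denoise_bilateral(noisy, sigma_color=sc, sigma_spatial=ss)
        results.append({
            "sigma_color": sc,
            "sigma_spatial": ss,
            "mse": mean_squared_error(reference, den),
            "mssim": structural_similarity(reference, den, data_range=1.0),
        })
    return sorted(results, key=lambda r: -r["mssim"])   # best MSSIM first
```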

  3. Integral Imaging Based 3-D Image Encryption Algorithm Combined with Cellular Automata

    Directory of Open Access Journals (Sweden)

    X. W. Li

    2013-08-01

    Full Text Available A novel optical encryption method is proposed in this paper to achieve 3-D image encryption. The proposed encryption algorithm combines the use of computational integral imaging (CII) and linear-complemented maximum-length cellular automata (LC-MLCA) to encrypt a 3D image. In the encryption process, the 2-D elemental image array (EIA) recorded by light rays of the 3-D image is mapped inversely through the lenslet array according to ray tracing theory. Next, the 2-D EIA is encrypted by the LC-MLCA algorithm. When decrypting the encrypted image, the 2-D EIA is recovered by the LC-MLCA. Using the computational integral imaging reconstruction (CIIR) technique, a 3-D object is subsequently reconstructed on the output plane from the 2-D recovered EIA. Because the 2-D EIA is composed of a number of elemental images having their own perspectives of a 3-D image, even if the encrypted image is seriously harmed, the 3-D image can be successfully reconstructed with only partial data. To verify the usefulness of the proposed algorithm, we perform computational experiments and present the experimental results for various attacks. The experiments demonstrate that the proposed encryption method is valid and exhibits strong robustness and security.

  4. Improved understanding of brain morphology through 3D printing: A brief guide

    OpenAIRE

    Madan, Christopher

    2016-01-01

    Brain morphology can provide insights into inter-individual differences. In the present guide, we outline the steps for generating a print-ready 3D model of brain structures from a standard T1-weighted structural MRI volume. By improving our understanding of brain morphology, we hope to enhance teaching and scientific communication, as well as aid in the development of novel measures of brain morphology. The present guide details the steps for generating a print-ready 3D model of brain ...

  5. Interventional spinal procedures guided and controlled by a 3D rotational angiographic unit

    Energy Technology Data Exchange (ETDEWEB)

    Pedicelli, Alessandro; Verdolotti, Tommaso; Desiderio, Flora; D' Argento, Francesco; Colosimo, Cesare; Bonomo, Lorenzo [Catholic University of Rome, A. Gemelli Hospital, Department of Bioimaging and Radiological Sciences, Rome (Italy); Pompucci, Angelo [Catholic University of Rome, A. Gemelli Hospital, Department of Neurotraumatology, Rome (Italy)

    2011-12-15

    The aim of this paper is to demonstrate the usefulness of 2D multiplanar reformatted images (MPR) obtained from rotational acquisitions with cone-beam computed tomography technology during percutaneous extra-vascular spinal procedures performed in the angiography suite. We used a 3D rotational angiographic unit with a flat-panel detector. MPR images were obtained from a rotational acquisition of 8 s (240 images at 30 fps) with a tube rotation of 180°, followed by 5 s of post-processing on a local workstation. Multislice CT (MSCT) is the best guidance system for spinal approaches, permitting direct tomographic visualization of each spinal structure. Many operators, however, are trained with fluoroscopy; it is less expensive, allows real-time guidance, and in many centers the angiography suite is more frequently available for percutaneous procedures. We present our 6-year experience in fluoroscopy-guided spinal procedures, which were performed under different conditions using MPR images. We illustrate cases of vertebroplasty, epidural injections, selective foraminal nerve root block, facet block, percutaneous treatment of disc herniation and spine biopsy, all performed with the help of MPR images for guidance and control in the event of difficult or anatomically complex access. The integrated use of "CT-like" MPR images allows the execution of spinal procedures under fluoroscopy guidance alone in all cases of dorso-lumbar access, with evident limitation of risks and complications, and without need for recourse to MSCT guidance, thus eliminating CT-room time (often bearing high diagnostic charges), and avoiding organizational problems for procedures that need, for example, combined use of a C-arm in the CT room. (orig.)

  6. C-arm CT-guided 3D navigation of percutaneous interventions; C-Bogen-CT-unterstuetzte 3D-Navigation perkutaner Interventionen

    Energy Technology Data Exchange (ETDEWEB)

    Becker, H.C.; Meissner, O.; Waggershauser, T. [Klinikum der Ludwig-Maximilians-Universitaet Muenchen, Campus Grosshadern, Institut fuer Klinische Radiologie, Muenchen (Germany)

    2009-09-15

    So far, C-arm CT images have predominantly been used for precise guidance of endovascular or intra-arterial therapy. A novel combined 3D-navigation C-arm system now also allows cross-sectional and fluoroscopy-controlled interventions. Studies have reported successful CT-image-guided navigation with C-arm systems in vertebroplasty. Insertion of radiofrequency ablation probes is also conceivable for lung and liver tumors that have been labelled with lipiodol. In the future, C-arm CT-based navigation systems will probably allow simplified and safer complex interventions while simultaneously reducing radiation exposure. (orig.)

  7. High resolution 3D imaging of synchrotron generated microbeams

    Energy Technology Data Exchange (ETDEWEB)

    Gagliardi, Frank M., E-mail: frank.gagliardi@wbrc.org.au [Alfred Health Radiation Oncology, The Alfred, Melbourne, Victoria 3004, Australia and School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia); Cornelius, Iwan [Imaging and Medical Beamline, Australian Synchrotron, Clayton, Victoria 3168, Australia and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales 2500 (Australia); Blencowe, Anton [Division of Health Sciences, School of Pharmacy and Medical Sciences, The University of South Australia, Adelaide, South Australia 5000, Australia and Division of Information Technology, Engineering and the Environment, Mawson Institute, University of South Australia, Mawson Lakes, South Australia 5095 (Australia); Franich, Rick D. [School of Applied Sciences and Health Innovations Research Institute, RMIT University, Melbourne, Victoria 3000 (Australia); Geso, Moshi [School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia)

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.
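
    The profile measurements mentioned above (microbeam FWHM and peak-to-valley dose ratio) can be sketched in a few lines once a 1D dose profile has been sampled from the confocal images; the linear interpolation of the half-maximum crossings and the synthetic Gaussian profile are assumptions, not the study's analysis code.

    ```python
    import numpy as np

    def fwhm(x, profile):
        """Full width at half maximum via linear interpolation of the
        half-maximum crossings around the highest peak."""
        half = profile.max() / 2.0
        idx = np.where(profile >= half)[0]
        i0, i1 = idx[0], idx[-1]
        xl = np.interp(half, [profile[i0 - 1], profile[i0]], [x[i0 - 1], x[i0]])
        xr = np.interp(half, [profile[i1 + 1], profile[i1]], [x[i1 + 1], x[i1]])
        return xr - xl

    def pvdr(profile, peak_mask, valley_mask):
        """Peak-to-valley dose ratio from index masks of peak and valley regions."""
        return profile[peak_mask].mean() / profile[valley_mask].mean()

    # Synthetic profile: one ~50 um wide beam on a 1 um grid, low valley dose
    x = np.arange(0.0, 400.0)                         # position in micrometres
    profile = 0.05 + np.exp(-((x - 200) / (50 / 2.355)) ** 2 / 2)
    print(round(fwhm(x, profile), 1), "um")           # roughly 50 um
    print(round(pvdr(profile, profile > 0.5, profile < 0.06), 1))
    ```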

  8. Preparing diagnostic 3D images for image registration with planning CT images

    International Nuclear Information System (INIS)

    Purpose: Pre-radiotherapy (pre-RT) tomographic images acquired for diagnostic purposes often contain important tumor and/or normal tissue information that is poorly defined or absent in planning CT images. Our two years of clinical experience have shown that computer-assisted 3D registration of pre-RT images with planning CT images often plays an indispensable role in accurate treatment volume definition. Often the only available format of the diagnostic images is film, from which the original 3D digital data must be reconstructed. In addition, any digital data, whether reconstructed or not, must be put into a form suitable for incorporation into the treatment planning system. The purpose of this investigation was to identify all problems that must be overcome before these data are suitable for clinical use. Materials and Methods: In the past two years we have 3D-reconstructed 300 diagnostic images from film and digital sources. As each problem was discovered, we built a software tool to correct it. In time we collected a large set of such tools and found that they must be applied in a specific order to achieve the correct reconstruction. Finally, a toolkit (ediScan) was built that made all these tools available in the proper manner via a pleasant yet efficient mouse-based user interface. Results: Problems we discovered included different magnifications, shifted display centers, non-parallel image planes, image planes not perpendicular to the long axis of the table-top (shearing), irregularly spaced scans, non-contiguous scan volumes, multiple slices per film, different orientations for slice axes (e.g. left-right reversal), slices printed at window settings corresponding to tissues of interest for diagnostic purposes, and printing artifacts. We have learned that the specific steps to correct these problems must be applied in a specific order. We also found that fast feedback and large image capacity (at least 2000 x 2000 12-bit pixels) are essential for practical application

  9. ROIC for gated 3D imaging LADAR receiver

    Science.gov (United States)

    Chen, Guoqiang; Zhang, Junling; Wang, Pan; Zhou, Jie; Gao, Lei; Ding, Ruijun

    2013-09-01

    Time-of-flight laser range finding, deep-space communications and scanning video imaging are three applications requiring very low noise optical receivers to achieve detection of fast and weak optical signals. HgCdTe electron-initiated avalanche photodiodes (e-APDs) in linear multiplication mode are the detector of choice thanks to their high quantum efficiency, high gain at low bias, high bandwidth and low noise factor. In this project, a readout integrated circuit (ROIC) for a hybrid e-APD focal plane array (FPA) with 100 um pitch was designed for a gated 3D-LADAR optical receiver. The ROIC works at 77 K and includes the unit cell circuit, column-level circuit, timing control, bias circuit and output driver. The unit cell circuit is a key component, consisting of a preamplifier, correlated double sampling (CDS), bias circuit and timing control module. The preamplifier uses a capacitor-feedback transimpedance amplifier (CTIA) structure with two capacitors that offer switchable capacitance for passive/active dual-mode imaging. The main block of the column-level circuit is a precision multiply-by-two circuit implemented with switched capacitors; switched-capacitor circuits are well suited to the signal processing of a ROIC because of their working characteristics. The output driver uses a simple unity-gain buffer; because the signal is amplified in the column-level circuit, the amplifier in the unity-gain buffer is a rail-to-rail amplifier. In active imaging mode the integration time is 80 ns, and for integration currents from 200 nA to 4 uA the circuit shows a nonlinearity of less than 1%. In passive imaging mode the integration time is 150 ns, and integration currents from 1 nA to 20 nA show a nonlinearity of less than 1%.
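
    A small numeric sketch of the integration behaviour quoted above: the CTIA output is modelled as V = I·t_int/C_f with an assumed feedback capacitance and an assumed soft-saturation term, and nonlinearity is taken as the maximum deviation from a best-fit straight line as a percentage of full scale. The capacitance value and saturation model are illustrative assumptions, not the designed circuit.

    ```python
    import numpy as np

    def ctia_output(i_in, t_int, c_f, v_sat=3.0):
        """Ideal CTIA integration with an assumed compressive soft saturation
        toward the supply rail (for illustration only)."""
        v_ideal = i_in * t_int / c_f
        return v_sat * (1.0 - np.exp(-v_ideal / v_sat))

    def nonlinearity(i_in, v_out):
        """Max deviation from the best straight-line fit, as % of full scale."""
        fit = np.polyval(np.polyfit(i_in, v_out, 1), i_in)
        return 100.0 * np.max(np.abs(v_out - fit)) / (v_out.max() - v_out.min())

    # Active mode: 80 ns integration, currents 200 nA - 4 uA (from the abstract)
    i_active = np.linspace(200e-9, 4e-6, 50)
    v_active = ctia_output(i_active, t_int=80e-9, c_f=200e-15)   # 200 fF assumed
    print(round(nonlinearity(i_active, v_active), 2), "%")
    ```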

  10. 3D Seismic Imaging over a Potential Collapse Structure

    Science.gov (United States)

    Gritto, Roland; O'Connell, Daniel; Elobaid Elnaiem, Ali; Mohamed, Fathelrahman; Sadooni, Fadhil

    2016-04-01

    The Middle-East has seen a recent boom in construction including the planning and development of complete new sub-sections of metropolitan areas. Before planning and construction can commence, however, the development areas need to be investigated to determine their suitability for the planned project. Subsurface parameters such as the type of material (soil/rock), thickness of top soil or rock layers, depth and elastic parameters of basement, for example, comprise important information needed before a decision concerning the suitability of the site for construction can be made. A similar problem arises in environmental impact studies, when subsurface parameters are needed to assess the geological heterogeneity of the subsurface. Environmental impact studies are typically required for each construction project, particularly for the scale of the aforementioned building boom in the Middle East. The current study was conducted in Qatar at the location of a future highway interchange to evaluate a suite of 3D seismic techniques for their effectiveness in interrogating the subsurface for the presence of karst-like collapse structures. The survey comprised an area of approximately 10,000 m² and consisted of 550 source and 192 receiver locations. The seismic source was an accelerated weight drop while the geophones consisted of 3-component 10 Hz velocity sensors. To date, we have analyzed over 100,000 P-wave phase arrivals and performed high-resolution 3-D tomographic imaging of the shallow subsurface. Furthermore, dispersion analysis of recorded surface waves will be performed to obtain S-wave velocity profiles of the subsurface. Both results, in conjunction with density estimates, will be utilized to determine the elastic moduli of the subsurface rock layers.

  11. 3D imaging of enzymes working in situ.

    Science.gov (United States)

    Jamme, F; Bourquin, D; Tawil, G; Viksø-Nielsen, A; Buléon, A; Réfrégiers, M

    2014-06-01

    Today, development of slowly digestible food with positive health impact and production of biofuels are matters of intense research. The latter is achieved via enzymatic hydrolysis of starch or biomass such as lignocellulose. Label-free imaging, using UV autofluorescence, provides a great tool to follow a single enzyme when acting on a non-UV-fluorescent substrate. In this article, we report synchrotron DUV fluorescence in 3-dimensional imaging to visualize in situ the diffusion of enzymes on a solid substrate. The degradation pathway of single starch granules by two amylases optimized for biofuel production and industrial starch hydrolysis was followed by tryptophan autofluorescence (excitation at 280 nm, emission filter at 350 nm). The new setup has been specially designed and developed for a 3D representation of the enzyme-substrate interaction during hydrolysis. Thus, this tool is particularly effective for improving knowledge and understanding of enzymatic hydrolysis of solid substrates such as starch and lignocellulosic biomass. It could open up the way to new routes in the field of green chemistry and sustainable development, that is, in biotechnology, biorefining, or biofuels. PMID:24796213

  12. Complex adaptation-based LDR image rendering for 3D image reconstruction

    Science.gov (United States)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.

  13. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    Science.gov (United States)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically-assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 sec for each estimation of pose), which can be improved by implementation in C++. Error analysis produced 3-mm distance error and 2.5 degrees of orientation error on average. The sources of these errors come from 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
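
    The pose-recovery step (matching endoscope video features to the 3D virtual model and solving for camera position and orientation) can be illustrated with OpenCV's PnP solver on synthetic 3D-2D correspondences. This stands in for, and is much simpler than, the constrained bundle adjustment used in the paper; the intrinsics, model points and simulated pose are assumptions.

    ```python
    import numpy as np
    import cv2

    # Assumed intrinsics of the scanning endoscope camera
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])

    # Points on the 3D virtual model of the surgical field (mm), illustrative
    object_pts = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0],
                           [10, 10, 5], [5, 5, 10], [0, 10, 15]], dtype=np.float64)

    # Simulate a "measured" camera pose and project the model points into a frame
    rvec_true = np.array([[0.1], [-0.2], [0.05]])
    tvec_true = np.array([[2.0], [-3.0], [60.0]])
    image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, None)

    # Recover the pose from the 3D-2D matches
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    print(ok, np.round(tvec.ravel(), 3))   # expect approximately [2, -3, 60]
    ```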

  14. Fast 3D T1-weighted brain imaging at 3 Tesla with modified 3D FLASH sequence

    International Nuclear Information System (INIS)

    Longitudinal relaxation times (T1) of white and gray matter become close at high magnetic field. Therefore, classical T1-sensitive methods, like spoiled FLASH, fail to give sufficient contrast in human brain imaging at 3 Tesla. An excellent T1 contrast can be achieved at high field by gradient echo imaging with a preparatory inversion pulse. The inversion recovery (IR) preparation can be combined with fast 2D gradient echo scans. In this paper we present an application of this technique to rapid 3-dimensional imaging. The new technique, called 3D SIR FLASH, was implemented on a Bruker MSLX system equipped with a 3T, 90 cm horizontal bore magnet installed at the Centre Hospitalier in Rouffach, France. The new technique was used for comparison of MRI images of healthy volunteers obtained with traditional 3D imaging. White and gray matter are clearly distinguishable when 3D SIR FLASH is used. The total acquisition time for a 128x128x128 image was 5 minutes. Three-dimensional visualization with facet representation of surfaces and oblique sections was done off-line on the INDIGO Extreme workstation. The new technique is widely used at FORENAP, Centre Hospitalier in Rouffach, Alsace. (author)

  15. Optimized 3D Street Scene Reconstruction from Driving Recorder Images

    Directory of Open Access Journals (Sweden)

    Yongjun Zhang

    2015-07-01

    Full Text Available The paper presents an automatic region-detection-based method to reconstruct street scenes from driving recorder images. The driving recorder in this paper is a dashboard camera that collects images while the motor vehicle is moving. An enormous number of moving vehicles are included in the collected data because the typical recorders are often mounted in the front of moving vehicles and face the forward direction, which can make matching points on vehicles and guardrails unreliable. Believing that utilizing these image data can reduce street scene reconstruction and updating costs because of their low price, wide use, and extensive shooting coverage, we propose a new method, called the Mask automatic detecting method, to improve the structure-from-motion reconstruction results. Note that we define vehicle and guardrail regions as “mask” in this paper since the features on them should be masked out to avoid poor matches. After the masked feature points are removed by our new method, the camera poses and sparse 3D points are reconstructed with the remaining matches. Our contrast experiments with the typical pipeline of structure from motion (SfM) reconstruction methods, such as Photosynth and VisualSFM, demonstrated that the Mask decreased the root-mean-square error (RMSE) of the pairwise matching results, which led to more accurate recovery of the relative camera poses. Removing features from the Mask also increased the accuracy of the point clouds by nearly 30%–40% and corrected the tendency of the typical methods to reconstruct several copies of a building when there was only one target building.
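
    The "Mask" idea, suppressing features inside vehicle and guardrail regions before matching, can be sketched with OpenCV by passing a binary mask to the feature detector so that keypoints in masked regions never enter the SfM pipeline; the detector choice (ORB) and the rectangle standing in for a detected vehicle are assumptions.

    ```python
    import numpy as np
    import cv2

    def masked_keypoints(gray, vehicle_boxes):
        """Detect ORB features only outside the 'mask' regions (vehicles,
        guardrails), so they cannot corrupt the SfM matching stage."""
        mask = np.full(gray.shape, 255, dtype=np.uint8)
        for x, y, w, h in vehicle_boxes:
            mask[y:y + h, x:x + w] = 0            # masked-out region
        orb = cv2.ORB_create(nfeatures=2000)
        keypoints, descriptors = orb.detectAndCompute(gray, mask)
        return keypoints, descriptors

    # Illustrative frame and one detected vehicle bounding box (x, y, w, h)
    frame = np.random.default_rng(1).integers(0, 256, (480, 640)).astype(np.uint8)
    kps, desc = masked_keypoints(frame, vehicle_boxes=[(200, 300, 150, 100)])
    print(len(kps), "keypoints outside the masked regions")
    ```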

  16. Analysis and dynamic 3D visualization of cerebral blood flow combining 3D and 4D MR image sequences

    Science.gov (United States)

    Forkert, Nils Daniel; Säring, Dennis; Fiehler, Jens; Illies, Till; Möller, Dietmar; Handels, Heinz

    2009-02-01

    In this paper we present a method for the dynamic visualization of cerebral blood flow. Spatio-temporal 4D magnetic resonance angiography (MRA) image datasets and 3D MRA datasets with high spatial resolution were acquired for the analysis of arteriovenous malformations (AVMs). One of the main tasks is the combination of the information of the 3D and 4D MRA image sequences. Initially, in the 3D MRA dataset the vessel system is segmented and a 3D surface model is generated. Then, temporal intensity curves are analyzed voxelwise in the 4D MRA image sequences. A curve fitting of the temporal intensity curves to a patient individual reference curve is used to extract the bolus arrival times in the 4D MRA sequences. After non-linear registration of both MRA datasets the extracted hemodynamic information is transferred to the surface model where the time points of inflow can be visualized color coded dynamically over time. The dynamic visualizations computed using the curve fitting method for the estimation of the bolus arrival times were rated superior compared to those computed using conventional approaches for bolus arrival time estimation. In summary the procedure suggested allows a dynamic visualization of the individual hemodynamic situation and better understanding during the visual evaluation of cerebral vascular diseases.
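
    The voxelwise curve-fitting step can be sketched by fitting a parametric bolus model to a temporal intensity curve and reading off the arrival time t0; a gamma-variate model is used here as a common stand-in (the paper fits to a patient-individual reference curve instead), and the frame rate and synthetic curve are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def gamma_variate(t, t0, A, alpha, beta):
        """Classic gamma-variate bolus model; zero before the arrival time t0."""
        dt = np.clip(t - t0, 0.0, None)
        return A * dt**alpha * np.exp(-dt / beta)

    # Synthetic 4D-MRA time-intensity curve for one voxel (1 frame / 0.5 s assumed)
    t = np.arange(0, 20, 0.5)
    true = gamma_variate(t, t0=4.0, A=3.0, alpha=2.0, beta=1.5)
    noisy = true + 0.05 * np.random.default_rng(2).standard_normal(t.size)

    p0 = (t[np.argmax(noisy)] - 2.0, 1.0, 2.0, 1.0)     # rough initial guess
    params, _ = curve_fit(gamma_variate, t, noisy, p0=p0,
                          bounds=([0, 0, 0.1, 0.1], [10, 100, 10, 10]))
    print("estimated bolus arrival time:", round(params[0], 2), "s")
    ```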

  17. 3D meshes of carbon nanotubes guide functional reconnection of segregated spinal explants.

    Science.gov (United States)

    Usmani, Sadaf; Aurand, Emily Rose; Medelin, Manuela; Fabbro, Alessandra; Scaini, Denis; Laishram, Jummi; Rosselli, Federica B; Ansuini, Alessio; Zoccolan, Davide; Scarselli, Manuela; De Crescenzi, Maurizio; Bosi, Susanna; Prato, Maurizio; Ballerini, Laura

    2016-07-01

    In modern neuroscience, significant progress in developing structural scaffolds integrated with the brain is provided by the increasing use of nanomaterials. We show that a multiwalled carbon nanotube self-standing framework, consisting of a three-dimensional (3D) mesh of interconnected, conductive, pure carbon nanotubes, can guide the formation of neural webs in vitro where the spontaneous regrowth of neurite bundles is molded into a dense random net. This morphology of the fiber regrowth shaped by the 3D structure supports the successful reconnection of segregated spinal cord segments. We further observed in vivo the adaptability of these 3D devices in a healthy physiological environment. Our study shows that 3D artificial scaffolds may drive local rewiring in vitro and hold great potential for the development of future in vivo interfaces. PMID:27453939

  18. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    Energy Technology Data Exchange (ETDEWEB)

    Wong, S.T.C. [Univ. of California, San Francisco, CA (United States)

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now making their way into surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  19. Evaluation of a System for High-Accuracy 3D Image-Based Registration of Endoscopic Video to C-Arm Cone-Beam CT for Image-Guided Skull Base Surgery

    OpenAIRE

    Mirota, Daniel J.; Uneri, Ali; Schafer, Sebastian; Nithiananthan, Sajendra; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; TAYLOR, RUSSELL H.; Hager, Gregory D.; Siewerdsen, Jeffrey H.

    2013-01-01

    The safety of endoscopic skull base surgery can be enhanced by accurate navigation in preoperative computed tomography (CT) or, more recently, intraoperative cone-beam CT (CBCT). The ability to register real-time endoscopic video with CBCT offers an additional advantage by rendering information directly within the visual scene to account for intraoperative anatomical change. However, tracker localization error (~ 1–2 mm) limits the accuracy with which video and tomographic images can be regis...

  20. 3D image analysis of a volcanic deposit

    Science.gov (United States)

    de Witte, Y.; Vlassenbroeck, J.; Vandeputte, K.; Dewanckele, J.; Cnudde, V.; van Hoorebeke, L.; Ernst, G.; Jacobs, P.

    2009-04-01

    During the last decades, X-ray micro CT has become a well established technique for non-destructive testing in a wide variety of research fields. Using a series of X-ray transmission images of the sample at different projection angles, a stack of 2D cross-sections is reconstructed, resulting in a 3D volume representing the X-ray attenuation coefficients of the sample. Since the attenuation coefficient of a material depends on its density and atomic number, this volume provides valuable information about the internal structure and composition of the sample. Although much qualitative information can be derived directly from this 3D volume, researchers usually require more quantitative results to be able to provide a full characterization of the sample under investigation. This type of information needs to be retrieved using specialized image processing software. For most samples, it is imperative that this processing is performed on the 3D volume as a whole, since a sequence of 2D cross sections usually forms an inadequate approximation of the actual structure. The complete processing of a volume consists of three sequential steps. First, the volume is segmented into a set of objects. What these objects represent depends on what property of the sample needs to be analysed. The objects can be for instance concavities, dense inclusions or the matrix of the sample. When dealing with noisy data, it might be necessary to filter the data before applying the segmentation. The second step is the separation of connected objects into a set of smaller objects. This is necessary when objects appear to be connected because of the limited resolution and contrast of the scan. Separation can also be useful when the sample contains a network structure and one wants to study the individual cells of the network. The third and last step consists of the actual analysis of the various objects to derive the different parameters of interest. While some parameters require extensive
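
    The three-step workflow described above (segment, separate connected objects, analyse) can be sketched with SciPy's ndimage tools; global thresholding stands in for the segmentation step, connected-component labelling for the separation step (a watershed would be needed for touching objects), and the synthetic volume is an assumption.

    ```python
    import numpy as np
    from scipy import ndimage as ndi

    # Synthetic micro-CT volume: low-density matrix with two dense inclusions
    rng = np.random.default_rng(3)
    vol = rng.normal(0.2, 0.02, size=(64, 64, 64))
    vol[10:20, 10:20, 10:20] += 0.6
    vol[35:50, 30:45, 30:40] += 0.6

    # Step 1: segment dense inclusions by (assumed) global thresholding
    binary = vol > 0.5

    # Step 2: separate connected objects by labelling the 3D binary volume
    labels, n_objects = ndi.label(binary)

    # Step 3: analyse each object, e.g. voxel count (volume) and centroid
    volumes = ndi.sum(binary, labels, index=range(1, n_objects + 1))
    centroids = ndi.center_of_mass(binary, labels, index=range(1, n_objects + 1))
    print(n_objects, volumes, centroids)
    ```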

  1. Computational ghost imaging versus imaging laser radar for 3D imaging

    CERN Document Server

    Hardy, Nicholas D

    2012-01-01

    Ghost imaging has been receiving increasing interest for possible use as a remote-sensing system. There has been little comparison, however, between ghost imaging and the imaging laser radars with which it would be competing. Toward that end, this paper presents a performance comparison between a pulsed, computational ghost imager and a pulsed, floodlight-illumination imaging laser radar. Both are considered for range-resolving (3D) imaging of a collection of rough-surfaced objects at standoff ranges in the presence of atmospheric turbulence. Their spatial resolutions and signal-to-noise ratios are evaluated as functions of the system parameters, and these results are used to assess each system's performance trade-offs. Scenarios in which a reflective ghost-imaging system has advantages over a laser radar are identified.
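
    The computational ghost-imaging reconstruction at the centre of this comparison can be sketched as correlating the single-pixel ("bucket") returns with the known illumination patterns; this is a noise-free, 2D (not range-resolved) simulation with an assumed pattern count, not the paper's analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Target reflectivity (the "scene"), 32 x 32
    scene = np.zeros((32, 32)); scene[8:24, 12:20] = 1.0

    # Known computational illumination patterns and bucket measurements
    n_patterns = 4000
    patterns = rng.random((n_patterns, 32, 32))
    bucket = np.tensordot(patterns, scene, axes=([1, 2], [0, 1]))  # total return per shot

    # Ghost image: correlation of bucket fluctuations with pattern fluctuations
    ghost = np.tensordot(bucket - bucket.mean(), patterns - patterns.mean(axis=0),
                         axes=(0, 0)) / n_patterns
    print(np.corrcoef(ghost.ravel(), scene.ravel())[0, 1])  # clearly > 0
    ```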

  2. Orthodontic treatment plan changed by 3D images

    International Nuclear Information System (INIS)

    Clinical application of CBCT is most often called for in dental phenomena such as impacted teeth, hyperodontia, transposition, ankylosis or root resorption and other pathologies of the maxillofacial area. The goal we set ourselves is to show how the information from 3D images changes the protocol of the orthodontic treatment. As material, we present six of our clinical cases and the change in the treatment plan that was adopted after analyzing the information carried by the three planes of CBCT. These cases are casuistic in orthodontic practice and require an individual approach to each of them during analysis and decision-making. Our discussion addresses the localization of impacted teeth, where we need to evaluate their vertical depth and mesiodistal relationship with the bone structures. In patients with hyperodontia, the assessment is of utmost importance in deciding which of the teeth to extract and which to align into the dental arch. The conclusion we draw is that this diagnostic information is essential for decisions about the treatment plan. Exact imaging will lead to a better treatment plan and more predictable results. (authors) Key words: CBCT. IMPACTED CANINES. HYPERODONTIA. TRANSPOSITION

  3. Automated 3D renal segmentation based on image partitioning

    Science.gov (United States)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context, and still comes at a vast expense of user time. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients, true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of Hounsfield unit distribution in the scan, and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
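
    The volume-based and size-based measures quoted above can be computed directly from binary masks; a minimal sketch follows (the mask shapes are made up):

    ```python
    import numpy as np

    def overlap_metrics(seg, gold):
        """Volume-based similarity between a binary automated segmentation
        and a binary gold-standard mask."""
        seg, gold = seg.astype(bool), gold.astype(bool)
        inter = np.logical_and(seg, gold).sum()
        union = np.logical_or(seg, gold).sum()
        dice = 2.0 * inter / (seg.sum() + gold.sum())
        jaccard = inter / union
        tpvf = inter / gold.sum()                      # true positive volume fraction
        rvd = (seg.sum() - gold.sum()) / gold.sum()    # relative volume difference
        return dice, jaccard, tpvf, rvd

    gold = np.zeros((50, 50, 50), bool); gold[10:40, 10:40, 10:40] = True
    seg = np.zeros_like(gold);           seg[12:40, 10:38, 11:40] = True
    print([round(m, 3) for m in overlap_metrics(seg, gold)])
    ```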

  4. Investigating the guiding of streamers in nitrogen/oxygen mixtures with 3D simulations

    Science.gov (United States)

    Teunissen, Jannis; Nijdam, Sander; Takahashi, Eiichi; Ebert, Ute

    2014-10-01

    Recent experiments by S. Nijdam and E. Takahashi have demonstrated that streamers can be guided by weak pre-ionization in nitrogen/oxygen mixtures, as long as there is not too much oxygen (less than 1%). The pre-ionization was created by a laser beam, and was orders of magnitude lower than the density in a streamer channel. Here, we will study the guiding of streamers with 3D numerical simulations. First, we present simulations that can be compared with the experiments and confirm that the laser pre-ionization does not introduce space charge effects by itself. Then we investigate topics such as: the conditions under which guiding can occur; how photoionization reduces the guiding at higher oxygen concentrations; and whether guided streamers keep their propagation direction outside the pre-ionization. JT was supported by STW Project 10755, SN by the FY2012 Researcher Exchange Program between JSPS and NWO, and ET by JSPS KAKENHI Grant Number 24560249.

  5. A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images

    OpenAIRE

    Sturm, Peter; Maybank, Steve

    1999-01-01

    We present an approach for 3D reconstruction of objects from a single image. Obviously, constraints on the 3D structure are needed to perform this task. Our approach is based on user-provided coplanarity, perpendicularity and parallelism constraints. These are used to calibrate the image and perform 3D reconstruction. The method is described in detail and results are provided.

  6. Dynamic 3D cell rearrangements guided by a fibronectin matrix underlie somitogenesis.

    Directory of Open Access Journals (Sweden)

    Gabriel G Martins

    Full Text Available Somites are transient segments formed in a rostro-caudal progression during vertebrate development. In chick embryos, segmentation of a new pair of somites occurs every 90 minutes and involves a mesenchyme-to-epithelium transition of cells from the presomitic mesoderm. Little is known about the cellular rearrangements involved, and, although it is known that the fibronectin extracellular matrix is required, its actual role remains elusive. Using 3D and 4D imaging of somite formation we discovered that somitogenesis consists of a complex choreography of individual cell movements. Epithelialization starts medially with the formation of a transient epithelium of cuboidal cells, followed by cell elongation and reorganization into a pseudostratified epithelium of spindle-shaped epitheloid cells. Mesenchymal cells are then recruited to this medial epithelium through accretion, a phenomenon that spreads to all sides, except the lateral side of the forming somite, which epithelializes by cell elongation and intercalation. Surprisingly, an important contribution to the somite epithelium also comes from the continuous egression of mesenchymal cells from the core into the epithelium via its apical side. Inhibition of fibronectin matrix assembly first slows down the rate, and then halts somite formation, without affecting pseudopodial activity or cell body movements. Rather, cell elongation, centripetal alignment, N-cadherin polarization and egression are impaired, showing that the fibronectin matrix plays a role in polarizing and guiding the exploratory behavior of somitic cells. To our knowledge, this is the first 4D in vivo recording of a full mesenchyme-to-epithelium transition. This approach brought new insights into this event and highlighted the importance of the extracellular matrix as a guiding cue during morphogenesis.

  7. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    Science.gov (United States)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43°+/-1.19°, 0.45°+/-2.17°, 0.23°+/-1.05°) and (0.03+/-0.55, -0.03+/-0.54, -2.73+/-1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy of 0.53+/-0.30 mm distance errors.

  8. Terahertz Quantum Cascade Laser Based 3D Imaging Project

    Data.gov (United States)

    National Aeronautics and Space Administration — LongWave Photonics proposes a terahertz quantum-cascade laser based swept-source optical coherence tomography (THz SS-OCT) system for single-sided, 3D,...

  9. Holographic Image Plane Projection Integral 3D Display

    Data.gov (United States)

    National Aeronautics and Space Administration — In response to NASA's need for a 3D virtual reality environment providing scientific data visualization without special user devices, Physical Optics Corporation...

  10. GammaModeler 3-D gamma-ray imaging technology

    International Nuclear Information System (INIS)

    The 3-D GammaModeler™ system was used to survey a portion of the facility and provide 3-D visual and radiation representation of contaminated equipment located within the facility. The 3-D GammaModeler™ system software was used to deconvolve extended sources into a series of point sources, locate the positions of these sources in space and calculate the 30 cm dose rates for each of these sources. Localization of the sources in three dimensions provides information on source locations interior to the visual objects and provides a better estimate of the source intensities. The three-dimensional representation of the objects can be made transparent in order to visualize sources located within the objects. Positional knowledge of all the sources can be used to calculate a map of the radiation in the canyon. The use of 3-D visual and gamma-ray information supports improved planning and decision-making, and aids in communications with regulators and stakeholders.

  11. 3-D Reconstruction of Medical Image Using Wavelet Transform and Snake Model

    Directory of Open Access Journals (Sweden)

    Jinyong Cheng

    2009-12-01

    Full Text Available Medical image segmentation is an important step in 3-D reconstruction, and 3-D reconstruction from medical images is an important application of computer graphics and biomedical image processing. An improved image segmentation method which is suitable for 3-D reconstruction is presented in this paper. A 3-D reconstruction algorithm is used to reconstruct the 3-D model from medical images. A rough edge is first obtained by a multi-scale wavelet transform. Starting from this rough edge, an improved gradient vector flow snake model is used to find the object contour in the image. In the experiments, we reconstruct 3-D models of the kidney, liver and brain putamen. The experimental results indicate that the new algorithm can produce accurate 3-D reconstructions.

  12. 3-D Adaptive Sparsity Based Image Compression With Applications to Optical Coherence Tomography.

    Science.gov (United States)

    Fang, Leyuan; Li, Shutao; Kang, Xudong; Izatt, Joseph A; Farsiu, Sina

    2015-06-01

    We present a novel general-purpose compression method for tomographic images, termed 3D adaptive sparse representation based compression (3D-ASRC). In this paper, we focus on applications of 3D-ASRC for the compression of ophthalmic 3D optical coherence tomography (OCT) images. The 3D-ASRC algorithm exploits correlations among adjacent OCT images to improve compression performance, yet is sensitive to preserving their differences. Due to the inherent denoising mechanism of the sparsity-based 3D-ASRC, the quality of the compressed images is often better than that of the raw images they are based on. Experiments on clinical-grade retinal OCT images demonstrate the superiority of the proposed 3D-ASRC over other well-known compression methods. PMID:25561591

  13. Superimposing of virtual graphics and real image based on 3D CAD information

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper proposes methods for transforming 3D CAD models into 2D graphics, recognizing 3D objects by their features, and superimposing a virtual environment (VE) built in the computer onto real images taken by a CCD camera, and presents computer simulation results.

  14. 3-D Imaging Systems for Agricultural Applications-A Review.

    Science.gov (United States)

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  15. 3-D Imaging Systems for Agricultural Applications—A Review

    Directory of Open Access Journals (Sweden)

    Manuel Vázquez-Arellano

    2016-04-01

    Full Text Available Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  16. Dense 3D Point Cloud Generation from UAV Images from Image Matching and Global Optimization

    Science.gov (United States)

    Rhee, S.; Kim, T.

    2016-06-01

    3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of UAV images. In this paper, we aim to apply image matching for generation of local point clouds over a pair or group of images and global optimization to combine local point clouds over the whole region of interest. We tried to apply two types of image matching, an object space-based matching technique and an image space-based matching technique, and to compare the performance of the two techniques. The object space-based matching used here sets a list of candidate height values for a fixed horizontal position in the object space. For each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) to apply image matching, defining a local match region in image or object space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. From experiments, we confirmed that through image matching and global optimization, 3D point clouds were generated successfully. However, results also revealed some limitations. In the case of image space-based matching, we observed some blanks in the 3D point clouds. In the case of object space-based matching, we observed more blunders than with image space-based matching, as well as noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing. We will further test our approach with more precise orientation parameters.
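
    The stereo coverage network used for optimal pair selection, a maximum spanning tree over tiepoint counts, can be sketched with networkx; the tiepoint counts are invented and the library choice is an assumption, since the paper does not name an implementation.

    ```python
    import networkx as nx

    # Number of tiepoints shared between UAV image pairs (illustrative values)
    tiepoint_counts = {
        ("img1", "img2"): 420, ("img2", "img3"): 510, ("img1", "img3"): 90,
        ("img3", "img4"): 300, ("img2", "img4"): 150, ("img4", "img5"): 270,
    }

    G = nx.Graph()
    for (a, b), n in tiepoint_counts.items():
        G.add_edge(a, b, weight=n)

    # Stereo coverage network: keep the pairs that maximise total tiepoint support
    mst = nx.maximum_spanning_tree(G, weight="weight")
    optimal_pairs = sorted(mst.edges(data="weight"))
    print(optimal_pairs)   # these pairs are matched first; local clouds merged later
    ```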

  17. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    Science.gov (United States)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are very important and indispensable in diagnosis. The mainstream approach is one in which a 3D image is reconstructed from a set of slice images, as in X-ray CT and MRI. However, these systems require large spaces and entail high costs. On the other hand, a low-cost and small-size 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2-D X-ray radiographic images. This system can be realized much more cheaply than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  18. Random Walk Based Segmentation for the Prostate on 3D Transrectal Ultrasound Images

    Science.gov (United States)

    Ma, Ling; Guo, Rongrong; Tian, Zhiqiang; Venkataraman, Rajesh; Sarkar, Saradwata; Liu, Xiabi; Nieh, Peter T.; Master, Viraj V.; Schuster, David M.; Fei, Baowei

    2016-01-01

    This paper proposes a new semi-automatic segmentation method for the prostate on 3D transrectal ultrasound images (TRUS) by combining the region and classification information. We use a random walk algorithm to express the region information efficiently and flexibly because it can avoid segmentation leakage and shrinking bias. We further use the decision tree as the classifier to distinguish the prostate from the non-prostate tissue because of its fast speed and superior performance, especially for a binary classification problem. Our segmentation algorithm is initialized with the user roughly marking the prostate and non-prostate points on the mid-gland slice which are fitted into an ellipse for obtaining more points. Based on these fitted seed points, we run the random walk algorithm to segment the prostate on the mid-gland slice. The segmented contour and the information from the decision tree classification are combined to determine the initial seed points for the other slices. The random walk algorithm is then used to segment the prostate on the adjacent slice. We propagate the process until all slices are segmented. The segmentation method was tested in 32 3D transrectal ultrasound images. Manual segmentation by a radiologist serves as the gold standard for the validation. The experimental results show that the proposed method achieved a Dice similarity coefficient of 91.37±0.05%. The segmentation method can be applied to 3D ultrasound-guided prostate biopsy and other applications.
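
    The per-slice region step can be sketched with scikit-image's random walker: seed labels (here hard-coded, standing in for the ellipse-fitted prostate and background seeds) drive the segmentation of a synthetic mid-gland slice; the decision-tree classifier and the slice-to-slice propagation are not reproduced.

    ```python
    import numpy as np
    from skimage.segmentation import random_walker

    # Synthetic "mid-gland TRUS slice": bright ellipse-like prostate, noisy background
    rng = np.random.default_rng(5)
    yy, xx = np.mgrid[0:128, 0:128]
    slice_img = ((((yy - 64) / 40.0) ** 2 + ((xx - 64) / 30.0) ** 2) < 1).astype(float)
    slice_img += 0.3 * rng.standard_normal(slice_img.shape)

    # Seed labels: 1 = prostate (centre), 2 = background (corners), 0 = unlabelled
    labels = np.zeros(slice_img.shape, dtype=int)
    labels[60:68, 60:68] = 1
    labels[:8, :8] = labels[-8:, -8:] = 2

    segmentation = random_walker(slice_img, labels, beta=130, mode="bf")
    print((segmentation == 1).sum(), "pixels labelled prostate")
    ```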

  1. 3D MODELLING FROM UNCALIBRATED IMAGES – A COMPARATIVE STUDY

    Directory of Open Access Journals (Sweden)

    Limi V L

    2014-03-01

    Full Text Available 3D modeling is a demanding area of research. Creating a 3D world from a sequence of images captured using different mobile cameras poses an additional challenge in this field. We plan to explore this area of computer vision to model a 3D world of Indian heritage sites for virtual tourism. In this paper, a comparative study of the existing methods used for 3D reconstruction of uncalibrated image sequences is presented. The study covers different scenarios of modeling 3D objects from uncalibrated images, which include community photo collections, images taken with an unknown camera, 3D modeling using two uncalibrated images, etc. The different methods available were studied and an overall view of the techniques used in each step of 3D reconstruction was obtained. The merits and demerits of each method were also compared.

  2. Multimodal Registration and Fusion for 3D Thermal Imaging

    Directory of Open Access Journals (Sweden)

    Moulay A. Akhloufi

    2015-01-01

    Full Text Available 3D vision is an area of computer vision that has attracted a lot of research interest and has been widely studied. In recent years we have witnessed an increasing interest from the industrial community. This interest is driven by the recent advances in 3D technologies, which enable high precision measurements at an affordable cost. With 3D vision techniques we can conduct advanced inspections of manufactured parts and metrology analysis. However, we are not able to detect subsurface defects. This kind of detection is achieved by other techniques, like infrared thermography. In this work, we present a new registration framework for 3D and thermal infrared multimodal fusion. The resulting fused data can be used for advanced 3D inspection in Nondestructive Testing and Evaluation (NDT&E) applications. The fusion permits visible-surface and subsurface inspections to be conducted simultaneously in the same process. Experimental tests were conducted with different materials. The obtained results are promising and show how these new techniques can be used efficiently in a combined NDT&E-Metrology analysis of manufactured parts, in areas such as aerospace and automotive.

  3. Implementation of wireless 3D stereo image capture system and 3D exaggeration algorithm for the region of interest

    Science.gov (United States)

    Ham, Woonchul; Song, Chulgyu; Lee, Kangsan; Badarch, Luubaatar

    2015-05-01

    In this paper, we introduce a mobile embedded system implemented for capturing stereo images based on two CMOS camera modules. We use WinCE as the operating system and capture the stereo image by using a device driver for the CMOS camera interface and DirectDraw API functions. We also comment on the GPU hardware and CUDA programming used to implement a 3D exaggeration algorithm for the ROI (region of interest) by adjusting and synthesizing the disparity values of the ROI in real time. We comment on the aperture pattern used for deblurring of the CMOS camera module, based on the Kirchhoff diffraction formula, and clarify why a sharper and clearer image can be obtained by blocking some portion of the aperture or by geometric sampling. The synthesized stereo image is monitored in real time on a shutter-glass-type three-dimensional LCD monitor, and the disparity values of each segment are analyzed to prove the validity of the ROI-emphasizing effect.

  4. 3D/2D image registration using weighted histogram of gradient directions

    Science.gov (United States)

    Ghafurian, Soheil; Hacihaliloglu, Ilker; Metaxas, Dimitris N.; Tan, Virak; Li, Kang

    2015-03-01

    Three-dimensional (3D) to two-dimensional (2D) image registration is crucial in many medical applications such as image-guided evaluation of musculoskeletal disorders. One of the key problems is to estimate the 3D CT-reconstructed bone model positions (translation and rotation) which maximize the similarity between the digitally reconstructed radiographs (DRRs) and the 2D fluoroscopic images using a registration method. This problem is computationally intensive due to a large search space and the complicated DRR generation process. Also, finding a similarity measure which converges to the global optimum instead of local optima adds to the challenge. To circumvent these issues, most existing registration methods need a manual initialization, which requires user interaction and is prone to human error. In this paper, we introduce a novel feature-based registration method using the weighted histogram of gradient directions of images. This method simplifies the computation by searching the parameter space (rotation and translation) sequentially rather than simultaneously. In our numeric simulation experiments, the proposed registration algorithm was able to achieve sub-millimeter and sub-degree accuracies. Moreover, our method is robust to the initial guess. It can tolerate up to +/- 90° rotation offset from the global optimal solution, which minimizes the need for human interaction to initialize the algorithm.
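
    The core feature named in the abstract, a histogram of gradient directions weighted by gradient magnitude, is straightforward to compute. The sketch below is only an assumed illustration of that feature and of one possible histogram similarity measure; the bin count, normalization and the measure actually used by the authors are not specified here.

      # Sketch (assumed details): a gradient-magnitude-weighted histogram of
      # gradient directions, usable as a global feature when comparing a DRR
      # against a fluoroscopic image.
      import numpy as np

      def weighted_gradient_histogram(image, n_bins=36):
          gy, gx = np.gradient(image.astype(np.float64))
          magnitude = np.hypot(gx, gy)
          direction = np.arctan2(gy, gx)                    # range [-pi, pi]
          hist, _ = np.histogram(direction, bins=n_bins,
                                 range=(-np.pi, np.pi), weights=magnitude)
          total = hist.sum()
          return hist / total if total > 0 else hist

      def histogram_similarity(h1, h2):
          # Normalized correlation between two direction histograms
          # (one possible similarity measure; the paper's exact choice may differ).
          a, b = h1 - h1.mean(), h2 - h2.mean()
          denom = np.linalg.norm(a) * np.linalg.norm(b)
          return float(a @ b / denom) if denom > 0 else 0.0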

  5. Imaging 3D strain field monitoring during hydraulic fracturing processes

    Science.gov (United States)

    Chen, Rongzhang; Zaghloul, Mohamed A. S.; Yan, Aidong; Li, Shuo; Lu, Guanyi; Ames, Brandon C.; Zolfaghari, Navid; Bunger, Andrew P.; Li, Ming-Jun; Chen, Kevin P.

    2016-05-01

    In this paper, we present a distributed fiber optic sensing scheme to study 3D strain fields inside concrete cubes during the hydraulic fracturing process. Optical fibers embedded in concrete were used to monitor 3D strain field build-up with external hydraulic pressures. High spatial resolution strain fields were interrogated by the in-fiber Rayleigh backscattering with 1-cm spatial resolution using optical frequency domain reflectometry. The fiber optic sensor scheme presented in this paper provides scientists and engineers with a unique laboratory tool to understand hydraulic fracturing processes in various rock formations and their impact on the environment.

  6. Quantitative 3-D imaging topogrammetry for telemedicine applications

    Science.gov (United States)

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to the serious considerations of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival data base of 'normal' shapes. The ability to generate 'topogrames' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D data base, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' to work as a surgeon's tireless assistants becomes imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with

  7. Image Reconstruction from 2D stack of MRI/CT to 3D using Shapelets

    OpenAIRE

    Arathi T; Latha Parameswaran

    2014-01-01

    Image reconstruction is an active research field, due to the increasing need for geometric 3D models in the movie industry, games, virtual environments and medical fields. 3D image reconstruction aims to arrive at the 3D model of an object from its 2D images taken at different viewing angles. Medical images are multimodal, which includes MRI, CT scan, PET and SPECT images. Of these, MRI and CT scan images of an organ are available as a stack of 2D images, taken at different a...

  8. Display of travelling 3D scenes from single integral-imaging capture

    Science.gov (United States)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate a sequence of images that simulates a camera travelling through the scene from a single integral image. The application of this method makes it possible to improve the quality of 3D display images and videos.

  9. Statistical skull models from 3D X-ray images

    CERN Document Server

    Berar, M; Bailly, G; Payan, Y; Berar, Maxime; Desvignes, Michel; Payan, Yohan

    2006-01-01

    We present 2 statistical models of the skull and mandible built upon an elastic registration method of 3D meshes. The aim of this work is to relate degrees of freedom of skull anatomy, as static relations are of main interest for anthropology and legal medicine. Statistical models can effectively provide reconstructions together with statistical precision. In our applications, patient-specific meshes of the skull and the mandible are high-density meshes, extracted from 3D CT scans. All our patient-specific meshes are registered in a subject-shared reference system using our 3D-to-3D elastic matching algorithm. Registration is based upon the minimization of a distance between the high-density mesh and a shared low-density mesh, defined on the vertices, in a multi-resolution approach. A Principal Component Analysis is performed on the normalised registered data to build a statistical linear model of the skull and mandible shape variation. The accuracy of the reconstruction is under the millimetre in the shape...
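
    The final modelling step, a principal component analysis over registered mesh vertices, can be sketched as follows. This is a generic linear shape model built with NumPy under the assumption that all meshes are already in point-to-point correspondence; it is not the authors' code, and the number of retained modes is an arbitrary choice.

      # Sketch of a linear statistical shape model: PCA on registered mesh vertex
      # coordinates (one row per subject, flattened x,y,z of corresponding vertices).
      import numpy as np

      def build_shape_model(vertex_matrix, n_modes=5):
          """vertex_matrix: (n_subjects, 3 * n_vertices), all meshes in correspondence."""
          mean_shape = vertex_matrix.mean(axis=0)
          centered = vertex_matrix - mean_shape
          # SVD of the centered data gives the principal modes of shape variation.
          _, singular_values, vt = np.linalg.svd(centered, full_matrices=False)
          modes = vt[:n_modes]                               # (n_modes, 3*n_vertices)
          variances = (singular_values[:n_modes] ** 2) / (vertex_matrix.shape[0] - 1)
          return mean_shape, modes, variances

      def synthesize_shape(mean_shape, modes, variances, coeffs):
          """Reconstruct a new shape from mode coefficients expressed in units of
          standard deviations along each mode."""
          return mean_shape + (np.asarray(coeffs) * np.sqrt(variances)) @ modes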

  10. Monopulse radar 3-D imaging and application in terminal guidance radar

    Science.gov (United States)

    Xu, Hui; Qin, Guodong; Zhang, Lina

    2007-11-01

    Monopulse radar 3-D imaging integrates ISAR, monopulse angle measurement and 3-D imaging processing to obtain the 3-D image which can reflect the real size of a target, which means any two of the three measurement parameters, namely azimuth difference beam, elevation difference beam and radial range, can be used to form a 3-D image of a 3-D object. The basic principles of monopulse radar 3-D imaging are briefly introduced, the effect of target carriage changes (including yaw, pitch, roll and movement of the target itself) on 3-D imaging and 3-D motion compensation based on the chirp rate μ and the Doppler frequency fd are analyzed, and the application of monopulse radar 3-D imaging to terminal guidance radars is forecasted. The computer simulation results show that monopulse radar 3-D imaging has apparent advantages in distinguishing a target from overside interference and in precise assault on a vital part of a target, and has great importance in terminal guidance radars.

  11. Optimization of spine surgery planning with 3D image templating tools

    Science.gov (United States)

    Augustine, Kurt E.; Huddleston, Paul M.; Holmes, David R., III; Shridharani, Shyam M.; Robb, Richard A.

    2008-03-01

    The current standard of care for patients with spinal disorders involves a thorough clinical history, physical exam, and imaging studies. Simple radiographs provide a valuable assessment but prove inadequate for surgery planning because of the complex 3-dimensional anatomy of the spinal column and the close proximity of the neural elements, large blood vessels, and viscera. Currently, clinicians still use primitive techniques such as paper cutouts, pencils, and markers in an attempt to analyze and plan surgical procedures. 3D imaging studies are routinely ordered prior to spine surgeries but are currently limited to generating simple, linear and angular measurements from 2D views orthogonal to the central axis of the patient. Complex spinal corrections require more accurate and precise calculation of 3D parameters such as oblique lengths, angles, levers, and pivot points within individual vertebra. We have developed a clinician friendly spine surgery planning tool which incorporates rapid oblique reformatting of each individual vertebra, followed by interactive templating for 3D placement of implants. The template placement is guided by the simultaneous representation of multiple 2D section views from reformatted orthogonal views and a 3D rendering of individual or multiple vertebrae enabling superimposition of virtual implants. These tools run efficiently on desktop PCs typically found in clinician offices or workrooms. A preliminary study conducted with Mayo Clinic spine surgeons using several actual cases suggests significantly improved accuracy of pre-operative measurements and implant localization, which is expected to increase spinal procedure efficiency and safety, and reduce time and cost of the operation.

  12. 3D CT Image-Guided Parallel Mechanism-Assisted Femur Fracture Reduction

    Institute of Scientific and Technical Information of China (English)

    龚敏丽; 徐颖; 唐佩福; 胡磊; 杜海龙; 吕振天; 姚腾洲

    2011-01-01

    Traditionally, clinical femur fracture reduction surgery is imperfect and often results in misalignment and high intraoperative radiation exposure. To solve this problem, a method for fracture reduction based on preoperative CT images and a 6-degree-of-freedom parallel mechanism fixed to the injured femur is proposed. Based on the body's symmetry principle, the method uses the mirrored contralateral (healthy) femur as a reference to guide the reduction of femoral shaft fractures. Using twelve markers on the executing mechanism, the computer can calculate the length of each rod in the virtual space in real time. Finally, animal bone experiments demonstrate the effectiveness of the approach.

  13. 360 degree realistic 3D image display and image processing from real objects

    Science.gov (United States)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-09-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS-camera-based image acquisition platform was built to feed the display engine, which can perform full 360-degree continuous imaging of a sample at its center. Customized image processing techniques such as scaling, rotation and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of our system.

  14. Comparison of circumferential pulmonary vein anatomy mapping guided by 3D mapping versus a mesh mapping catheter

    Institute of Scientific and Technical Information of China (English)

    Yi-Wen Yan; Gang Chen; Feng Zhang; Song-Wen Chen; Wei-Dong Meng; Shao-Wen Liu

    2015-01-01

    Objective: Catheter-based pulmonary vein isolation (PVI) is an established therapy for paroxysmal atrial fibrillation. The high-density mesh mapper (HDMM) guides circumferential PV-atrium isolation without 3D electroanatomic mapping. This study aims to compare circumferential pulmonary vein (CPV) anatomy mapping guided by a 3D mapping system versus the HDMM. Methods: Forty-four consecutive patients with paroxysmal atrial fibrillation were scheduled for a first PVI procedure. A CPV ostial anatomy map guided by the HDMM was set up in the CARTO system while the operator was blinded to the CARTO screen. Then CARTO-guided ipsilateral PV maps were obtained and PVI was performed. This established another set of CPV ostial anatomy maps. The differences between the two mapping images were compared and analyzed. Results: All 176 PVs in 44 patients could be mapped by both HDMM and CARTO. About 44.9% of the PV ostial anatomies were generally similar between the two different map images. The average point-to-point straight distance between the HDMM-guided map and the CARTO-guided map was 6.2 ± 1.4 mm. The area of the circumferential right PV (CRPV) in the HDMM map was larger than that in the CARTO map (P = 0.013). After a mean follow-up of 18.3 ± 4.3 months (6-24 months), 72.7% of patients (32/44) were free of atrial arrhythmia without anti-arrhythmic drugs (AADs). Conclusion: Compared to the CARTO-guided CPV anatomy image, a highly similar figure could be achieved by mapping guided by the HDMM. (Clinical trial registry number: ChiCTR-TNRC-11001390.)

  15. Confocal Image 3D Surface Measurement with Optical Fiber Plate

    Institute of Scientific and Technical Information of China (English)

    WANG Zhao; ZHU Sheng-cheng; LI Bing; TAN Yu-shan

    2004-01-01

    A whole-field 3D surface measurement system for semiconductor wafer inspection is described. The system consists of an optical fiber plate, which can split the light beam into N² sub-beams to realize whole-field inspection. A special prism is used to separate the illumination light and the signal light. This setup is characterized by high precision, high speed and a simple structure.

  16. Integration of 3D anatomical data obtained by CT imaging and 3D optical scanning for computer aided implant surgery

    Directory of Open Access Journals (Sweden)

    Paoli Alessandro

    2011-02-01

    Full Text Available Background A precise placement of dental implants is a crucial step to optimize both prosthetic aspects and functional constraints. In this context, the use of virtual guiding systems has been recognized as a fundamental tool to control the ideal implant position. In particular, complex periodontal surgeries can be performed using preoperative planning based on CT data. The critical point of the procedure is the lack of accuracy in transferring CT planning information to the surgical field through custom-made stereo-lithographic surgical guides. Methods In this work, a novel methodology is proposed for monitoring loss of accuracy in transferring CT dental information into the periodontal surgical field. The methodology is based on integrating 3D data of anatomical (impression and cast) and preoperative (radiographic template) models, obtained by both CT and optical scanning processes. Results A clinical case, relative to a fully edentulous jaw patient, has been used as a test case to assess the accuracy of the various steps involved in manufacturing surgical guides. In particular, a surgical guide has been designed to place implants in the bone structure of the patient. The analysis of the results has allowed the clinician to monitor all the errors which occur step by step while manufacturing the physical templates. Conclusions The use of an optical scanner, which has a higher resolution and accuracy than CT scanning, has proved to be a valid support to control the precision of the various physical models adopted and to point out possible error sources. A case study regarding a fully edentulous patient has confirmed the feasibility of the proposed methodology.

  17. GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.

    Science.gov (United States)

    Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H

    2012-09-01

    Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVidia GeForce 8800 GTX and in ~2 ms using NVidia GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC
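
    For orientation, DRR generation can be reduced to its simplest form: rotate the CT volume to a candidate pose and integrate attenuation along the viewing axis. The sketch below is a deliberately crude CPU, parallel-beam approximation written with NumPy/SciPy; the paper's GPU ray-casting pipeline, perspective geometry and sampling approximations are not reproduced, and the synthetic volume is only a placeholder.

      # Greatly simplified CPU sketch of DRR generation: rotate the CT volume to the
      # candidate pose and sum attenuation along the viewing axis (parallel-beam
      # approximation). The paper's GPU ray-casting pipeline is far more elaborate.
      import numpy as np
      from scipy.ndimage import rotate

      def simple_drr(ct_volume, yaw_deg=0.0, pitch_deg=0.0):
          """ct_volume: 3D array of attenuation values indexed (z, y, x)."""
          vol = rotate(ct_volume, yaw_deg, axes=(1, 2), reshape=False, order=1)
          vol = rotate(vol, pitch_deg, axes=(0, 2), reshape=False, order=1)
          line_integrals = vol.sum(axis=0)                 # integrate along z
          drr = np.exp(-line_integrals)                    # Beer-Lambert style mapping
          drr -= drr.min()                                 # normalize for display
          rng = drr.max()
          return drr / rng if rng > 0 else drr

      # Example with a synthetic volume
      volume = np.zeros((64, 64, 64))
      volume[20:44, 24:40, 24:40] = 0.02                   # fake attenuating block
      image = simple_drr(volume, yaw_deg=10.0, pitch_deg=5.0)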

  18. A Pipeline for 3D Multimodality Image Integration and Computer-assisted Planning in Epilepsy Surgery

    OpenAIRE

    Nowell, Mark; Rodionov, Roman; Zombori, Gergely; Sparks, Rachel; Rizzi, Michele; Ourselin, Sebastien; Miserocchi, Anna; McEvoy, Andrew; Duncan, John

    2016-01-01

    Epilepsy surgery is challenging and the use of 3D multimodality image integration (3DMMI) to aid presurgical planning is well-established. Multimodality image integration can be technically demanding, and is underutilised in clinical practice. We have developed a single software platform for image integration, 3D visualization and surgical planning. Here, our pipeline is described in step-by-step fashion, starting with image acquisition, proceeding through image co-registration, manual segmen...

  19. D3D augmented reality imaging system: proof of concept in mammography

    Science.gov (United States)

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Purpose The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. PMID:27563261

  20. Flash trajectory imaging of target 3D motion

    Science.gov (United States)

    Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang

    2011-03-01

    We present a flash trajectory imaging technique which can directly obtain the target trajectory and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection, which can directly extract targets from a complex background and decrease the complexity of moving-target image processing. Time delay integration increases the information in a single image frame so that one can directly obtain the moving trajectory. In this paper, we have studied the flash trajectory imaging algorithm and performed initial experiments which successfully obtained the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can give motion parameters of moving targets.

  1. Weighted 3D GS Algorithm for Image-Quality Improvement of Multi-Plane Holographic Display

    Institute of Scientific and Technical Information of China (English)

    李芳; 毕勇; 王皓; 孙敏远; 孔新新

    2012-01-01

    Theoretically, the three-dimensional (3D) GS algorithm can realize 3D displays; however, the correlation of the output image is restricted because of the interaction among multiple planes, thus failing to meet the image-quality requirements of practical applications. We introduce weight factors and propose the weighted 3D GS algorithm, which can realize selective control of the correlation of the multi-plane display based on the traditional 3D GS algorithm. Improvement in image quality is accomplished by the selection of appropriate weight factors.
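
    For background, the classical single-plane Gerchberg-Saxton (GS) iteration that the weighted 3D GS algorithm builds on can be written compactly: the field is propagated back and forth (here modelled by an FFT pair) while a phase-only constraint is enforced at the SLM plane and the target amplitude at the image plane. The multi-plane propagation and the weight factors introduced by the authors are not reproduced; the iteration count and random initial phase are arbitrary choices.

      # Background sketch: classical single-plane Gerchberg-Saxton iteration between
      # the SLM plane and one image plane (FFT pair as the propagation model).
      import numpy as np

      def gerchberg_saxton(target_amplitude, n_iter=50, rng=None):
          rng = np.random.default_rng() if rng is None else rng
          phase = rng.uniform(0, 2 * np.pi, target_amplitude.shape)
          field_image = target_amplitude * np.exp(1j * phase)
          for _ in range(n_iter):
              field_slm = np.fft.ifft2(field_image)
              # Constraint at the SLM plane: phase-only modulation.
              field_slm = np.exp(1j * np.angle(field_slm))
              field_image = np.fft.fft2(field_slm)
              # Constraint at the image plane: enforce the target amplitude.
              field_image = target_amplitude * np.exp(1j * np.angle(field_image))
          return np.angle(field_slm)                 # phase hologram for the SLM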

  2. Generic precise augmented reality guiding system and its calibration method based on 3D virtual model.

    Science.gov (United States)

    Liu, Miao; Yang, Shourui; Wang, Zhangying; Huang, Shujun; Liu, Yue; Niu, Zhenqi; Zhang, Xiaoxuan; Zhu, Jigui; Zhang, Zonghua

    2016-05-30

    Augmented reality systems can be applied to provide precise guidance for various kinds of manual work. The adaptability and guiding accuracy of such systems are determined by the computational model and the corresponding calibration method. In this paper, a novel type of augmented reality guiding system and the corresponding design scheme are proposed. Guided by external positioning equipment, the proposed system can achieve high relative indication accuracy in a large working space. Meanwhile, the proposed system is realized with a digital projector, and the general back-projection model is derived from the geometric relationship between the digitized 3D model and the projector in free space. The corresponding calibration method is also designed for the proposed system to obtain the parameters of the projector. To validate the proposed back-projection model, coordinate data collected by 3D positioning equipment are used to calculate and optimize the extrinsic parameters. The final projection indication accuracy of the system is verified with a subpixel pattern projection technique. PMID:27410124

  3. Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy

    OpenAIRE

    Gualda, Emilio J.; Simão, Daniel; Pinto, Catarina; Alves, Paula M.; Brito, Catarina

    2014-01-01

    The development of three-dimensional (3D) cell cultures represents a big step toward a better understanding of cell behavior and disease in a more natural-like environment, providing not only single but multiple cell-type interactions in a complex 3D matrix that highly resembles physiological conditions. Light sheet fluorescence microscopy (LSFM) is becoming an excellent tool for fast imaging of such 3D biological structures. We demonstrate the potential of this technique for the imaging of human...

  4. Weighted guided image filtering.

    Science.gov (United States)

    Li, Zhengguo; Zheng, Jinghong; Zhu, Zijian; Yao, Wei; Wu, Shiqian

    2015-01-01

    It is known that local filtering-based edge-preserving smoothing techniques suffer from halo artifacts. In this paper, a weighted guided image filter (WGIF) is introduced by incorporating an edge-aware weighting into the existing guided image filter (GIF) to address this problem. The WGIF inherits the advantages of both global and local smoothing filters in the sense that: 1) the complexity of the WGIF is O(N) for an image with N pixels, which is the same as that of the GIF; and 2) the WGIF can avoid halo artifacts like existing global smoothing filters. The WGIF is applied to single-image detail enhancement, single-image haze removal, and fusion of differently exposed images. Experimental results show that the resulting algorithms produce images with better visual quality while halo artifacts are reduced or avoided in the final images, with a negligible increase in running time. PMID:25415986
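
    The baseline guided image filter (GIF) that the WGIF extends is well documented and compact; a plain NumPy/SciPy version is sketched below. The WGIF itself differs by replacing the constant regularization term eps with an edge-aware weighted one, which is not reproduced here; the window radius and eps are illustrative values.

      # Baseline guided filter that the WGIF extends; the WGIF replaces the constant
      # regularizer eps with an edge-aware weighted version.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def guided_filter(guide, src, radius=8, eps=1e-2):
          """guide, src: 2D float arrays in [0, 1]; radius: box-window radius."""
          size = 2 * radius + 1
          mean = lambda x: uniform_filter(x, size=size)
          mean_g, mean_s = mean(guide), mean(src)
          corr_gg, corr_gs = mean(guide * guide), mean(guide * src)
          var_g = corr_gg - mean_g * mean_g
          cov_gs = corr_gs - mean_g * mean_s
          a = cov_gs / (var_g + eps)        # per-window linear coefficients
          b = mean_s - a * mean_g
          return mean(a) * guide + mean(b)  # averaged coefficients applied to guide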

  5. Infrared imaging of the polymer 3D-printing process

    Science.gov (United States)

    Dinwiddie, Ralph B.; Kunc, Vlastimil; Lindal, John M.; Post, Brian; Smith, Rachel J.; Love, Lonnie; Duty, Chad E.

    2014-05-01

    Both mid-wave and long-wave IR cameras are used to measure various temperature profiles in thermoplastic parts as they are printed. Two significantly different 3D-printers are used in this study. The first is a small-scale, commercially available Solidoodle 3 printer, which prints parts with layer thicknesses on the order of 125 μm. The second printer used is a "Big Area Additive Manufacturing" (BAAM) 3D-printer developed at Oak Ridge National Laboratory. The BAAM prints parts with a layer thickness of 4.06 mm. Of particular interest is the temperature of the previously deposited layer as the new hot layer is about to be extruded onto it. The two layers are expected to have a stronger bond if the temperature of the substrate layer is above the glass transition temperature. This paper describes the measurement technique and results for a study of temperature decay and substrate layer temperature for ABS thermoplastic with and without the addition of chopped carbon fibers.

  6. Quality Prediction of Asymmetrically Distorted Stereoscopic 3D Images.

    Science.gov (United States)

    Wang, Jiheng; Rehman, Abdul; Zeng, Kai; Wang, Shiqi; Wang, Zhou

    2015-11-01

    Objective quality assessment of distorted stereoscopic images is a challenging problem, especially when the distortions in the left and right views are asymmetric. Existing studies suggest that simply averaging the quality of the left and right views well predicts the quality of symmetrically distorted stereoscopic images, but generates substantial prediction bias when applied to asymmetrically distorted stereoscopic images. In this paper, we first build a database that contains both single-view and symmetrically and asymmetrically distorted stereoscopic images. We then carry out a subjective test, where we find that the quality prediction bias of the asymmetrically distorted images could lean toward opposite directions (overestimate or underestimate), depending on the distortion types and levels. Our subjective test also suggests that eye dominance effect does not have strong impact on the visual quality decisions of stereoscopic images. Furthermore, we develop an information content and divisive normalization-based pooling scheme that improves upon structural similarity in estimating the quality of single-view images. Finally, we propose a binocular rivalry-inspired multi-scale model to predict the quality of stereoscopic images from that of the single-view images. Our results show that the proposed model, without explicitly identifying image distortion types, successfully eliminates the prediction bias, leading to significantly improved quality prediction of the stereoscopic images. PMID:26087491

  7. 3D Imaging of individual particles : a review

    OpenAIRE

    Pirard, Eric

    2012-01-01

    In recent years, impressive progress has been made in digital imaging and in particular in three dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three dimensional imaging with a special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear disti...

  8. 3D IMAGING OF INDIVIDUAL PARTICLES: A REVIEW

    OpenAIRE

    Eric Pirard

    2012-01-01

    In recent years, impressive progress has been made in digital imaging and in particular in three dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three dimensional imaging with a special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear distinction between the surfometric and volumetric principles. Th...

  9. 3D imaging of individual particles: a review

    OpenAIRE

    Pirard, Eric

    2012-01-01

    In recent years, impressive progress has been made in digital imaging and in particular in three dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three dimensional imaging with a special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear distinction between the surfometric and volumetric principles. Th...

  10. 3D Imaging in Heavy-Ion Reactions

    OpenAIRE

    Brown, David A.; Danielewicz, Pawel; Heffner, Mike; Soltz, Ron

    2004-01-01

    We report an extension of the source imaging method for imaging full three-dimensional sources from three-dimensional like-pair correlations. Our technique consists of expanding the correlation data and the underlying source function in spherical harmonics and inverting the resulting system of one-dimensional integral equations. With this method of attack, we can image the source function quickly, even with the extremely large data sets common in three-dimensional analyses. We apply our metho...

  11. MR imaging in epilepsy with use of 3D MP-RAGE

    International Nuclear Information System (INIS)

    The patients were 40 males and 33 females; their ages ranged from 1 month to 39 years (mean: 15.7 years). The patients underwent MR imaging, including spin-echo T1-weighted, turbo spin-echo proton density/T2-weighted, and 3D magnetization-prepared rapid gradient-echo (3D MP-RAGE) images. These examinations disclosed 39 focal abnormalities. On visual evaluation, the boundary of abnormal gray matter in the neuronal migration disorder (NMD) cases was most clearly shown on 3D MP-RAGE images as compared to the other images. This is considered to be due to the higher spatial resolution and better contrast of the 3D MP-RAGE images compared with the other techniques. The relative contrast difference between abnormal gray matter and the adjacent white matter was also assessed. The results revealed that the contrast differences on the 3D MP-RAGE images were larger than those on the other images; this was statistically significant. Although the sensitivity of 3D MP-RAGE for NMD was not specifically evaluated in this study, the possibility of this disorder, in cases suspected on other images, could be ruled out. Thus, it appears that the specificity with respect to NMD was at least increased with use of 3D MP-RAGE. 3D MP-RAGE also enabled us to build three-dimensional surface models that were helpful in understanding the three-dimensional anatomy. Furthermore, 3D MP-RAGE was considered to be the best technique for evaluating hippocampal atrophy in patients with MTS. On the other hand, the sensitivity to signal change in the hippocampus was higher on T2-weighted images. In addition, demonstration of cortical tubers of tuberous sclerosis in neurocutaneous syndrome was superior on T2-weighted images compared with 3D MP-RAGE images. (K.H.)

  12. Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner

    International Nuclear Information System (INIS)

    We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated
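
    The 2D iterative stage of such a pipeline, an ordered-subsets EM update with a generic system matrix, can be sketched as follows. This is a textbook OSEM loop, not the MiCES implementation: the FORE rebinning step and the factorized detector-blurring system model described above are omitted, and the subset scheme is a simple interleaving assumption.

      # Sketch of 2D ordered-subsets EM (OSEM) with a generic system matrix A
      # (rows = detector bins, columns = image pixels). FORE rebinning and the
      # paper's factorized detector-blur model are not included.
      import numpy as np

      def osem(A, sinogram, n_subsets=4, n_iter=5):
          """A: (n_bins, n_pixels) system matrix; sinogram: measured counts (n_bins,)."""
          n_bins, n_pix = A.shape
          image = np.ones(n_pix)
          subsets = [np.arange(s, n_bins, n_subsets) for s in range(n_subsets)]
          for _ in range(n_iter):
              for idx in subsets:
                  A_s = A[idx]
                  y_s = sinogram[idx].astype(np.float64)
                  expected = A_s @ image                    # forward projection
                  ratio = np.divide(y_s, expected, out=np.zeros_like(y_s),
                                    where=expected > 0)
                  sens = A_s.sum(axis=0)                    # per-pixel subset sensitivity
                  update = A_s.T @ ratio                    # back projection of ratios
                  image *= np.divide(update, sens, out=np.ones_like(image),
                                     where=sens > 0)
          return image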

  13. GOTHIC CHURCHES IN PARIS ST GERVAIS ET ST PROTAIS IMAGE MATCHING 3D RECONSTRUCTION TO UNDERSTAND THE VAULTS SYSTEM GEOMETRY

    Directory of Open Access Journals (Sweden)

    M. Capone

    2015-02-01

    benefits and the troubles. From a methodological point of view, this is our workflow: a theoretical study of the geometrical configuration of rib vault systems; a 3D model based on theoretical hypotheses about the geometric definition of the vaults' form; a 3D model based on image-matching 3D reconstruction methods; and a comparison between the 3D theoretical model and the 3D model based on image matching.

  14. Spectroscopy and 3D imaging of the Crab nebula

    CERN Document Server

    Cadez, A; Vidrih, S

    2004-01-01

    Spectroscopy of the Crab nebula along different slit directions reveals the 3-dimensional structure of the optical nebula. On the basis of the linear radial expansion result first discovered by Trimble (1968), we make a 3D model of the optical emission. Results from a limited number of slit directions suggest that optical lines originate from a complicated array of wisps that are located in a rather thin shell, pierced by a jet. The jet is certainly not prominent in optical emission lines, but the direction of the piercing is consistent with the direction of the X-ray and radio jet. The shell's effective radius is ~ 79 seconds of arc, its thickness about a third of the radius, and it is moving out with an average velocity of 1160 km/s.

  15. Efficient RPG detection in noisy 3D image data

    Science.gov (United States)

    Pipitone, Frank

    2011-06-01

    We address the automatic detection of ambush weapons such as rocket-propelled grenades (RPGs) from range data which might be derived from multiple-camera stereo with textured illumination or by other means. We describe our initial work in a new project involving the efficient acquisition of 3D scene data as well as discrete point invariant techniques to perform a real-time search for threats to a convoy. The shapes of the jump boundaries in the scene are exploited in this paper, rather than on-surface points, due to the large error typical of depth measurement at long range and the relatively high resolution obtainable in the transverse direction. We describe examples of the generation of a novel range-scaled chain code for detecting and matching jump boundaries.

  16. 3D Image Sensor based on Parallax Motion

    Directory of Open Access Journals (Sweden)

    Barna Reskó

    2007-12-01

    Full Text Available For humans and visual animals, vision is the primary and most sophisticated perceptual modality for obtaining information about the surrounding world. Depth perception is a part of vision that allows the distance to an object to be determined accurately, which makes it an important visual task. Humans have two eyes with overlapping visual fields that enable stereo vision and thus space perception. Some birds, however, do not have overlapping visual fields, and compensate for this lack by moving their heads, which in turn makes space perception possible using motion parallax as a visual cue. This paper presents a solution using an opto-mechanical filter that was inspired by the way birds observe their environment. The filtering is done using two different approaches: using motion blur during motion parallax, and using the optical flow algorithm. The two methods have different advantages and drawbacks, which will be discussed in the paper. The proposed system can be used in robotics for 3D space perception.

  17. Feasibility of multimodal 3D neuroimaging to guide implantation of intracranial EEG electrodes

    OpenAIRE

    R. Rodionov; Vollmar, C.; Nowell, M.; Miserocchi, A; Wehner, T; Micallef, C; Zombori, G.; Ourselin, S; Diehl, B.; McEvoy, A.W.; Duncan, J S

    2013-01-01

    Summary Background Since intracranial electrode implantation has limited spatial sampling and carries significant risk, placement has to be effective and efficient. Structural and functional imaging of several different modalities contributes to localising the seizure onset zone (SoZ) and eloquent cortex. There is a need to summarise and present this information throughout the pre/intra/post-surgical course. Methods We developed and implemented a multimodal 3D neuroimaging (M3N) pipeline to g...

  18. Analysis of information for cerebrovascular disorders obtained by 3D MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yoshikawa, Kohki [Tokyo Univ. (Japan). Inst. of Medical Science; Yoshioka, Naoki; Watanabe, Fumio; Shiono, Takahiro; Sugishita, Morihiro; Umino, Kazunori

    1995-12-01

    Recently, it has become easy to analyze information obtained by 3D MR imaging due to the remarkable progress of fast MR imaging techniques and analysis tools. Six patients suffering from aphasia (4 cerebral infarctions and 2 hemorrhages) underwent 3D MR imaging (3D FLASH; TR/TE/flip angle: 20-50 msec/6-10 msec/20-30 degrees), and their volume data were analyzed by multiple projection reconstruction (MPR), surface-rendering 3D reconstruction, and volume-rendering 3D reconstruction using Volume Design PRO (Medical Design Co., Ltd.). Four of them were clinically diagnosed with Broca's aphasia, and their lesions could be detected around the cortices of the left inferior frontal gyrus. The other 2 patients were diagnosed with Wernicke's aphasia, and the lesions could be detected around the cortices of the left supramarginal gyrus. This technique for 3D volume analysis would provide quite exact locational information about cerebral cortical lesions. (author).

  19. MULTI-SPECTRAL AND HYPERSPECTRAL IMAGE FUSION USING 3-D WAVELET TRANSFORM

    Institute of Scientific and Technical Information of China (English)

    Zhang Yifan; He Mingyi

    2007-01-01

    Image fusion is performed between one band of a multi-spectral image and two bands of a hyperspectral image to produce a fused image with the same spatial resolution as the source multi-spectral image and the same spectral resolution as the source hyperspectral image. According to the characteristics and 3-dimensional (3-D) feature analysis of the multi-spectral and hyperspectral image data volumes, a new fusion approach based on the 3-D wavelet transform is proposed. This approach is composed of four major procedures: spatial and spectral resampling, 3-D wavelet transform, wavelet coefficient integration, and 3-D inverse wavelet transform. In particular, a novel method, the Ratio Image Based Spectral Resampling (RIBSR) method, is proposed to accomplish data resampling in the spectral domain by utilizing the property of the ratio image, and a new fusion rule, the Average and Substitution (A&S) rule, is employed to accomplish wavelet coefficient integration. Experimental results illustrate that the fusion approach using the 3-D wavelet transform can utilize both spatial and spectral characteristics of the source images more adequately and produce a fused image with higher quality and fewer artifacts than the fusion approach using the 2-D wavelet transform. It is also revealed that the RIBSR method is capable of interpolating the missing data more effectively and correctly, and that the A&S rule can integrate coefficients of the source images in the 3-D wavelet domain to preserve both spatial and spectral features of the source images more properly.
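
    The wavelet-domain part of such a fusion scheme can be illustrated with PyWavelets. The sketch below performs a single-level 3-D wavelet decomposition of two co-registered volumes, averages the low-pass subband and selects detail coefficients by larger magnitude; this is only a simplified stand-in for the paper's Average and Substitution rule, and the RIBSR spectral resampling step is omitted entirely.

      # Sketch of 3-D wavelet-domain fusion of two co-registered data volumes with
      # PyWavelets: approximation coefficients are averaged, detail coefficients are
      # taken from whichever source has the larger magnitude.
      import numpy as np
      import pywt

      def fuse_volumes_3d(vol_a, vol_b, wavelet='db2'):
          ca = pywt.dwtn(vol_a, wavelet)          # dict of 3-D subbands: 'aaa', 'aad', ...
          cb = pywt.dwtn(vol_b, wavelet)
          fused = {}
          for key in ca:
              if key == 'aaa':                    # low-pass subband: average
                  fused[key] = 0.5 * (ca[key] + cb[key])
              else:                               # detail subbands: max-magnitude select
                  fused[key] = np.where(np.abs(ca[key]) >= np.abs(cb[key]),
                                        ca[key], cb[key])
          return pywt.idwtn(fused, wavelet)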

  20. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Ferin, Guillaume;

    2012-01-01

    phased array with a pitch of 300 μm, made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation directions and 150 mm in depth. For both techniques this results in a frame rate of 18 Hz. The implemented synthetic aperture technique...

  1. The Mathematical Foundations of 3D Compton Scatter Emission Imaging

    Directory of Open Access Journals (Sweden)

    T. T. Truong

    2007-01-01

    Full Text Available The mathematical principles of tomographic imaging using detected (unscattered) X- or gamma-rays are based on the two-dimensional Radon transform and many of its variants. In this paper, we show that two new generalizations, called conical Radon transforms, are related to three-dimensional imaging processes based on detected Compton-scattered radiation. The first class of conical Radon transforms has been introduced recently to support the imaging principles of collimated detector systems. The second class is new, is closely related to the Compton camera imaging principles, and is invertible under special conditions. As they are poised to play a major role in future designs of biomedical imaging systems, we present an account of their most important properties, which may be relevant for active researchers in the field.

  2. Quality assessment of stereoscopic 3D image compression by binocular integration behaviors.

    Science.gov (United States)

    Lin, Yu-Hsun; Wu, Ja-Ling

    2014-04-01

    Objective approaches to 3D image quality assessment play a key role in the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterpart. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to deal with these newly introduced challenges. This statement can be verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs) when 3D images are the tested targets. In order to meet these newly introduced challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors, namely binocular combination and binocular frequency integration, are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency could be reached between the measured MOS and the proposed metrics, in which the correlation coefficient between them can go up to 0.88. Furthermore, we found that the proposed metrics can also address the quality assessment of synthesized color-plus-depth 3D images well. Therefore, it is our belief that the binocular integration behaviors are important factors in the development of objective quality assessment for 3D images.

  3. In vivo 3D neuroanatomical evaluation of periprostatic nerve plexus with 3T-MR Diffusion Tensor Imaging

    International Nuclear Information System (INIS)

    Objectives: To evaluate if Diffusion Tensor Imaging technique (DTI) can improve the visualization of periprostatic nerve fibers describing the location and distribution of entire neurovascular plexus around the prostate in patients who are candidates for prostatectomy. Materials and methods: Magnetic Resonance Imaging (MRI), including a 2D T2-weighted FSE sequence in 3 planes, 3D T2-weighted and DTI using 16 gradient directions and b = 0 and 1000, was performed on 36 patients. Three out of 36 patients were excluded from the analysis due to poor image quality (blurring N = 2, artifact N = 1). The study was approved by local ethics committee and all patients gave an informed consent. Images were evaluated by two radiologists with different experience in MRI. DTI images were analyzed qualitatively using dedicated software. Also 2D and 3D T2 images were independently considered. Results: 3D-DTI allowed description of the entire plexus of the periprostatic nerve fibers in all directions, while 2D and 3D T2 morphological sequences depicted part of the fibers, in a plane by plane analysis of fiber courses. DTI demonstrated in all patients the dispersion of nerve fibers around the prostate on both sides including the significant percentage present in the anterior and anterolateral sectors. Conclusions: DTI offers optimal representation of the widely distributed periprostatic plexus. If validated, it may help guide nerve-sparing radical prostatectomy

  4. Real-time auto-stereoscopic visualization of 3D medical images

    Science.gov (United States)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here regards multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement multiview capability. A number of static or animated views of the same object can be seen simultaneously on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models; didactic animations and movies have been realized as well.

  5. Online reconstruction of 3D magnetic particle imaging data

    Science.gov (United States)

    Knopp, T.; Hofmann, M.

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes per second. However, to date, image reconstruction is performed in an offline step, and thus no direct feedback is available during the experiment. Considering potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time.
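
    The block-averaging idea can be sketched independently of the actual MPI reconstruction: incoming raw-data frames are accumulated and averaged over a block before being passed to a (placeholder) reconstruction step, trading temporal resolution for signal quality. The class and function names, block size and fake data below are assumptions for illustration only.

      # Sketch of block-averaging for online reconstruction: raw frames are
      # accumulated and averaged over a block before the (placeholder) reconstruction.
      import numpy as np

      class BlockAverager:
          def __init__(self, block_size=10):
              self.block_size = block_size
              self._buffer = []

          def push(self, frame):
              """Add one raw-data frame; return an averaged block when ready, else None."""
              self._buffer.append(np.asarray(frame, dtype=np.float64))
              if len(self._buffer) >= self.block_size:
                  block_mean = np.mean(self._buffer, axis=0)
                  self._buffer.clear()
                  return block_mean
              return None

      def reconstruct(averaged_frame):
          # Placeholder for the actual system-matrix-based MPI reconstruction.
          return averaged_frame

      averager = BlockAverager(block_size=5)
      for t in range(12):                                   # simulated acquisition loop
          raw = np.random.default_rng(t).normal(size=256)   # fake raw-data frame
          block = averager.push(raw)
          if block is not None:
              image = reconstruct(block)                    # would be shown on screen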

  6. Building Extraction from DSM Acquired by Airborne 3D Image

    Institute of Scientific and Technical Information of China (English)

    YOU Hongjian; LI Shukai

    2003-01-01

    Segmentation and edge regulation are studied in depth to extract buildings from the DSM data produced in this paper. Building segmentation is the first step in extracting buildings, and a new segmentation method, adaptive iterative segmentation considering the ratio mean square, is proposed to extract the contours of buildings effectively. A sub-image (such as 50×50 pixels) of the image is processed in sequence: the average gray level and its ratio mean square are calculated first, then the threshold of the sub-image is selected by using iterative threshold segmentation. The current pixel is segmented according to the threshold, the average gray level and the ratio mean square of the sub-image. The edge points of a building are grouped according to the azimuth of neighboring points, and then the optimal azimuth of the points that belong to the same group can be calculated by using line interpolation.
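
    The per-sub-image thresholding step lends itself to a short sketch: a classical iterative (isodata-style) threshold is computed for each tile of the DSM and pixels above it are marked as candidate building pixels. The ratio-mean-square criterion and the edge regulation and grouping steps of the method are not reproduced; the tile size and tolerance are illustrative.

      # Sketch of per-sub-image iterative threshold selection (isodata-style);
      # the ratio-mean-square criterion and the edge-regulation step are omitted.
      import numpy as np

      def iterative_threshold(sub_image, tol=0.5, max_iter=100):
          """Classic iterative threshold: split at t, recompute t as the mean of the
          two class means, repeat until it converges."""
          data = sub_image.astype(np.float64).ravel()
          t = data.mean()
          for _ in range(max_iter):
              low, high = data[data <= t], data[data > t]
              if low.size == 0 or high.size == 0:
                  break
              t_new = 0.5 * (low.mean() + high.mean())
              if abs(t_new - t) < tol:
                  return t_new
              t = t_new
          return t

      def segment_dsm_tiles(dsm, tile=50):
          """Threshold each tile of a DSM independently (simplified building mask)."""
          mask = np.zeros(dsm.shape, dtype=bool)
          for r in range(0, dsm.shape[0], tile):
              for c in range(0, dsm.shape[1], tile):
                  block = dsm[r:r + tile, c:c + tile]
                  mask[r:r + tile, c:c + tile] = block > iterative_threshold(block)
          return mask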

  7. Critical Comparison of 3-d Imaging Approaches for NGST

    OpenAIRE

    Bennett, Charles L.

    1999-01-01

    Currently three imaging spectrometer architectures, tunable filter, dispersive, and Fourier transform, are viable for imaging the universe in three dimensions. There are domains of greatest utility for each of these architectures. The optimum choice among the various alternative architectures is dependent on the nature of the desired observations, the maturity of the relevant technology, and the character of the backgrounds. The domain appropriate for each of the alternatives is delineated; b...

  8. 3D pulmonary airway color image reconstruction via shape from shading and virtual bronchoscopy imaging techniques

    Science.gov (United States)

    Suter, Melissa; Reinhardt, Joseph M.; Hoffman, Eric A.; McLennan, Geoffrey

    2005-04-01

    The dependence on macro-optical imaging of the human body in the assessment of possible disease is rapidly increasing concurrent with, and as a direct result of, advancements made in medical imaging technologies. Assessing the pulmonary airways through bronchoscopy is performed extensively in clinical practice however remains highly subjective due to limited visualization techniques and the lack of quantitative analyses. The representation of 3D structures in 2D visualization modes, although providing an insight to the structural content of the scene, may in fact skew the perception of the structural form. We have developed two methods for visualizing the optically derived airway mucosal features whilst preserving the structural scene integrity. Shape from shading (SFS) techniques can be used to extract 3D structural information from 2D optical images. The SFS technique presented addresses many limitations previously encountered in conventional techniques resulting in high-resolution 3D color images. The second method presented to combine both color and structural information relies on combined CT and bronchoscopy imaging modalities. External imaging techniques such as CT provide a means of determining the gross structural anatomy of the pulmonary airways, however lack the important optically derived mucosal color. Virtual bronchoscopy is used to provide a direct link between the CT derived structural anatomy and the macro-optically derived mucosal color. Through utilization of a virtual and true bronchoscopy matching technique we are able to directly extract combined structurally sound 3D color segments of the pulmonary airways. Various pulmonary airway diseases are assessed and the resulting combined color and texture results are presented demonstrating the effectiveness of the presented techniques.

  9. 3D Surface Imaging of the Human Female Torso in Upright to Supine Positions

    OpenAIRE

    Reece, Gregory P.; Merchant, Fatima; Andon, Johnny; Khatam, Hamed; Ravi-Chandar, K.; Weston, June; Fingeret, Michelle C.; Lane, Chris; Duncan, Kelly; Markey, Mia K.

    2015-01-01

    Three-dimensional (3D) surface imaging of breasts is usually done with the patient in an upright position, which does not permit comparison of changes in breast morphology with changes in position of the torso. In theory, these limitations may be eliminated if the 3D camera system could remain fixed relative to the woman’s torso as she is tilted from 0 to 90 degrees. We mounted a 3dMDtorso imaging system onto a bariatric tilt table to image breasts at different tilt angles. The images were va...

  10. First images and orientation of internal waves from a 3-D seismic oceanography data set

    Directory of Open Access Journals (Sweden)

    T. M. Blacic

    2009-10-01

    Full Text Available We present 3-D images of ocean finestructure from a unique industry-collected 3-D multichannel seismic dataset from the Gulf of Mexico that includes expendable bathythermograph casts for both swaths. 2-D processing reveals strong laterally continuous reflectors throughout the upper ~800 m as well as a few weaker but still distinct reflectors as deep as ~1100 m. Two bright reflections are traced across the 225-m-wide swath to produce reflector surface images that show the 3-D structure of internal waves. We show that the orientation of internal wave crests can be obtained by calculating the orientations of contours of reflector relief. Preliminary 3-D processing further illustrates the potential of 3-D seismic data in interpreting images of oceanic features such as internal wave strains. This work demonstrates the viability of imaging oceanic finestructure in 3-D and shows that, beyond simply providing a way to see what oceanic finestructure looks like, quantitative information such as the spatial orientation of features like internal waves and solitons can be obtained from 3-D seismic images. We expect complete, optimized 3-D processing to improve both the signal-to-noise ratio and spatial resolution of our images, resulting in increased options for analysis and interpretation.

  11. Contactless operating table control based on 3D image processing.

    Science.gov (United States)

    Schröder, Stephan; Loftfield, Nina; Langmann, Benjamin; Frank, Klaus; Reithmeier, Eduard

    2014-01-01

    Interaction with mobile consumer devices leads to a higher acceptance of, and affinity of persons to, natural user interfaces and perceptual interaction possibilities. New interaction modalities become accessible and are capable of improving human-machine interaction even in complex and high-risk environments, like the operating room. Here, manifold medical disciplines cause a great variety of procedures and thus staff and equipment. One universal challenge is to meet the sterility requirements, for which common contact-based remote interfaces always pose a potential risk and a hazard for the process. The proposed operating table control system overcomes this process risk and thus improves the system usability significantly. The 3D sensor system, the Microsoft Kinect, captures the motion of the user, allowing touchless manipulation of an operating table. Three gestures enable the user to select, activate and manipulate all segments of the motorised system in a safe and intuitive way. The gesture dynamics are synchronised with the table movement. In a usability study, 15 participants evaluated the system, yielding a System Usability Scale score (Brooke) of 79. This indicates a high potential for implementation and acceptance in interventional environments. In the near future, even processes with higher risks could be controlled with the proposed interface, while interfaces become safer and more direct. PMID:25569978

  13. Space Radar Image Isla Isabela in 3-D

    Science.gov (United States)

    1999-01-01

    This is a three-dimensional view of Isabela, one of the Galapagos Islands located off the western coast of Ecuador, South America. This view was constructed by overlaying a Spaceborne Imaging Radar-C/X-band Synthetic Aperture Radar (SIR-C/X-SAR) image on a digital elevation map produced by TOPSAR, a prototype airborne interferometric radar which produces simultaneous image and elevation data. The vertical scale in this image is exaggerated by a factor of 1.87. The SIR-C/X-SAR image was taken on the 40th orbit of space shuttle Endeavour. The image is centered at about 0.5 degree south latitude and 91 degrees west longitude and covers an area of 75 by 60 kilometers (47 by 37 miles). The radar incidence angle at the center of the image is about 20 degrees. The western Galapagos Islands, which lie about 1,200 kilometers (750 miles) west of Ecuador in the eastern Pacific, have six active volcanoes similar to the volcanoes found in Hawaii and reflect the volcanic processes that occur where the ocean floor is created. Since the time of Charles Darwin's visit to the area in 1835, there have been more than 60 recorded eruptions on these volcanoes. This SIR-C/X-SAR image of Alcedo and Sierra Negra volcanoes shows the rougher lava flows as bright features, while ash deposits and smooth pahoehoe lava flows appear dark. Vertical exaggeration of relief is a common tool scientists use to detect relationships between structure (for example, faults, and fractures) and topography. Spaceborne Imaging Radar-C and X-Synthetic Aperture Radar (SIR-C/X-SAR) is part of NASA's Mission to Planet Earth. The radars illuminate Earth with microwaves allowing detailed observations at any time, regardless of weather or sunlight conditions. SIR-C/X-SAR uses three microwave wavelengths: L-band (24 cm), C-band (6 cm) and X-band (3 cm). The multi-frequency data will be used by the international scientific community to better understand the global environment and how it is changing. The SIR-C/X-SAR data

  14. Real-time 3D Fourier-domain optical coherence tomography guided microvascular anastomosis

    Science.gov (United States)

    Huang, Yong; Ibrahim, Zuhaib; Lee, W. P. A.; Brandacher, Gerald; Kang, Jin U.

    2013-03-01

    Vascular and microvascular anastomosis is considered to be the foundation of plastic and reconstructive surgery, hand surgery, transplant surgery, vascular surgery and cardiac surgery. In the last two decades innovative techniques, such as vascular coupling devices, thermo-reversible poloxamers and suture-less cuff have been introduced. Intra-operative surgical guidance using a surgical imaging modality that provides in-depth view and 3D imaging can improve outcome following both conventional and innovative anastomosis techniques. Optical coherence tomography (OCT) is a noninvasive high-resolution (micron level), high-speed, 3D imaging modality that has been adopted widely in biomedical and clinical applications. In this work we performed a proof-of-concept evaluation study of OCT as an assisted intraoperative and post-operative imaging modality for microvascular anastomosis of rodent femoral vessels. The OCT imaging modality provided lateral resolution of 12 μm and 3.0 μm axial resolution in air and 0.27 volume/s imaging speed, which could provide the surgeon with clearly visualized vessel lumen wall and suture needle position relative to the vessel during intraoperative imaging. Graphics processing unit (GPU) accelerated phase-resolved Doppler OCT (PRDOCT) imaging of the surgical site was performed as a post-operative evaluation of the anastomosed vessels and to visualize the blood flow and thrombus formation. This information could help surgeons improve surgical precision in this highly challenging anastomosis of rodent vessels with diameter less than 0.5 mm. Our imaging modality could not only detect accidental suture through the back wall of lumen but also promptly diagnose and predict thrombosis immediately after reperfusion. Hence, real-time OCT can assist in decision-making process intra-operatively and avoid post-operative complications.
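    The phase-resolved Doppler step mentioned above can be sketched in a few lines. The record does not describe its GPU kernels; the snippet below is a minimal NumPy version of the standard adjacent-A-line phase-difference (Kasai-type) estimator, and the complex A-scan layout, the averaging window and the tissue refractive index of 1.38 are illustrative assumptions rather than values from the paper.

    ```python
    # Minimal phase-resolved Doppler OCT sketch (assumptions noted in the text above).
    import numpy as np

    def doppler_phase_and_velocity(ascans, lambda0, tau, n_medium=1.38, window=8):
        """ascans: complex array (n_alines, n_depth); lambda0 [m]; tau = A-line period [s]."""
        # Autocorrelation between adjacent A-lines (Kasai estimator).
        corr = ascans[1:] * np.conj(ascans[:-1])
        # Average over `window` successive A-line pairs to reduce phase noise.
        n_blocks = corr.shape[0] // window
        corr = corr[: n_blocks * window].reshape(n_blocks, window, -1).sum(axis=1)
        phase = np.angle(corr)                                       # Doppler phase shift
        velocity = phase * lambda0 / (4.0 * np.pi * n_medium * tau)  # axial flow velocity
        return phase, velocity
    ```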

  15. Radar Imaging of Spheres in 3D using MUSIC

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, D H; Berryman, J G

    2003-01-21

    We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for 1 and 2 spheres. The performance for the 3-sphere configurations is complicated by shadowing effects and the greater range of the 3rd sphere in case 2. Two of the three spheres are easily located by MUSIC but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replaced with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
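    A compact sketch of the imaging steps named above (SVD of the response matrix, a ~1% noise threshold, evaluation of the MUSIC functional) is given below. The free-space scalar Green's function used as the steering vector and the grid-scan loop are illustrative assumptions; the published work additionally normalizes the functional by the array's broadside beam pattern, which is omitted here.

    ```python
    # Minimal MUSIC imaging sketch (free-space Green's function and grid scan are assumptions).
    import numpy as np

    def steering_vector(antenna_xyz, point_xyz, wavenumber):
        """Free-space scalar Green's function from every antenna to one trial point."""
        r = np.linalg.norm(antenna_xyz - point_xyz, axis=1)
        return np.exp(1j * wavenumber * r) / r

    def music_image(response, antenna_xyz, grid_xyz, wavenumber, noise_frac=0.01):
        """response: N x N multistatic response matrix for an N-element array."""
        u, s, _ = np.linalg.svd(response)
        # Singular values below noise_frac * max are treated as the noise subspace (~1% threshold).
        noise_cols = u[:, s < noise_frac * s.max()]
        image = np.empty(len(grid_xyz))
        for i, point in enumerate(grid_xyz):
            g = steering_vector(antenna_xyz, point, wavenumber)
            g = g / np.linalg.norm(g)
            # Targets appear where the steering vector is nearly orthogonal to the noise subspace.
            image[i] = 1.0 / (np.linalg.norm(noise_cols.conj().T @ g) ** 2 + 1e-12)
        return image
    ```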

  16. Multithreaded real-time 3D image processing software architecture and implementation

    Science.gov (United States)

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user friendly playback interface is desirable. Towards this end, we built a real time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. The player provides user-interactive 3D video playback. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located on the right image through the use of block matching. The difference in the positions between the corresponding regions in the left and right images is then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread which calls CUDA functions. Image upsampling and shifting are performed in response to user zoom and pan. The player also consists of a CPU display thread, which uses OpenGL rendering (quad buffers). This also gathers user input for digital zoom and pan and sends it to the processing thread.
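    The convergence-point selection chain described above (vertical-edge keypoints, block matching, disparity histogram, shift) is sketched below with OpenCV and NumPy; the actual player runs this on the GPU with CUDA, and the block size, search range and keypoint count used here are illustrative assumptions.

    ```python
    # Convergence-point selection for a stereo pair (illustrative parameters).
    import cv2
    import numpy as np

    def disparity_range(left_gray, right_gray, block=16, search=64, n_points=200):
        # 1. Keypoints: strongest vertical edges in the left image.
        edges = np.abs(cv2.Sobel(left_gray, cv2.CV_32F, 1, 0, ksize=3))
        ys, xs = np.unravel_index(np.argsort(edges, axis=None)[-n_points:], edges.shape)

        h, w = left_gray.shape
        disparities = []
        for y, x in zip(ys, xs):
            if y < block or y + block >= h or x < block or x + block + search >= w:
                continue
            patch = left_gray[y - block:y + block, x - block:x + block]
            band = right_gray[y - block:y + block, x - block:x + block + search]
            # 2. Block matching along the same scanline in the right image.
            score = cv2.matchTemplate(band, patch, cv2.TM_SQDIFF_NORMED)
            disparities.append(int(np.argmin(score)))

        if not disparities:
            return 0, 0, 0.0
        # 3. Extrema of the disparity histogram give the scene disparity range.
        hist, bin_edges = np.histogram(disparities, bins=search)
        occupied = np.nonzero(hist)[0]
        d_min, d_max = bin_edges[occupied[0]], bin_edges[occupied[-1] + 1]
        # 4. Shift both views so the middle of that range sits at zero disparity (convergence).
        return d_min, d_max, -(d_min + d_max) / 2.0
    ```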

  17. Three dimensional (3d) transverse oscillation vector velocity ultrasound imaging

    DEFF Research Database (Denmark)

    2013-01-01

    An ultrasound imaging system (300) includes a transducer array (302) with a two-dimensional array of transducer elements configured to transmit an ultrasound signal and receive echoes, transmit circuitry (304) configured to control the transducer array to transmit the ultrasound signal so as to traverse a field of view, and receive circuitry (306) configured to receive a two-dimensional set of echoes produced in response to the ultrasound signal traversing structure in the field of view, wherein the structure includes flowing structures such as flowing blood cells, organ cells, etc. A beamformer and the same received set of two-dimensional echoes form part of the imaging system.

  18. 3-D capacitance density imaging of fluidized bed

    Science.gov (United States)

    Fasching, George E.

    1990-01-01

    Three-dimensional capacitance density imaging of a gasified bed or the like in a containment vessel is achieved using a plurality of electrodes provided circumferentially about the bed in levels and along the bed in channels. The electrodes are individually and selectively excited electrically at each level to produce a plurality of current flux field patterns in the bed at each level. The current flux field patterns are suitably sensed and a density pattern of the bed at each level determined. By combining the determined density patterns at each level, a three-dimensional density image of the bed is achieved.

  19. 3D-CT imaging processing for qualitative and quantitative analysis of maxillofacial cysts and tumors

    International Nuclear Information System (INIS)

    The objective of this study was to evaluate spiral-computed tomography (3D-CT) images of 20 patients presenting with cysts and tumors in the maxillofacial complex, in order to compare the surface and volume techniques of image rendering. The qualitative and quantitative appraisal indicated that the volume technique allowed a more precise and accurate observation than the surface method. On the average, the measurements obtained by means of the 3D volume-rendering technique were 6.28% higher than those obtained by means of the surface method. The sensitivity of the 3D surface technique was lower than that of the 3D volume technique for all conditions stipulated in the diagnosis and evaluation of lesions. We concluded that the 3D-CT volume rendering technique was more reproducible and sensitive than the 3D-CT surface method, in the diagnosis, treatment planning and evaluation of maxillofacial lesions, especially those with intra-osseous involvement. (author)

  20. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    Science.gov (United States)

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter, so it is affordable for public users and convenient for accessing narrow areas. The acquired images consist of various sculptures and architectures in Wat-Pho, which is a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the workflow is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching, and point cloud processing. For this initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained with the MICMAC (open source) software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, and .obj. To obtain efficient 3D models, post-processing techniques such as noise reduction, surface simplification and surface reconstruction are applied to the final results. The reconstructed 3D models can be made available for public access through websites, DVDs or printed materials. The highly accurate 3D models can also be used as reference data for heritage objects that must be restored due to age-related deterioration, natural disasters, etc.
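    The image matching step of the workflow listed above can be illustrated with a small tie-point extractor. The record's pipeline is run inside MICMAC; the sketch below only shows the underlying idea with OpenCV SIFT features and Lowe's ratio test, and the ratio threshold is an illustrative assumption.

    ```python
    # Tie-point extraction between two overlapping photographs (illustrative ratio test).
    import cv2

    def tie_points(img_path_a, img_path_b, ratio=0.75):
        a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
        b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        kp_a, des_a = sift.detectAndCompute(a, None)
        kp_b, des_b = sift.detectAndCompute(b, None)
        # k-nearest-neighbour matching with Lowe's ratio test to reject ambiguous matches.
        matches = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
        good = [p[0] for p in matches if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in good]
    ```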

  1. AN IMAGE-BASED TECHNIQUE FOR 3D BUILDING RECONSTRUCTION USING MULTI-VIEW UAV IMAGES

    Directory of Open Access Journals (Sweden)

    F. Alidoost

    2015-12-01

    Full Text Available Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects within complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include: pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior orientation parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building, based on visual assessment.
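    For the dense matching step named above, a minimal stand-in using OpenCV's semi-global matcher on a rectified image pair is shown below; the disparity range, block size and smoothness penalties are illustrative assumptions, not parameters from the paper.

    ```python
    # Dense disparity from a rectified pair with OpenCV's semi-global matcher (illustrative settings).
    import cv2

    def dense_disparity(rect_left, rect_right, max_disp=128, block=5):
        sgbm = cv2.StereoSGBM_create(
            minDisparity=0,
            numDisparities=max_disp,           # must be divisible by 16
            blockSize=block,
            P1=8 * block * block,              # smoothness penalties in the spirit of
            P2=32 * block * block,             # the original SGM formulation
            mode=cv2.STEREO_SGBM_MODE_SGBM_3WAY,
        )
        # OpenCV returns fixed-point disparities scaled by 16.
        return sgbm.compute(rect_left, rect_right).astype("float32") / 16.0
    ```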

  2. 3D spectral imaging system for anterior chamber metrology

    Science.gov (United States)

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Frisken, Steven

    2015-03-01

    Accurate metrology of the anterior chamber of the eye is useful for a number of diagnostic and clinical applications. In particular, accurate corneal topography and corneal thickness data are desirable for fitting contact lenses, screening for diseases and monitoring corneal changes. Anterior OCT systems can be used to measure anterior chamber surfaces; however, accurate curvature measurements for single-point scanning systems are known to be very sensitive to patient movement. To overcome this problem we have developed a parallel 3D spectral metrology system that captures simultaneous A-scans on a 2D lateral grid. This approach enables estimates of the elevation and curvature of anterior and posterior corneal surfaces that are robust to sample movement. Furthermore, multiple simultaneous surface measurements greatly improve the ability to register consecutive frames and enable aggregate measurements over a finer lateral grid. A key element of our approach has been to exploit standard low-cost optical components including lenslet arrays and a 2D sensor to provide a path towards low-cost implementation. We demonstrate first prototypes based on a 6 Mpixel sensor using a 250 μm pitch lenslet array with 300 sample beams to achieve an RMS elevation accuracy of 1 μm with 95 dB sensitivity and a 7.0 mm range. Initial tests on porcine eyes, model eyes and calibration spheres demonstrate the validity of the concept. With the next iteration of designs we expect to be able to achieve over 1000 simultaneous A-scans at in excess of 75 frames per second.

  3. Hybrid Method for 3D Segmentation of Magnetic Resonance Images

    Institute of Scientific and Technical Information of China (English)

    ZHANGXiang; ZHANGDazhi; TIANJinwen; LIUJian

    2003-01-01

    Segmentation of complex images, especially magnetic resonance brain images, often yields unsatisfactory results when only a single segmentation approach is used. An approach that integrates several techniques appears to be the best solution. In this paper a new hybrid method for 3-dimensional segmentation of the whole brain is introduced, based on fuzzy region growing, edge detection and mathematical morphology. The gray-level threshold controlling the region-growing process is determined by a fuzzy technique. The image gradient feature is obtained with a 3-dimensional Sobel operator applied to a 3×3×3 data block with the voxel to be evaluated at the center, while the gradient magnitude threshold is derived from the gradient magnitude histogram of the brain magnetic resonance volume. By combining edge detection and region growing, the white matter volume of the human brain is segmented well. After post-processing with mathematical morphological techniques, the whole brain region is obtained. In order to investigate the validity of the hybrid method, two comparative experiments are carried out: region growing using only the gray-level feature, and thresholding combining gray-level and gradient features. Experimental results indicate that the proposed method provides much better results than traditional single-technique methods for the 3-dimensional segmentation of human brain magnetic resonance data sets.
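    The 3-D gradient feature used by the hybrid method can be sketched as follows. scipy.ndimage's Sobel filters stand in for the 3×3×3 operator described above; choosing the magnitude threshold as a fixed percentile of the gradient histogram is an illustrative simplification of the histogram-based rule in the paper.

    ```python
    # 3-D Sobel gradient magnitude and an edge mask (percentile threshold is an assumption).
    import numpy as np
    from scipy import ndimage

    def gradient_magnitude_mask(volume, percentile=80.0):
        vol = volume.astype(np.float32)
        grads = [ndimage.sobel(vol, axis=axis) for axis in range(3)]
        magnitude = np.sqrt(sum(g * g for g in grads))
        threshold = np.percentile(magnitude, percentile)
        # Voxels above the threshold act as edge voxels that stop the region growing.
        return magnitude, magnitude >= threshold
    ```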

  4. Space Radar Image of Kilauea, Hawaii in 3-D

    Science.gov (United States)

    1999-01-01

    This is a three-dimensional perspective view of a false-color image of the eastern part of the Big Island of Hawaii. It was produced using all three radar frequencies -- X-band, C-band and L-band -- from the Spaceborne Imaging Radar-C/X-Band Synthetic Aperture Radar (SIR-C/X-SAR) flying on the space shuttle Endeavour, overlaid on a U.S. Geological Survey digital elevation map. Visible in the center of the image in blue are the summit crater (Kilauea Caldera) which contains the smaller Halemaumau Crater, and the line of collapse craters below them that form the Chain of Craters Road. The image was acquired on April 12, 1994 during orbit 52 of the space shuttle. The area shown is approximately 34 by 57 kilometers (21 by 35 miles) with the top of the image pointing toward northwest. The image is centered at about 155.25 degrees west longitude and 19.5 degrees north latitude. The false colors are created by displaying three radar channels of different frequency. Red areas correspond to high backscatter at L-HV polarization, while green areas exhibit high backscatter at C-HV polarization. Finally, blue shows high return at X-VV polarization. Using this color scheme, the rain forest appears bright on the image, while the green areas correspond to lower vegetation. The lava flows have different colors depending on their types and are easily recognizable due to their shapes. The flows at the top of the image originated from the Mauna Loa volcano. Kilauea volcano has been almost continuously active for more than the last 11 years. Field teams that were on the ground specifically to support these radar observations report that there was vigorous surface activity about 400 meters (one-quarter mile) inland from the coast. A moving lava flow about 200 meters (650 feet) in length was observed at the time of the shuttle overflight, raising the possibility that subsequent images taken during this mission will show changes in the landscape. Currently, most of the lava that is

  5. Task-specific evaluation of 3D image interpolation techniques

    Science.gov (United States)

    Grevera, George J.; Udupa, Jayaram K.; Miki, Yukio

    1998-06-01

    Image interpolation is an important operation that is widely used in medical imaging, image processing, and computer graphics. A variety of interpolation methods are available in the literature. However, their systematic evaluation is lacking. At a previous meeting, we presented a framework for the task-independent comparison of interpolation methods based on a variety of medical image data pertaining to different parts of the human body taken from different modalities. In this new work, we present an objective, task-specific framework for evaluating interpolation techniques. The task considered is how the interpolation methods influence the accuracy of quantification of the total volume of lesions in the brain of Multiple Sclerosis (MS) patients. Sixty lesion detection experiments, derived from ten patient studies, two subsampling techniques plus the original data, and three interpolation methods, are presented along with a statistical analysis of the results. This work comprises a systematic framework for the task-specific comparison of interpolation methods. Specifically, the influence of the three interpolation methods on MS lesion quantification is compared.

  7. Detection of tibial condylar fractures using 3D imaging with a mobile image amplifier (Siemens ISO-C-3D): Comparison with plain films and spiral CT

    International Nuclear Information System (INIS)

    Purpose: To analyze a prototype mobile C-arm 3D image amplifier in the detection and classification of experimental tibial condylar fractures with multiplanar reconstructions (MPR). Method: Human knee specimens (n=22) with tibial condylar fractures were examined with a prototype C-arm (ISO-C-3D, Siemens AG), plain films (CR) and spiral CT (CT). The motorized C-arm provides fluoroscopic images during a 190° orbital rotation, computing a 119 mm data cube. From these 3D data sets MP reconstructions were obtained. All images were evaluated by four independent readers for the detection and assessment of fracture lines. All fractures were classified according to the Mueller AO classification. To confirm the results, the specimens were finally surgically dissected. Results: 97% of the tibial condylar fractures were easily seen and correctly classified according to the Mueller AO classification on MP reconstructions of the ISO-C-3D. There is no significant difference between ISO-C and CT in the detection and correct classification of fractures, but ISO-C-3D is significantly better than CR. (orig.)

  8. 3D lidar imaging for detecting and understanding plant responses and canopy structure.

    Science.gov (United States)

    Omasa, Kenji; Hosoi, Fumiki; Konishi, Atsumi

    2007-01-01

    Understanding and diagnosing plant responses to stress will benefit greatly from three-dimensional (3D) measurement and analysis of plant properties because plant responses are strongly related to their 3D structures. Light detection and ranging (lidar) has recently emerged as a powerful tool for direct 3D measurement of plant structure. Here the use of 3D lidar imaging to estimate plant properties such as canopy height, canopy structure, carbon stock, and species is demonstrated, and plant growth and shape responses are assessed by reviewing the development of lidar systems and their applications from the leaf level to canopy remote sensing. In addition, the recent creation of accurate 3D lidar images combined with natural colour, chlorophyll fluorescence, photochemical reflectance index, and leaf temperature images is demonstrated, thereby providing information on responses of pigments, photosynthesis, transpiration, stomatal opening, and shape to environmental stresses; these data can be integrated with 3D images of the plants using computer graphics techniques. Future lidar applications that provide more accurate dynamic estimation of various plant properties should improve our understanding of plant responses to stress and of interactions between plants and their environment. Moreover, combining 3D lidar with other passive and active imaging techniques will potentially improve the accuracy of airborne and satellite remote sensing, and make it possible to analyse 3D information on ecophysiological responses and levels of various substances in agricultural and ecological applications and in observations of the global biosphere. PMID:17030540

  9. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    OpenAIRE

    S. P. Singh; K. Jain; V. R. Mandla

    2014-01-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and other man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation. In the first approach, researchers use sketch-based modeling, the second method is procedural grammar based m...

  10. Understanding Immersivity: Image Generation and Transformation Processes in 3D Immersive Environments

    OpenAIRE

    Kozhevnikov, Maria; Dhond, Rupali P.

    2012-01-01

    Most research on three-dimensional (3D) visual-spatial processing has been conducted using traditional non-immersive 2D displays. Here we investigated how individuals generate and transform mental images within 3D immersive (3DI) virtual environments, in which the viewers perceive themselves as being surrounded by a 3D world. In Experiment 1, we compared participants’ performance on the Shepard and Metzler (1971) mental rotation (MR) task across the following three types of visual presentatio...

  11. Understanding immersivity: Image generation and transformation processes in 3D immersive environments

    OpenAIRE

    Maria Kozhevnikov; Dhond, Rupali P.

    2012-01-01

    Most research on three-dimensional (3D) visual-spatial processing has been conducted using traditional non-immersive 2D displays. Here we investigated how individuals generate and transform mental images within 3D immersive virtual environments, in which the viewers perceive themselves as being surrounded by a 3D world. In Experiment 1, we compared participants’ performance on the Shepard & Metzler (1971) mental rotation task across the following three types of visual presentation enviro...

  12. FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves

    Directory of Open Access Journals (Sweden)

    Yingzhi Kan

    2016-09-01

    Full Text Available In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) planar antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT based method is comparable in both efficiency and accuracy in the fully sampled case and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications. Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements.
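    The first two stages named above (the 2-D FFT along the array plane and the spherical-wave phase compensation) are sketched below for a single frequency and a single depth plane. The monostatic dispersion relation kz = sqrt(4k^2 - kx^2 - ky^2) is a standard assumption rather than a formula quoted from the record, and the FGG-NUFFT resampling of the full 3-D spectrum is not reproduced.

    ```python
    # Single-frequency backpropagation to one depth plane (illustrative assumptions).
    import numpy as np

    def backpropagate_plane(echo_xy, dx, dy, freq_hz, z0, c=3e8):
        k = 2 * np.pi * freq_hz / c                     # free-space wavenumber
        ny, nx = echo_xy.shape
        kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
        ky = 2 * np.pi * np.fft.fftfreq(ny, d=dy)
        kxx, kyy = np.meshgrid(kx, ky)
        spectrum = np.fft.fft2(echo_xy)                 # 2-D FFT along the array plane
        kz2 = 4 * k**2 - kxx**2 - kyy**2
        kz = np.sqrt(np.maximum(kz2, 0.0))              # propagating part only
        spectrum *= np.exp(1j * kz * z0)                # spherical-wave phase compensation
        spectrum[kz2 < 0] = 0.0                         # suppress evanescent components
        return np.fft.ifft2(spectrum)                   # focused image at depth z0
    ```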

  13. How accurate are the fusion of cone-beam CT and 3-D stereophotographic images?

    Directory of Open Access Journals (Sweden)

    Yasas S N Jayaratne

    Full Text Available BACKGROUND: Cone-beam Computed Tomography (CBCT) and stereophotography are two of the latest imaging modalities available for three-dimensional (3-D) visualization of craniofacial structures. However, CBCT provides only limited information on surface texture. This can be overcome by combining the bone images derived from CBCT with 3-D photographs. The objectives of this study were (1) to evaluate the feasibility of integrating 3-D photos and CBCT images, (2) to assess the degree of error that may occur during the above processes, and (3) to identify facial regions that would be most appropriate for 3-D image registration. METHODOLOGY: CBCT scans and stereophotographic images from 29 patients were used for this study. Two 3-D images corresponding to the skin and bone were extracted from the CBCT data. The 3-D photo was superimposed on the CBCT skin image using relatively immobile areas of the face as a reference. 3-D colour maps were used to assess the accuracy of superimposition, where distance differences between the CBCT and 3-D photo were recorded as the signed average and the Root Mean Square (RMS) error. PRINCIPAL FINDINGS: The signed average and RMS of the distance differences between the registered surfaces were -0.018 (±0.129) mm and 0.739 (±0.239) mm respectively. The largest errors were found in areas surrounding the lips and the eyes, while minimal errors were noted in the forehead, root of the nose and zygoma. CONCLUSIONS: CBCT and 3-D photographic data can be successfully fused with minimal errors. When compared to RMS, the signed average was found to under-represent the registration error. The virtual 3-D composite craniofacial models permit concurrent assessment of bone and soft tissues during diagnosis and treatment planning.
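    The two error statistics reported above are simple to reproduce once a per-vertex signed surface distance is available; the sketch below assumes such a distance array (in mm) from any surface-comparison tool. Because positive and negative distances cancel in the mean, the signed average understates the misfit, which is consistent with the record's observation that it under-represents the registration error relative to the RMS.

    ```python
    # Signed-average and RMS of per-vertex surface distances (in mm).
    import numpy as np

    def registration_error(distances):
        signed_average = float(np.mean(distances))            # bias; opposite signs cancel
        rms = float(np.sqrt(np.mean(np.square(distances))))   # overall misfit
        return signed_average, rms
    ```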

  14. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and other man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation. In the first approach, researchers use sketch-based modeling, the second method is procedural grammar based modeling, and the third approach is close range photogrammetry based modeling. A literature study shows that, to date, there is no complete solution available to create a complete 3D city model using images, and these image-based methods also have limitations. This paper gives a new approach towards image based virtual 3D city modeling by using close range photogrammetry. This approach is divided into three sections. First is the data acquisition process, second is 3D data processing, and third is the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required and most suitable video frames were selected for 3D processing. In the second section, a 3D model of the area is created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model is exported for adding and merging with other pieces of the larger area, and scaling and alignment of the 3D model are performed. After applying texturing and rendering to this model, a final photo-realistic textured 3D model is created. This 3D model can be converted into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost-effective and less laborious. Accuracy of this model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for a city. Aerial photography is restricted in many countries

  15. Recovering 3D tumor locations from 2D bioluminescence images and registration with CT images

    Science.gov (United States)

    Huang, Xiaolei; Metaxas, Dimitris N.; Menon, Lata G.; Mayer-Kuckuk, Philipp; Bertino, Joseph R.; Banerjee, Debabrata

    2006-02-01

    In this paper, we introduce a novel and efficient algorithm for reconstructing the 3D locations of tumor sites from a set of 2D bioluminescence images which are taken by the same camera after successively rotating the object by a small angle. Our approach requires a much simpler setup than those using multiple cameras, and the algorithmic steps in our framework are efficient and robust enough to facilitate its use in analyzing the repeated imaging of the same animal transplanted with gene-marked cells. In order to visualize the structure of the tumor in 3D, we also co-register the BLI-reconstructed crude structure with the detailed anatomical structure extracted from high-resolution microCT on a single platform. We present our method using both phantom studies and real studies on small animals.

  16. Determining 3D flow fields via multi-camera light field imaging.

    Science.gov (United States)

    Truscott, Tadd T; Belden, Jesse; Nielson, Joseph R; Daily, David J; Thomson, Scott L

    2013-03-06

    In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture (1). Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3DPIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet.
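    The synthetic aperture refocusing idea described above can be sketched as a shift-and-add operation. Assuming the camera images are already warped to a common reference plane so that refocusing reduces to shifting each image in proportion to its baseline, the code below builds a refocused image and a focal stack; the published method uses calibrated homographies rather than pure shifts.

    ```python
    # Shift-and-add synthetic aperture refocusing (alignment to a reference plane is assumed).
    import numpy as np
    from scipy import ndimage

    def refocus(images, baselines_px, alpha):
        """images: list of 2-D arrays; baselines_px: (dx, dy) per camera; alpha: relative depth."""
        stack = [
            ndimage.shift(img.astype(np.float32), (alpha * dy, alpha * dx), order=1)
            for img, (dx, dy) in zip(images, baselines_px)
        ]
        # Averaging keeps in-focus structure sharp and smears partial occluders.
        return np.mean(stack, axis=0)

    def focal_stack(images, baselines_px, alphas):
        return np.stack([refocus(images, baselines_px, a) for a in alphas])
    ```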

  17. Fast 3-D Tomographic Microwave Imaging for Breast Cancer Detection

    OpenAIRE

    Grzegorczyk, Tomasz M.; Meaney, Paul M.; Kaufman, Peter A.; DiFlorio-Alexander, Roberta M.; Paulsen, Keith D.

    2012-01-01

    Microwave breast imaging (using electromagnetic waves of frequencies around 1 GHz) has mostly remained at the research level for the past decade, gaining little clinical acceptance. The major hurdles limiting patient use are both at the hardware level (challenges in collecting accurate and noncorrupted data) and software level (often plagued by unrealistic reconstruction times in the tens of hours). In this paper we report improvements that address both issues. First, the hardware is able to ...

  18. Study of bone implants based on 3D images

    OpenAIRE

    Grau, S; Ayala Vallespí, M. Dolors; Tost Pardell, Daniela; Miño, N.; Muñoz, F.; González, A

    2005-01-01

    New medical input technologies together with computer graphics modelling and visualization software have opened a new track for biomedical sciences: the so-called in-silico experimentation, in which analysis and measurements are done on computer graphics models constructed on the basis of medical images, complementing the traditional in-vivo and in-vitro experimental methods. In this paper, we describe an in-silico experiment to evaluate bio-implants f...

  19. Quantification of the aortic arch morphology in 3D CTA images for endovascular aortic repair (EVAR)

    Science.gov (United States)

    Wörz, S.; von Tengg-Kobligk, H.; Henninger, V.; Böckler, D.; Kauczor, H.-U.; Rohr, K.

    2008-03-01

    We introduce a new model-based approach for the segmentation and quantification of the aortic arch morphology in 3D CTA images for endovascular aortic repair (EVAR). The approach is based on a 3D analytic intensity model for thick vessels, which is directly fitted to the image. Based on the fitting results we compute the (local) 3D vessel curvature and torsion as well as the relevant lengths not only along the 3D centerline but particularly along the inner and outer contour. These measurements are important for pre-operative planning in EVAR applications. We have successfully applied our approach using ten 3D CTA images and have compared the results with ground truth obtained by a radiologist. It turned out that our approach yields accurate estimation results. We have also performed a comparison with a commercial vascular analysis software.
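    Once a centerline is available, the curvature and torsion quantities mentioned above follow from the standard Frenet formulas; the finite-difference version below operates on a sampled centerline, which is an assumption, since the published approach derives these quantities from a fitted analytic vessel model.

    ```python
    # Discrete curvature and torsion of a sampled 3-D centerline (Frenet formulas).
    import numpy as np

    def curvature_torsion(centerline):
        """centerline: (N, 3) array of points sampled along the vessel."""
        d1 = np.gradient(centerline, axis=0)      # r'
        d2 = np.gradient(d1, axis=0)              # r''
        d3 = np.gradient(d2, axis=0)              # r'''
        cross = np.cross(d1, d2)
        cross_norm = np.linalg.norm(cross, axis=1)
        speed = np.linalg.norm(d1, axis=1)
        curvature = cross_norm / np.maximum(speed**3, 1e-12)
        torsion = np.einsum("ij,ij->i", cross, d3) / np.maximum(cross_norm**2, 1e-12)
        return curvature, torsion
    ```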

  20. Comparison of 3D Synthetic Aperture Imaging and Explososcan using Phantom Measurements

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer; Férin, Guillaume; Dufait, Rémi;

    2012-01-01

    In this paper, initial 3D ultrasound measurements from a 1024-channel system are presented. Measurements of 3D synthetic aperture imaging (SAI) and Explososcan are presented and compared. Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. SAI is compared to Explososcan by using tissue and wire phantom measurements. The measurements are carried out using a 1024-element 2D transducer and the 1024-channel experimental ultrasound scanner SARUS. To make a fair comparison, the two imaging techniques use the same number of active channels, the same number of emissions per frame...

  1. Algorithm of pulmonary emphysema extraction using low dose thoracic 3D CT images

    Science.gov (United States)

    Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Omatsu, H.; Tominaga, K.; Eguchi, K.; Moriyama, N.

    2006-03-01

    Recently, due to aging populations and smoking, the number of emphysema patients is increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identifies lung anatomy and extracts low attenuation areas (LAA) as emphysematous lesion candidates. By applying the algorithm to 100 thoracic 3-D CT images and then to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
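    The low-attenuation-area extraction step can be sketched as a thresholding operation inside a lung mask. The record does not state its threshold; the commonly used -950 HU cut-off and the small-component removal below are illustrative assumptions.

    ```python
    # Low-attenuation-area (LAA) extraction inside a lung mask (threshold is an assumption).
    import numpy as np
    from scipy import ndimage

    def laa_percentage(ct_hu, lung_mask, threshold_hu=-950, min_voxels=8):
        candidates = (ct_hu < threshold_hu) & lung_mask
        labels, n = ndimage.label(candidates)
        sizes = ndimage.sum(candidates, labels, index=np.arange(1, n + 1))
        keep = np.isin(labels, np.nonzero(sizes >= min_voxels)[0] + 1)
        # Percentage of lung volume classified as emphysematous, plus the lesion mask.
        return 100.0 * keep.sum() / max(lung_mask.sum(), 1), keep
    ```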

  2. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    Science.gov (United States)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    Recently, due to aging populations and smoking, the number of emphysema patients is increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identifies lung anatomy and extracts low attenuation areas (LAA) as emphysematous lesion candidates. By applying the algorithm to thoracic 3-D CT images and then to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.

  3. 3D MHD Simulations of Laser Plasma Guiding in Curved Magnetic Field

    Science.gov (United States)

    Roupassov, S.; Rankin, R.; Tsui, Y.; Capjack, C.; Fedosejevs, R.

    1999-11-01

    The guiding and confinement of laser produced plasma in a curved magnetic field has been investigated numerically. These studies were motivated by experiments on pulsed laser deposition of diamond-like films [1] in which a 1kG magnetic field in a curved solenoid geometry was utilized to steer a carbon plasma around a curved trajectory and thus to separate it from unwanted macroparticles produced by the laser ablation. The purpose of the modeling was to characterize the plasma dynamics during the propagation through the magnetic guide field and to investigate the effect of different magnetic field configurations. A 3D curvilinear ADI code developed on the basis of an existing Cartesian code [2] was employed to simulate the underlying resistive one-fluid MHD model. Issues such as large regions of low background density and nonreflective boundary conditions were addressed. Results of the simulations in a curved guide field will be presented and compared to experimental results. [1] Y.Y. Tsui, D. Vick and R. Fedosejevs, Appl. Phys. Lett. 70 (15), pp. 1953-57, 1997. [2] R. Rankin, and I. Voronkov, in "High Performance Computing Systems and Applications", pp. 59-69, Kluwer AP, 1998.

  4. HERMES Results on the 3D Imaging of the Nucleon

    Science.gov (United States)

    Pappalardo, L. L.

    2016-07-01

    In the last decades, a formalism of transverse momentum dependent parton distribution functions (TMDs) and of generalised parton distributions (GPDs) has been developed in the context of non-perturbative QCD, opening the way for tomographic imaging of the nucleon structure. TMDs and GPDs provide complementary three-dimensional descriptions of the nucleon structure in terms of parton densities. They thus contribute, with different approaches, to the understanding of the full phase-space distribution of partons. A selection of HERMES results sensitive to TMDs is presented.

  5. 3D Synchrotron Imaging of a Directionally Solidified Ternary Eutectic

    Science.gov (United States)

    Dennstedt, Anne; Helfen, Lukas; Steinmetz, Philipp; Nestler, Britta; Ratke, Lorenz

    2016-03-01

    For the first time, the microstructure of directionally solidified ternary eutectics is visualized in three dimensions, using a high-resolution X-ray tomography technique at the ESRF. The microstructure characterization is conducted with a photon energy that allows the three phases Ag2Al, Al2Cu, and the α-aluminum solid solution to be clearly discriminated. The reconstructed images illustrate the three-dimensional arrangement of the phases. The Ag2Al lamellae undergo splitting and merging as well as nucleation and disappearance events during directional solidification.

  6. An Image Hiding Scheme Using 3D Sawtooth Map and Discrete Wavelet Transform

    OpenAIRE

    Ruisong Ye; Wenping Yu

    2012-01-01

    An image encryption scheme based on the 3D sawtooth map is proposed in this paper. The 3D sawtooth map is utilized to generate chaotic orbits to permute the pixel positions and to generate pseudo-random gray value sequences to change the pixel gray values. The image encryption scheme is then applied to encrypt the secret image which will be imbedded in one host image. The encrypted secret image and the host image are transformed by the wavelet transform and then are merged in the frequency d...
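    The position-permutation part of such a scheme can be illustrated with a generic chaos-driven shuffle. The record uses the 3D sawtooth map; a 1-D logistic map is substituted below purely to show how sorting a chaotic orbit yields a reversible pixel permutation, and all parameters are illustrative.

    ```python
    # Chaos-driven pixel permutation (logistic map used as a stand-in for the 3D sawtooth map).
    import numpy as np

    def chaotic_permutation(n_pixels, x0=0.3731, r=3.99, burn_in=1000):
        x = x0
        orbit = np.empty(n_pixels)
        for i in range(burn_in + n_pixels):
            x = r * x * (1.0 - x)           # logistic map iteration
            if i >= burn_in:
                orbit[i - burn_in] = x
        return np.argsort(orbit)            # sorting the chaotic orbit yields the permutation

    def permute_image(img, perm):
        flat = img.reshape(-1, *img.shape[2:])
        return flat[perm].reshape(img.shape)

    def unpermute_image(scrambled, perm):
        inverse = np.empty_like(perm)
        inverse[perm] = np.arange(perm.size)
        return permute_image(scrambled, inverse)
    ```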

  7. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    CERN Document Server

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...

  8. A framework for human spine imaging using a freehand 3D ultrasound system

    NARCIS (Netherlands)

    Purnama, Ketut E.; Wilkinson, Michael H.F.; Veldhuizen, Albert G.; Ooijen, van Peter M.A.; Lubbers, Jaap; Burgerhof, Johannes G.M.; Sardjono, Tri A.; Verkerke, Gijsbertus J.

    2010-01-01

    The use of 3D ultrasound imaging to follow the progression of scoliosis, i.e., a 3D deformation of the spine, is described. Unlike other current examination modalities, in particular based on X-ray, its non-detrimental effect enables it to be used frequently to follow the progression of scoliosis wh

  9. Contributions in compression of 3D medical images and 2D images

    International Nuclear Information System (INIS)

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions and the rapid development of distant radiology, make the use of compression inevitable. Indeed, while the medical community has so far sided with lossless compression, most applications suffer from the compression ratios being too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. We therefore propose a new lossy coding scheme based on the 3D (3-dimensional) wavelet transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ) for medical images. Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which makes it possible to take into account correlations between neighbouring elementary volumes. At high compression ratios, we show that it can outperform the best existing methods both visually and numerically. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on the CAD (Computer-Aided Decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in reducing the complexity of our compression scheme. The first allocation, dedicated to 2D DZLVQ, uses an exponential model of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)
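    The transform-and-quantize front end of such a scheme can be sketched with PyWavelets. A scalar dead-zone quantizer is used below as a simplified stand-in for the dead-zone lattice vector quantizer (DZLVQ) described above, and the wavelet, decomposition depth, step and dead-zone width are illustrative assumptions.

    ```python
    # 3-D wavelet decomposition followed by a scalar dead-zone quantizer (simplified stand-in for DZLVQ).
    import numpy as np
    import pywt

    def deadzone_quantize(coeffs, step, deadzone):
        # Coefficients inside the dead zone map to zero; the rest are uniformly quantized.
        return np.where(
            np.abs(coeffs) < deadzone,
            0.0,
            np.sign(coeffs) * np.round(np.abs(coeffs) / step),
        )

    def compress_volume(volume, wavelet="bior4.4", levels=3, step=8.0, deadzone=12.0):
        coeffs = pywt.wavedecn(volume.astype(np.float32), wavelet, level=levels)
        arr, slices = pywt.coeffs_to_array(coeffs)   # `slices` are needed to invert later
        return deadzone_quantize(arr, step, deadzone), slices
    ```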

  10. 3D GRASE PROPELLER: Improved Image Acquisition Technique for Arterial Spin Labeling Perfusion Imaging

    Science.gov (United States)

    Tan, Huan; Hoge, W. Scott; Hamilton, Craig A.; Günther, Matthias; Kraft, Robert A.

    2014-01-01

    Arterial spin labeling (ASL) is a non-invasive technique that can quantitatively measure cerebral blood flow (CBF). While traditionally ASL employs 2D EPI or spiral acquisition trajectories, single-shot 3D GRASE is gaining popularity in ASL due to inherent SNR advantage and spatial coverage. However, a major limitation of 3D GRASE is through-plane blurring caused by T2 decay. A novel technique combining 3D GRASE and a PROPELLER trajectory (3DGP) is presented to minimize through-plane blurring without sacrificing perfusion sensitivity or increasing total scan time. Full brain perfusion images were acquired at a 3×3×5mm3 nominal voxel size with Q2TIPS-FAIR as the ASL preparation sequence. Data from 5 healthy subjects was acquired on a GE 1.5T scanner in less than 4 minutes per subject. While showing good agreement in CBF quantification with 3D GRASE, 3DGP demonstrated reduced through-plane blurring, improved anatomical details, high repeatability and robustness against motion, making it suitable for routine clinical use. PMID:21254211

  11. Rail-guided Multi-robot System for 3D Cellular Hydrogel Assembly with Coordinated Nanomanipulation

    Directory of Open Access Journals (Sweden)

    Huaping Wang

    2014-08-01

    Full Text Available The 3D assembly of micro-/nano-building blocks with multi-nanomanipulator coordinated manipulation is one of the central elements of nanomanipulation. A novel rail-guided nanomanipulation system was proposed for the assembly of a cellular vascular-like hydrogel microchannel. The system was equipped with three nanomanipulators and was restricted on the rail in order to realize the arbitrary change of the end-effectors during the assembly. It was set up with hybrid motors to achieve both a large operating space and a 30 nm positional resolution. The 2D components such as the assembly units were fabricated through the encapsulation of cells in the hydrogel. The coordinated manipulation strategies among the multi-nanomanipulators were designed with vision feedback and were demonstrated through the bottom-up assembly of the vascular-like microtube. As a result, the multi-layered microchannel was assembled through the cooperation of the nanomanipulation system.

  12. D3D augmented reality imaging system: proof of concept in mammography

    Directory of Open Access Journals (Sweden)

    Douglas DB

    2016-08-01

    Full Text Available David B Douglas (Department of Radiology, Stanford University, Palo Alto, CA), Emanuel F Petricoin (Center for Applied Proteomics and Molecular Medicine, George Mason University, Manassas, VA), Lance Liotta (Center for Applied Proteomics and Molecular Medicine, George Mason University, Manassas, VA), Eugene Wilson (Department of Radiology, Fort Benning, Columbus, GA, USA). Purpose: The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods: A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results: The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion: The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. Keywords: augmented reality, 3D medical imaging, radiology, depth perception

  13. Space Radar Image of Death Valley in 3-D

    Science.gov (United States)

    1999-01-01

    This picture is a three-dimensional perspective view of Death Valley, California. This view was constructed by overlaying a SIR-C radar image on a U.S. Geological Survey digital elevation map. The SIR-C image is centered at 36.629 degrees north latitude and 117.069 degrees west longitude. We are looking at Stove Pipe Wells, which is the bright rectangle located in the center of the picture frame. Our vantage point is located atop a large alluvial fan centered at the mouth of Cottonwood Canyon. In the foreground on the left, we can see the sand dunes near Stove Pipe Wells. In the background on the left, the Valley floor gradually falls in elevation toward Badwater, the lowest spot in the United States. In the background on the right we can see Tucki Mountain. This SIR-C/X-SAR supersite is an area of extensive field investigations and has been visited by both Space Radar Lab astronaut crews. Elevations in the Valley range from 70 meters (230 feet) below sea level, the lowest in the United States, to more than 3,300 meters (10,800 feet) above sea level. Scientists are using SIR-C/X-SAR data from Death Valley to help answer a number of different questions about Earth's geology. One question concerns how alluvial fans are formed and change through time under the influence of climatic changes and earthquakes. Alluvial fans are gravel deposits that wash down from the mountains over time. They are visible in the image as circular, fan-shaped bright areas extending into the darker valley floor from the mountains. Information about the alluvial fans helps scientists study Earth's ancient climate. Scientists know the fans are built up through climatic and tectonic processes and they will use the SIR-C/X-SAR data to understand the nature and rates of weathering processes on the fans, soil formation and the transport of sand and dust by the wind. SIR-C/X-SAR's sensitivity to centimeter-scale (inch-scale) roughness provides detailed maps of surface texture. Such information

  14. Software for browsing sectioned images of a dog body and generating a 3D model.

    Science.gov (United States)

    Park, Jin Seo; Jung, Yong Wook

    2016-01-01

The goals of this study were (1) to provide accessible and instructive browsing software for sectioned images and a portable document format (PDF) file that includes three-dimensional (3D) models of an entire dog body and (2) to develop techniques for segmentation and 3D modeling that would enable an investigator to perform these tasks without the aid of a computer engineer. To achieve these goals, relatively important or large structures in the sectioned images were outlined to generate segmented images. The sectioned and segmented images were then packaged into browsing software. In this software, structures in the sectioned images are shown in detail and in real color. After 3D models were made from the segmented images, the 3D models were exported into a PDF file. In this format, the 3D models could be manipulated freely. The browsing software and PDF file are available to students for study, to teachers for lectures, and to clinicians for training. These files will be helpful for the anatomical study and clinical training of veterinary students and clinicians. Furthermore, these techniques will be useful for researchers who study two-dimensional images and 3D models.

  15. A real-time noise filtering strategy for photon counting 3D imaging lidar.

    Science.gov (United States)

    Zhang, Zijing; Zhao, Yuan; Zhang, Yong; Wu, Long; Su, Jianzhong

    2013-04-22

For a direct-detection 3D imaging lidar, the use of a Geiger mode avalanche photodiode (Gm-APD) could greatly enhance the detection sensitivity of the lidar system, since each range measurement requires only a single detected photon. Furthermore, Gm-APD offers significant advantages in reducing the size, mass, power and complexity of the system. However, the inevitable noise, including the background noise, the dark count noise and so on, remains a significant challenge in obtaining a clear 3D image of the target of interest. This paper presents a smart strategy, which can filter out false alarms at the stage of acquisition of the raw time of flight (TOF) data and obtain a clear 3D image in real time. As a result, a clear 3D image is obtained from the experimental system despite the background noise on a sunny day. PMID:23609635
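
    As an editorial illustration (not from the paper), one common way to suppress Gm-APD false alarms is to histogram the time-of-flight events accumulated over repeated pulses for each pixel and keep only a dominant bin; the bin width and count threshold below are placeholder values, not the authors' real-time strategy.

```python
import numpy as np

def filter_tof_histogram(tof_events, bin_width_ns=1.0, min_counts=3):
    """Estimate a pixel's range from noisy Gm-APD time-of-flight events.

    tof_events : 1-D array of photon arrival times (ns) accumulated over
                 repeated laser pulses for a single pixel.
    Returns the centre of the dominant histogram bin, or NaN if no bin
    reaches `min_counts` (treated as a noise-only pixel).
    """
    tof_events = np.asarray(tof_events, dtype=float)
    if tof_events.size == 0:
        return np.nan
    edges = np.arange(tof_events.min(), tof_events.max() + bin_width_ns, bin_width_ns)
    if len(edges) < 2:
        return float(np.median(tof_events))
    counts, edges = np.histogram(tof_events, bins=edges)
    k = int(np.argmax(counts))
    if counts[k] < min_counts:
        return np.nan                       # background / dark counts only
    return 0.5 * (edges[k] + edges[k + 1])  # signal photons pile up in one bin
```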

  16. A Compact, Wide Area Surveillance 3D Imaging LIDAR Providing UAS Sense and Avoid Capabilities Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Eye safe 3D Imaging LIDARS when combined with advanced very high sensitivity, large format receivers can provide a robust wide area search capability in a very...

  17. Registration of 2D x-ray images to 3D MRI by generating pseudo-CT data

    Science.gov (United States)

    van der Bom, M. J.; Pluim, J. P. W.; Gounis, M. J.; van de Kraats, E. B.; Sprinkhuizen, S. M.; Timmer, J.; Homan, R.; Bartels, L. W.

    2011-02-01

    Spatial and soft tissue information provided by magnetic resonance imaging can be very valuable during image-guided procedures, where usually only real-time two-dimensional (2D) x-ray images are available. Registration of 2D x-ray images to three-dimensional (3D) magnetic resonance imaging (MRI) data, acquired prior to the procedure, can provide optimal information to guide the procedure. However, registering x-ray images to MRI data is not a trivial task because of their fundamental difference in tissue contrast. This paper presents a technique that generates pseudo-computed tomography (CT) data from multi-spectral MRI acquisitions which is sufficiently similar to real CT data to enable registration of x-ray to MRI with comparable accuracy as registration of x-ray to CT. The method is based on a k-nearest-neighbors (kNN)-regression strategy which labels voxels of MRI data with CT Hounsfield Units. The regression method uses multi-spectral MRI intensities and intensity gradients as features to discriminate between various tissue types. The efficacy of using pseudo-CT data for registration of x-ray to MRI was tested on ex vivo animal data. 2D-3D registration experiments using CT and pseudo-CT data of multiple subjects were performed with a commonly used 2D-3D registration algorithm. On average, the median target registration error for registration of two x-ray images to MRI data was approximately 1 mm larger than for x-ray to CT registration. The authors have shown that pseudo-CT data generated from multi-spectral MRI facilitate registration of MRI to x-ray images. From the experiments it could be concluded that the accuracy achieved was comparable to that of registering x-ray images to CT data.
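
    A minimal sketch of the kNN-regression idea described above, written with scikit-learn; the feature set (per-voxel multi-spectral intensities plus gradient magnitudes) is a simplified stand-in for the paper's full feature design, and the parameter values are illustrative only.

```python
import numpy as np
from scipy.ndimage import gaussian_gradient_magnitude
from sklearn.neighbors import KNeighborsRegressor

def build_features(mri_channels, sigma=1.0):
    """Stack per-voxel multi-spectral intensities and gradient magnitudes."""
    feats = []
    for vol in mri_channels:                       # e.g. different MR contrasts
        feats.append(vol.ravel())
        feats.append(gaussian_gradient_magnitude(vol.astype(float), sigma).ravel())
    return np.column_stack(feats)

def train_pseudo_ct(train_mri_channels, train_ct, k=5):
    """Fit a kNN regressor mapping MR features to CT Hounsfield Units."""
    X = build_features(train_mri_channels)
    y = train_ct.ravel()
    return KNeighborsRegressor(n_neighbors=k).fit(X, y)

def predict_pseudo_ct(model, mri_channels):
    """Label every voxel of a new MR acquisition with a pseudo-CT value."""
    hu = model.predict(build_features(mri_channels))
    return hu.reshape(mri_channels[0].shape)
```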

  18. GPU-Based 3D Cone-Beam CT Image Reconstruction for Large Data Volume

    OpenAIRE

    Xing Zhao; Jing-jing Hu; Peng Zhang

    2009-01-01

    Currently, 3D cone-beam CT image reconstruction speed is still a severe limitation for clinical application. The computational power of modern graphics processing units (GPUs) has been harnessed to provide impressive acceleration of 3D volume image reconstruction. For extra large data volume exceeding the physical graphic memory of GPU, a straightforward compromise is to divide data volume into blocks. Different from the conventional Octree partition method, a new partition scheme is proposed...

  19. Fast Susceptibility-Weighted Imaging (SWI) with 3D Short-Axis Propeller (SAP)-EPI

    Science.gov (United States)

    Holdsworth, Samantha J.; Yeom, Kristen W.; Moseley, Michael E.; Skare, S.

    2014-01-01

    Purpose Susceptibility-Weighted Imaging (SWI) in neuroimaging can be challenging due to long scan times of 3D Gradient Recalled Echo (GRE), while faster techniques such as 3D interleaved EPI (iEPI) are prone to motion artifacts. Here we outline and implement a 3D Short-Axis Propeller Echo-Planar Imaging (SAP-EPI) trajectory as a faster, motion-correctable approach for SWI. Methods Experiments were conducted on a 3T MRI system. 3D SAP-EPI, 3D iEPI, and 3D GRE SWI scans were acquired on two volunteers. Controlled motion experiments were conducted to test the motion-correction capability of 3D SAP-EPI. 3D SAP-EPI SWI data were acquired on two pediatric patients as a potential alternative to 2D GRE used clinically. Results 3D GRE images had a better target resolution (0.47 × 0.94 × 2mm, scan time = 5min), iEPI and SAP-EPI images (resolution = 0.94 × 0.94 × 2mm) were acquired in a faster scan time (1:52min) with twice the brain coverage. SAP-EPI showed motion-correction capability and some immunity to undersampling from rejected data. Conclusion While 3D SAP-EPI suffers from some geometric distortion, its short scan time and motion-correction capability suggest that SAP-EPI may be a useful alternative to GRE and iEPI for use in SWI, particularly in uncooperative patients. PMID:24956237

  20. Wide area 2D/3D imaging development, analysis and applications

    CERN Document Server

    Langmann, Benjamin

    2014-01-01

    Imaging technology is an important research area and it is widely utilized in a growing number of disciplines ranging from gaming, robotics and automation to medicine. In the last decade 3D imaging became popular mainly driven by the introduction of novel 3D cameras and measuring devices. These cameras are usually limited to indoor scenes with relatively low distances. Benjamin Langmann introduces medium and long-range 2D/3D cameras to overcome these limitations. He reports measurement results for these devices and studies their characteristic behavior. In order to facilitate the application o

  1. Known-component 3D-2D registration for image guidance and quality assurance in spine surgery pedicle screw placement

    Science.gov (United States)

    Uneri, A.; Stayman, J. W.; De Silva, T.; Wang, A. S.; Kleinszig, G.; Vogt, S.; Khanna, A. J.; Wolinsky, J.-P.; Gokaslan, Z. L.; Siewerdsen, J. H.

    2015-03-01

Purpose. To extend the functionality of radiographic / fluoroscopic imaging systems already within standard spine surgery workflow to: 1) provide guidance of surgical device analogous to an external tracking system; and 2) provide intraoperative quality assurance (QA) of the surgical product. Methods. Using fast, robust 3D-2D registration in combination with 3D models of known components (surgical devices), the 3D pose determination was solved to relate known components to 2D projection images and 3D preoperative CT in near-real-time. Exact and parametric models of the components were used as input to the algorithm to evaluate the effects of model fidelity. The proposed algorithm employs the covariance matrix adaptation evolution strategy (CMA-ES) to maximize gradient correlation (GC) between measured projections and simulated forward projections of components. Geometric accuracy was evaluated in a spine phantom in terms of target registration error at the tool tip (TREx), and angular deviation (TREΦ) from planned trajectory. Results. Transpedicle surgical devices (probe tool and spine screws) were successfully guided in terms of TREx and TREΦ using projection views separated by 30° (easily accommodated on a mobile C-arm). QA of the surgical product based on 3D-2D registration demonstrated the detection of pedicle screw breach in terms of TREx. Conclusions. 3D-2D registration combined with 3D models of known surgical components provides a novel method for near-real-time guidance and quality assurance using a mobile C-arm without external trackers or fiducial markers. Ongoing work includes determination of optimal views based on component shape and trajectory, improved robustness to anatomical deformation, and expanded preclinical testing in spine and intracranial surgeries.
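
    For illustration only, a minimal gradient-correlation (GC) metric of the kind maximized by this registration; the CMA-ES optimizer and the forward projector of the component model are outside this sketch and are assumed to exist elsewhere.

```python
import numpy as np

def gradient_correlation(proj_measured, proj_simulated, eps=1e-12):
    """Gradient correlation between two 2-D projections.

    GC averages the normalised cross-correlation of the horizontal and
    vertical image gradients; it peaks when edges in the simulated forward
    projection of the component model line up with the measured radiograph.
    """
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps))

    gx1, gy1 = np.gradient(proj_measured.astype(float))
    gx2, gy2 = np.gradient(proj_simulated.astype(float))
    return 0.5 * (ncc(gx1, gx2) + ncc(gy1, gy2))

# A CMA-ES loop (e.g. via the `cma` package) would then search the 6-DoF
# component pose that maximises gradient_correlation(measured, forward_project(pose)).
```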

  2. MRI Sequence Image Compression Method Based on Improved 3D SPIHT

    Institute of Scientific and Technical Information of China (English)

    蒋行国; 李丹; 陈真诚

    2013-01-01

Objective: To propose an effective compression method for MRI sequence images that addresses the storage and transmission of large numbers of such images. Methods: To reduce the computational complexity of the 3D Set Partitioning in Hierarchical Trees (SPIHT) algorithm and to avoid the repeated testing of D-type and L-type list entries, an improved 3D SPIHT method was presented and evaluated on two groups of MRI sequences with different numbers of slices and slice thicknesses. In addition, exploiting the inter-slice correlation of MRI sequences, a grouped encoding/decoding scheme was proposed; combined with the 3D wavelet transform and the improved 3D SPIHT method, it was used to compress the MRI sequences. Results: Compared with 2D SPIHT and 3D SPIHT, the grouped, improved 3D SPIHT method produced better reconstructed images and raised the peak signal-to-noise ratio (PSNR) by about 1-8 dB. Conclusion: At the same bit rate, the grouped, improved 3D SPIHT method improves PSNR and reconstruction quality, and better addresses the storage and transmission of large numbers of MRI sequence images.
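
    Since the comparison above is reported in terms of PSNR, a small helper for computing PSNR between an original and a decompressed MRI slice is sketched below; this is the generic definition, not code from the paper.

```python
import numpy as np

def psnr(original, reconstructed, peak=None):
    """Peak signal-to-noise ratio (dB) between an original slice and its
    decompressed reconstruction. `peak` defaults to the original's maximum."""
    original = original.astype(np.float64)
    reconstructed = reconstructed.astype(np.float64)
    mse = np.mean((original - reconstructed) ** 2)
    if mse == 0:
        return np.inf
    if peak is None:
        peak = original.max()
    return 10.0 * np.log10(peak ** 2 / mse)
```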

  3. Portable high-intensity focused ultrasound system with 3D electronic steering, real-time cavitation monitoring, and 3D image reconstruction algorithms: a preclinical study in pigs

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jin Woo; Lee, Jae Young; Hwang, Eui Jin; Hwang, In Pyeong; Woo, Sung Min; Lee, Chang Joo; Park, Eun Joo; Choi, Byung Ihn [Dept. of Radiology and Institute of Radiation Medicine, Seoul National University Hospital, Seoul (Korea, Republic of)

    2014-10-15

    The aim of this study was to evaluate the safety and accuracy of a new portable ultrasonography-guided high-intensity focused ultrasound (USg-HIFU) system with a 3-dimensional (3D) electronic steering transducer, a simultaneous ablation and imaging module, real-time cavitation monitoring, and 3D image reconstruction algorithms. To address the accuracy of the transducer, hydrophones in a water chamber were used to assess the generation of sonic fields. An animal study was also performed in five pigs by ablating in vivo thighs by single-point sonication (n=10) or volume sonication (n=10) and ex vivo kidneys by single-point sonication (n=10). Histological and statistical analyses were performed. In the hydrophone study, peak voltages were detected within 1.0 mm from the targets on the y- and z-axes and within 2.0-mm intervals along the x-axis (z-axis, direction of ultrasound propagation; y- and x-axes, perpendicular to the direction of ultrasound propagation). Twenty-nine of 30 HIFU sessions successfully created ablations at the target. The in vivo porcine thigh study showed only a small discrepancy (width, 0.5-1.1 mm; length, 3.0 mm) between the planning ultrasonograms and the pathological specimens. Inordinate thermal damage was not observed in the adjacent tissues or sonic pathways in the in vivo thigh and ex vivo kidney studies. Our study suggests that this new USg-HIFU system may be a safe and accurate technique for ablating soft tissues and encapsulated organs.

  4. 3D reconstruction of SEM images by use of optical photogrammetry software.

    Science.gov (United States)

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science, and many biological questions require information about the true 3D structure. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a 3D stereoscopic reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close range photogrammetry. In optical close range photogrammetry a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special SEM requirements. Instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using the software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaptation. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction and suitable for various applications in research and teaching.

  5. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    Science.gov (United States)

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

In orthognathic surgery, framing a 3D surgical plan that considers the balance between the front and back positions and the symmetry of the jawbone, as well as the dental occlusion of the teeth, is essential. In this study, a support system for orthognathic surgery has been developed to visualize the changes in the mandible and the occlusal condition and to determine the optimum position in mandibular osteotomy. By integrating the operating portion, in which the optimum occlusal position is determined by manipulating the entity tooth model, with the 3D-CT skeletal images (the 3D image display portion) that are simultaneously displayed in real time, the mandibular position and posture can be determined while taking into account the improvement of skeletal morphology and the occlusal condition. The realistic operation of the entity model and the virtual 3D image display enabled the construction of a surgical simulation system that involves augmented reality.

  6. Segmentation of vertebral bodies in CT and MR images based on 3D deterministic models

    Science.gov (United States)

    Štern, Darko; Vrtovec, Tomaž; Pernuš, Franjo; Likar, Boštjan

    2011-03-01

The evaluation of vertebral deformations is of great importance in clinical diagnostics and therapy of pathological conditions affecting the spine. Although modern clinical practice is oriented towards the computed tomography (CT) and magnetic resonance (MR) imaging techniques, as they can provide a detailed 3D representation of vertebrae, the established methods for the evaluation of vertebral deformations still provide only a two-dimensional (2D) geometrical description. Segmentation of vertebrae in 3D may therefore not only improve their visualization, but also provide reliable and accurate 3D measurements of vertebral deformations. In this paper we propose a method for 3D segmentation of individual vertebral bodies that can be performed in CT and MR images. Initialized with a single point inside the vertebral body, the segmentation is performed by optimizing the parameters of a 3D deterministic model of the vertebral body to achieve the best match of the model to the vertebral body in the image. The performance of the proposed method was evaluated on five CT (40 vertebrae) and five T2-weighted MR (40 vertebrae) spine images, five of which were normal and five pathological. The results show that the proposed method can be used for 3D segmentation of vertebral bodies in CT and MR images and that the proposed model can describe a variety of vertebral body shapes. The method may therefore be used for initializing whole vertebra segmentation or for reliably describing vertebral body deformations.

  7. Simultaneous whole-animal 3D-imaging of neuronal activity using light field microscopy

    CERN Document Server

    Prevedel, R; Hoffmann, M; Pak, N; Wetzstein, G; Kato, S; Schrödel, T; Raskar, R; Zimmer, M; Boyden, E S; Vaziri, A

    2014-01-01

3D functional imaging of neuronal activity in entire organisms at single cell level and physiologically relevant time scales faces major obstacles due to trade-offs between the size of the imaged volumes and the spatial and temporal resolution. Here, using light-field microscopy in combination with 3D deconvolution, we demonstrate intrinsically simultaneous volumetric functional imaging of neuronal population activity at single neuron resolution for an entire organism, the nematode Caenorhabditis elegans. The simplicity of our technique and the possibility of integration into epi-fluorescence microscopes make it an attractive tool for high-speed volumetric calcium imaging.
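
    A hedged sketch of the 3D deconvolution step in its most generic form (plain Richardson-Lucy with a known point-spread function); the paper's light-field PSF model and implementation details are not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_3d(observed, psf, n_iter=30, eps=1e-12):
    """Plain Richardson-Lucy deconvolution of a 3-D volume with a known PSF."""
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf = psf / psf.sum()
    psf_mirror = psf[::-1, ::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)        # data / current model prediction
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```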

  8. SEGMENTATION OF UAV-BASED IMAGES INCORPORATING 3D POINT CLOUD INFORMATION

    Directory of Open Access Journals (Sweden)

    A. Vetrivel

    2015-03-01

Full Text Available Numerous applications related to urban scene analysis demand automatic recognition of buildings and distinct sub-elements. For example, if LiDAR data is available, only 3D information could be leveraged for the segmentation. However, this poses several risks; for instance, in-plane objects cannot be distinguished from their surroundings. On the other hand, if only image based segmentation is performed, the geometric features (e.g., normal orientation, planarity) are not readily available. This renders the task of detecting the distinct sub-elements of the building with similar radiometric characteristics infeasible. In this paper the individual sub-elements of buildings are recognized through sub-segmentation of the building using geometric and radiometric characteristics jointly. 3D points generated from Unmanned Aerial Vehicle (UAV) images are used for inferring the geometric characteristics of roofs and facades of the building. However, the image-based 3D points are noisy, error prone and often contain gaps. Hence segmentation in 3D space is not appropriate. Therefore, we propose to perform segmentation in image space using geometric features from the 3D point cloud along with the radiometric features. The initial detection of buildings in the 3D point cloud is followed by segmentation in image space using the region growing approach, utilizing various radiometric and 3D point cloud features. The developed method was tested using two data sets obtained with UAV images with a ground resolution of around 1-2 cm. The developed method accurately segmented most of the building elements when compared to the plane-based segmentation using the 3D point cloud alone.
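
    An illustrative sketch of region growing in image space driven jointly by radiometric (colour) and geometric (surface-normal) cues, as described above; the per-pixel normal image projected from the UAV point cloud and the tolerance values are editorial assumptions, not the authors' implementation.

```python
import numpy as np
from collections import deque

def grow_region(rgb, normals, seed, color_tol=20.0, angle_tol_deg=15.0):
    """Region growing using colour and per-pixel surface normals.

    rgb     : (H, W, 3) image.
    normals : (H, W, 3) unit normals projected from the UAV point cloud.
    seed    : (row, col) starting pixel of the region.
    """
    h, w = rgb.shape[:2]
    grown = np.zeros((h, w), dtype=bool)
    grown[seed] = True
    queue = deque([seed])
    cos_tol = np.cos(np.deg2rad(angle_tol_deg))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not grown[rr, cc]:
                color_ok = np.linalg.norm(rgb[rr, cc].astype(float)
                                          - rgb[r, c].astype(float)) < color_tol
                normal_ok = np.dot(normals[rr, cc], normals[r, c]) > cos_tol
                if color_ok and normal_ok:
                    grown[rr, cc] = True
                    queue.append((rr, cc))
    return grown
```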

  9. Automated 3D ultrasound image segmentation to aid breast cancer image interpretation.

    Science.gov (United States)

    Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A; Yuan, Jie; Wang, Xueding; Carson, Paul L

    2016-02-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.
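
    The quantitative comparison above relies on an overlap ratio between automated and manual segmentations; a minimal Jaccard-style helper is sketched below, with the tissue label convention chosen arbitrarily for illustration.

```python
import numpy as np

def overlap_ratio(auto_mask, manual_mask):
    """Jaccard overlap ratio |A ∩ M| / |A ∪ M| between two binary masks."""
    auto_mask = auto_mask.astype(bool)
    manual_mask = manual_mask.astype(bool)
    union = np.logical_or(auto_mask, manual_mask).sum()
    if union == 0:
        return 1.0
    return np.logical_and(auto_mask, manual_mask).sum() / union

def per_tissue_overlap(auto_labels, manual_labels, tissue_ids=(1, 2, 3)):
    """Overlap ratio per class (e.g. 1 = cyst/mass, 2 = fat, 3 = fibro-glandular)."""
    return {t: overlap_ratio(auto_labels == t, manual_labels == t) for t in tissue_ids}
```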

  10. An Image Hiding Scheme Using 3D Sawtooth Map and Discrete Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Ruisong Ye

    2012-07-01

Full Text Available An image encryption scheme based on the 3D sawtooth map is proposed in this paper. The 3D sawtooth map is utilized to generate chaotic orbits to permute the pixel positions and to generate pseudo-random gray value sequences to change the pixel gray values. The image encryption scheme is then applied to encrypt the secret image, which will be embedded in one host image. The encrypted secret image and the host image are transformed by the wavelet transform and then merged in the frequency domain. Experimental results show that the stego-image looks visually identical to the original host image and that the secret image can be effectively extracted even after image processing attacks, demonstrating strong robustness against a variety of attacks.
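
    As a generic illustration of merging a (previously encrypted) secret image into a host in the wavelet domain, a minimal sketch using PyWavelets is given below; the embedding strength `alpha`, the choice of sub-band and the non-blind extraction are placeholders, not the scheme's actual parameters.

```python
import numpy as np
import pywt

def embed_in_dwt(host, encrypted_secret, alpha=0.05, wavelet="haar"):
    """Blend an encrypted secret image into one detail sub-band of the host's
    single-level 2-D DWT, then reconstruct the stego-image."""
    cA, (cH, cV, cD) = pywt.dwt2(host.astype(float), wavelet)
    secret = np.resize(encrypted_secret.astype(float), cD.shape)
    cD_marked = cD + alpha * secret            # additive embedding in the HH band
    return pywt.idwt2((cA, (cH, cV, cD_marked)), wavelet)

def extract_from_dwt(stego, host, secret_shape, alpha=0.05, wavelet="haar"):
    """Non-blind extraction: recover the embedded data using the original host."""
    _, (_, _, cD_marked) = pywt.dwt2(stego.astype(float), wavelet)
    _, (_, _, cD) = pywt.dwt2(host.astype(float), wavelet)
    recovered = (cD_marked - cD) / alpha
    return np.resize(recovered, secret_shape)
```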

  11. In vivo 3D PIXE-micron-CT imaging of Drosophila melanogaster using a contrast agent

    Energy Technology Data Exchange (ETDEWEB)

    Matsuyama, Shigeo; Hamada, Naoki; Ishii, Keizo; Nozawa, Yuichiro; Ohkura, Satoru; Terakawa, Atsuki; Hatori, Yoshinobu; Fujiki, Kota; Fujiwara, Mitsuhiro; Toyama, Sho

    2015-04-01

    In this study, we developed a three-dimensional (3D) computed tomography (CT) in vivo imaging system for imaging small insects with micrometer resolution. The 3D CT imaging system, referred to as 3D PIXE-micron-CT (PIXEμCT), uses characteristic X-rays produced by ion microbeam bombardment of a metal target. PIXEμCT was used to observe the body organs and internal structure of a living Drosophila melanogaster. Although the organs of the thorax were clearly imaged, the digestive organs in the abdominal cavity could not be clearly discerned initially, with the exception of the rectum and the Malpighian tubule. To enhance the abdominal images, a barium sulfate powder radiocontrast agent was added. For the first time, 3D images of the ventriculus of a living D. melanogaster were obtained. Our results showed that PIXEμCT can provide in vivo 3D-CT images that reflect correctly the structure of individual living organs, which is expected to be very useful in biological research.

  12. Technical Note: Characterization of custom 3D printed multimodality imaging phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Bieniosek, Matthew F. [Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, California 94305 (United States); Lee, Brian J. [Department of Mechanical Engineering, Stanford University, 440 Escondido Mall, Stanford, California 94305 (United States); Levin, Craig S., E-mail: cslevin@stanford.edu [Departments of Radiology, Physics, Bioengineering and Electrical Engineering, Stanford University, 300 Pasteur Dr., Stanford, California 94305-5128 (United States)

    2015-10-15

    Purpose: Imaging phantoms are important tools for researchers and technicians, but they can be costly and difficult to customize. Three dimensional (3D) printing is a widely available rapid prototyping technique that enables the fabrication of objects with 3D computer generated geometries. It is ideal for quickly producing customized, low cost, multimodal, reusable imaging phantoms. This work validates the use of 3D printed phantoms by comparing CT and PET scans of a 3D printed phantom and a commercial “Micro Deluxe” phantom. This report also presents results from a customized 3D printed PET/MRI phantom, and a customized high resolution imaging phantom with sub-mm features. Methods: CT and PET scans of a 3D printed phantom and a commercial Micro Deluxe (Data Spectrum Corporation, USA) phantom with 1.2, 1.6, 2.4, 3.2, 4.0, and 4.8 mm diameter hot rods were acquired. The measured PET and CT rod sizes, activities, and attenuation coefficients were compared. A PET/MRI scan of a custom 3D printed phantom with hot and cold rods was performed, with photon attenuation and normalization measurements performed with a separate 3D printed normalization phantom. X-ray transmission scans of a customized two level high resolution 3D printed phantom with sub-mm features were also performed. Results: Results show very good agreement between commercial and 3D printed micro deluxe phantoms with less than 3% difference in CT measured rod diameter, less than 5% difference in PET measured rod diameter, and a maximum of 6.2% difference in average rod activity from a 10 min, 333 kBq/ml (9 μCi/ml) Siemens Inveon (Siemens Healthcare, Germany) PET scan. In all cases, these differences were within the measurement uncertainties of our setups. PET/MRI scans successfully identified 3D printed hot and cold rods on PET and MRI modalities. X-ray projection images of a 3D printed high resolution phantom identified features as small as 350 μm wide. Conclusions: This work shows that 3D printed

  13. Technical Note: Characterization of custom 3D printed multimodality imaging phantoms

    International Nuclear Information System (INIS)

    Purpose: Imaging phantoms are important tools for researchers and technicians, but they can be costly and difficult to customize. Three dimensional (3D) printing is a widely available rapid prototyping technique that enables the fabrication of objects with 3D computer generated geometries. It is ideal for quickly producing customized, low cost, multimodal, reusable imaging phantoms. This work validates the use of 3D printed phantoms by comparing CT and PET scans of a 3D printed phantom and a commercial “Micro Deluxe” phantom. This report also presents results from a customized 3D printed PET/MRI phantom, and a customized high resolution imaging phantom with sub-mm features. Methods: CT and PET scans of a 3D printed phantom and a commercial Micro Deluxe (Data Spectrum Corporation, USA) phantom with 1.2, 1.6, 2.4, 3.2, 4.0, and 4.8 mm diameter hot rods were acquired. The measured PET and CT rod sizes, activities, and attenuation coefficients were compared. A PET/MRI scan of a custom 3D printed phantom with hot and cold rods was performed, with photon attenuation and normalization measurements performed with a separate 3D printed normalization phantom. X-ray transmission scans of a customized two level high resolution 3D printed phantom with sub-mm features were also performed. Results: Results show very good agreement between commercial and 3D printed micro deluxe phantoms with less than 3% difference in CT measured rod diameter, less than 5% difference in PET measured rod diameter, and a maximum of 6.2% difference in average rod activity from a 10 min, 333 kBq/ml (9 μCi/ml) Siemens Inveon (Siemens Healthcare, Germany) PET scan. In all cases, these differences were within the measurement uncertainties of our setups. PET/MRI scans successfully identified 3D printed hot and cold rods on PET and MRI modalities. X-ray projection images of a 3D printed high resolution phantom identified features as small as 350 μm wide. Conclusions: This work shows that 3D printed

  14. Contextually Guided Semantic Labeling and Search for 3D Point Clouds

    CERN Document Server

    Anand, Abhishek; Joachims, Thorsten; Saxena, Ashutosh

    2011-01-01

RGB-D cameras, which give an RGB image together with depths, are becoming increasingly popular for robotic perception. In this paper, we address the task of detecting commonly found objects in the 3D point cloud of indoor scenes obtained from such cameras. Our method uses a graphical model that captures various features and contextual relations, including the local visual appearance and shape cues, object co-occurrence relationships and geometric relationships. With a large number of object classes and relations, the model's parsimony becomes important and we address that by using multiple types of edge potentials. We train the model using a maximum-margin learning approach. In our experiments over a total of 52 3D scenes of homes and offices (composed from about 550 views), we get a performance of 84.06% and 73.38% in labeling office and home scenes respectively for 17 object classes each. We also present a method for a robot to search for an object using the learned model and the contextual information ava...

  15. 3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies

    Science.gov (United States)

    Periverzov, Frol; Ilieş, Horea T.

    2012-09-01

Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

  16. Flatbed-type 3D display systems using integral imaging method

    Science.gov (United States)

    Hirayama, Yuzo; Nagatani, Hiroyuki; Saishu, Tatsuo; Fukushima, Rieko; Taira, Kazuki

    2006-10-01

We have developed prototypes of flatbed-type autostereoscopic display systems using a one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Our display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle and has continuous motion parallax. We have applied our technology to 15.4-inch displays. We realized a horizontal resolution of 480 with 12 parallaxes thanks to the adoption of a mosaic pixel arrangement on the display panel. It allows viewers to see high quality autostereoscopic images. Viewing the display from an angle allows the viewer to experience 3-D images that stand out several centimeters from the surface of the display. Mixed reality of virtual 3-D objects and real objects is also realized on a flatbed display. In seeking reproduction of natural 3-D images on the flatbed display, we developed proprietary software. Fast playback of CG movie content and real-time interaction are realized with the aid of a graphics card. Ensuring that 3-D images are safe for human viewers is very important. Therefore, we have measured the effects on visual function and evaluated the biological effects. For example, accommodation and convergence were measured at the same time. Various biological effects were also measured before and after the task of watching 3-D images. We have found that our displays show better results than a conventional stereoscopic display. The new technology opens up new areas of application for 3-D displays, including arcade games, e-learning, simulations of buildings and landscapes, and even 3-D menus in restaurants.

  17. Online 3D terrain visualisation using Unity 3D game engine: A comparison of different contour intervals terrain data draped with UAV images

    Science.gov (United States)

    Hafiz Mahayudin, Mohd; Che Mat, Ruzinoor

    2016-06-01

The main objective of this paper is to discuss the effectiveness of visualising terrain draped with Unmanned Aerial Vehicle (UAV) images generated from different contour intervals using the Unity 3D game engine in an online environment. The study area tested in this project was an oil palm plantation at Sintok, Kedah. The contour data used for this study were divided into three different intervals: 1 m, 3 m and 5 m. ArcGIS software was used to clip the contour data and the UAV image data to a similar size for the overlaying process. The Unity 3D game engine was used as the main platform for developing the system due to its capability to launch on different platforms. The clipped contour data and UAV image data were processed and exported into web format using Unity 3D. The process continued by publishing the result to a web server and comparing the effectiveness of the different 3D terrain data (contour data) draped with UAV images. The effectiveness was compared based on data size, loading time (office and out-of-office hours), response time, visualisation quality, and frames per second (fps). The results suggest which contour interval is better for developing an effective online 3D terrain visualisation draped with UAV images using the Unity 3D game engine. It therefore benefits decision makers and planners in this field when deciding which contour interval is applicable for their task.

  18. The diagnostic value of 3D spiral CT imaging of cholangiopancreatic ducts on obstructive jaundice

    Institute of Scientific and Technical Information of China (English)

    Linquan Wu; Xiangbao Yin; Qingshan Wang; Bohua Wu; Xiao Li; Huaqun Fu

    2011-01-01

Objective: Computerized tomography (CT) plays an important role in the diagnosis of diseases of the biliary tract. Recently, three-dimensional (3D) spiral CT imaging has gradually come into use for surgical diseases. This study was designed to evaluate the diagnostic value of 3D spiral CT imaging of the cholangiopancreatic ducts in obstructive jaundice. Methods: Thirty patients with obstructive jaundice received B-mode ultrasonography, CT, percutaneous transhepatic cholangiography (PTC) or endoscopic retrograde cholangiopancreatography (ERCP), and 3D spiral CT imaging of the cholangiopancreatic ducts preoperatively. The diagnostic accordance rates of these examination methods were then compared after the operations. Results: The diagnostic accordance rate of 3D spiral CT imaging of the cholangiopancreatic ducts was higher than those of B-mode ultrasonography, CT, or PTC or ERCP alone, and it showed clear images of the bile duct tree and of pathological changes. As for malignant obstructive jaundice, this examination technique could clearly display the adjacent relationship between the tumor and the liver tissue, biliary ducts, blood vessels, and intrahepatic metastases. Conclusion: 3D spiral CT imaging of the cholangiopancreatic ducts has significant value for obstructive diseases of the biliary ducts and provides effective evidence for the feasibility of tumor resection and for surgical options.

  19. Midsagittal plane extraction from brain images based on 3D SIFT

    International Nuclear Information System (INIS)

    Midsagittal plane (MSP) extraction from 3D brain images is considered as a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on 3D scale-invariant feature transform (SIFT). Unlike the existing brain MSP extraction methods, which mainly rely on the gray similarity, 3D edge registration or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median of squares plane regression. By considering the relative scales, orientations and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude for 3D SIFT features. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve the optimal MSP on-the-fly. The proposed method is evaluated by synthetic and in vivo datasets, of normal and pathological cases, and validated by comparisons with the state-of-the-art methods. Experimental results demonstrated that our method has achieved a real-time performance with better accuracy yielding an average yaw angle error below 0.91° and an average roll angle error no more than 0.89°. (paper)
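
    For illustration, a least-median-of-squares plane fit of the kind used to regress the fissure plane from matched symmetric feature pairs; the 3D SIFT extraction, matching, and GPU KD-tree indexing are assumed to have produced the input points already, and the trial count is arbitrary.

```python
import numpy as np

def lmeds_plane(points, n_trials=500, rng=None):
    """Least-median-of-squares plane fit to 3-D points (e.g. midpoints of
    matched left/right feature pairs). Returns (normal, d) with
    normal . p + d = 0 for points p on the plane."""
    rng = np.random.default_rng(rng)
    points = np.asarray(points, dtype=float)
    best, best_med = None, np.inf
    for _ in range(n_trials):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue                        # degenerate (collinear) sample
        normal /= norm
        d = -np.dot(normal, p0)
        residuals = np.abs(points @ normal + d)
        med = np.median(residuals ** 2)     # LMedS criterion
        if med < best_med:
            best_med, best = med, (normal, d)
    return best
```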

  20. Detection of Connective Tissue Disorders from 3D Aortic MR Images Using Independent Component Analysis

    DEFF Research Database (Denmark)

    Hansen, Michael Sass; Zhao, Fei; Zhang, Honghai;

    2006-01-01

A computer-aided diagnosis (CAD) method is reported that allows the objective identification of subjects with connective tissue disorders from 3D aortic MR images using segmentation and independent component analysis (ICA). The first step to extend the model to 4D (3D + time) has also been taken. ICA is an effective tool for connective tissue disease detection in the presence of sparse data, using prior knowledge to order the components, and the components can be inspected visually. 3D+time MR image data sets acquired from 31 normal and connective tissue disorder subjects at end-diastole (R-wave peak) and at 45% of the R-R interval were used to evaluate the performance of our method. The automated 3D segmentation result produced accurate aortic surfaces covering the aorta. The CAD method distinguished between normal and connective tissue disorder subjects with a classification...
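
    A hedged sketch of the general CAD pattern described above (ICA-based decomposition of subject features followed by a classifier), using scikit-learn; the aortic shape features, the logistic-regression classifier, and the component count are editorial assumptions rather than the authors' exact pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def train_ctd_classifier(shape_features, labels, n_components=5, random_state=0):
    """shape_features : (n_subjects, n_features) matrix derived from the
    segmented 3D (+time) aortic surfaces; labels : 0 = normal, 1 = disorder.

    ICA unmixes the features into independent components; a simple linear
    classifier then separates normal from connective-tissue-disorder subjects.
    """
    model = make_pipeline(
        FastICA(n_components=n_components, random_state=random_state),
        LogisticRegression(max_iter=1000),
    )
    return model.fit(np.asarray(shape_features), np.asarray(labels))
```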

  1. 3D bioprinting matrices with controlled pore structure and release function guide in vitro self-organization of sweat gland

    Science.gov (United States)

    Liu, Nanbo; Huang, Sha; Yao, Bin; Xie, Jiangfan; Wu, Xu; Fu, Xiaobing

    2016-01-01

3D bioprinting matrices are novel platforms for tissue regeneration. Tissue self-organization is a critical process during regeneration that implies the features of organogenesis. However, it is not clear from current evidence whether a 3D printed construct plays a role in guiding tissue self-organization in vitro. Based on our previous study, we bioprinted a 3D matrix as a restrictive niche for direct sweat gland differentiation of epidermal progenitors with different pore structures (printed with 300-μm or 400-μm nozzle diameters) and report that a long-term, gradual transition of differentiated cells into glandular morphogenesis occurs within the 3D construct in vitro. During the initial 14-day culture, accelerated cell differentiation was achieved as inductive cues were released along with gelatin reduction. After protein release was complete, the 3D construct guided the self-organized formation of sweat gland tissues, similar to the natural developmental process. However, glandular morphogenesis was only observed in the 300-μm-printed constructs. In the absence of 3D architectural support, glandular morphogenesis did not occur. This striking finding led us to identify a previously unknown role of the 3D-printed structure in glandular tissue regeneration, and this self-organizing strategy can be applied to forming other tissues in vitro. PMID:27694985

  2. Synthesis of 3D Model of a Magnetic Field-Influenced Body from a Single Image

    Science.gov (United States)

    Wang, Cuilan; Newman, Timothy; Gallagher, Dennis

    2006-01-01

    A method for recovery of a 3D model of a cloud-like structure that is in motion and deforming but approximately governed by magnetic field properties is described. The method allows recovery of the model from a single intensity image in which the structure's silhouette can be observed. The method exploits envelope theory and a magnetic field model. Given one intensity image and the segmented silhouette in the image, the method proceeds without human intervention to produce the 3D model. In addition to allowing 3D model synthesis, the method's capability to yield a very compact description offers further utility. Application of the method to several real-world images is demonstrated.

  3. 3D image copyright protection based on cellular automata transform and direct smart pixel mapping

    Science.gov (United States)

    Li, Xiao-Wei; Kim, Seok-Tae; Lee, In-Kwon

    2014-10-01

    We propose a three-dimensional (3D) watermarking system with the direct smart pixel mapping algorithm to improve the resolution of the reconstructed 3D watermark plane images. The depth-converted elemental image array (EIA) is obtained through the computational pixel mapping method. In the watermark embedding process, the depth-converted EIA is first scrambled by using the Arnold transform, which is then embedded in the middle frequency of the cellular automata (CA) transform. Compared with conventional computational integral imaging reconstruction (CIIR) methods, this proposed scheme gives us a higher resolution of the reconstructed 3D plane images by using the quality-enhanced depth-converted EIA. The proposed method, which can obtain many transform planes for embedding watermark data, uses CA transforms with various gateway values. To prove the effectiveness of the proposed method, we present the results of our preliminary experiments.
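
    As an aside, the Arnold-transform scrambling step mentioned above can be sketched in a few lines; the iteration count acts as a key-like parameter, and the cellular automata transform embedding itself is not shown here.

```python
import numpy as np

def arnold_scramble(img, iterations=1):
    """Arnold cat-map scrambling of a square image: (x, y) -> (x + y, x + 2y) mod N."""
    n = img.shape[0]
    assert img.shape[0] == img.shape[1], "Arnold transform needs a square image"
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def arnold_unscramble(img, iterations=1):
    """Inverse map: (x, y) -> (2x - y, y - x) mod N, applied the same number of times."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        restored = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                restored[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = restored
    return out
```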

  4. Single-pixel 3D imaging with time-based depth resolution

    CERN Document Server

    Sun, Ming-Jie; Gibson, Graham M; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J

    2016-01-01

Time-of-flight three-dimensional imaging is an important tool for many applications, such as object recognition and remote sensing. Unlike the conventional imaging approach using a pixelated detector array, single-pixel imaging based on projected patterns, such as Hadamard patterns, utilises an alternative sampling strategy to acquire information. Here we show a modified single-pixel camera using a pulsed illumination source and a high-speed photodiode, capable of reconstructing 128x128 pixel resolution 3D scenes to an accuracy of ~3 mm at a range of ~5 m. Furthermore, we demonstrate continuous real-time 3D video with a frame-rate up to 12 Hz. The simplicity of the system hardware could enable low-cost 3D imaging devices for precision ranging at wavelengths beyond the visible spectrum.
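
    A toy simulation of Hadamard-pattern single-pixel imaging (spatial reconstruction only, ignoring the pulsed time-of-flight depth measurement); the explicit Hadamard matrix is only practical for small images, so the scene here is assumed to be something like 32 x 32.

```python
import numpy as np
from scipy.linalg import hadamard

def single_pixel_simulation(scene):
    """Simulate single-pixel acquisition of a small square scene with +/-1
    Hadamard patterns and recover it from the bucket-detector signal.

    The explicit N x N Hadamard matrix (N = number of pixels, a power of two)
    is only practical for small scenes; real systems use fast structured
    transforms and sub-sampling instead.
    """
    side = scene.shape[0]
    n = side * side
    h = hadamard(n)
    x = scene.reshape(n).astype(float)
    measurements = h @ x                   # one photodiode reading per pattern
    recovered = (h.T @ measurements) / n   # H H^T = N I, so this inverts exactly
    return recovered.reshape(side, side)
```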

  5. 3D X-ray microscopy: image formation, tomography and instrumentation

    OpenAIRE

    Selin, Mårten

    2016-01-01

    Tomography in soft X-ray microscopy is an emerging technique for obtaining quantitative 3D structural information about cells. One of its strengths, compared with other techniques, is that it can image intact cells in their near-native state at a few 10 nm’s resolution, without staining. However, the methods for reconstructing 3D-data rely on algorithms that assume projection data, which the images are generally not due to the imaging systems’ limited depth of focus. To bring out the full pot...

  6. Fully 3D PET image reconstruction with a 4D sinogram blurring kernel

    Energy Technology Data Exchange (ETDEWEB)

Tohme, Michel S.; Qi, Jinyi [California Univ., Davis, CA (United States). Dept. of Biomedical Engineering]; Zhou, Jian

    2011-07-01

    Accurately modeling PET system response is essential for high-resolution image reconstruction. Traditionally, sinogram blurring effects are modeled as a 2D blur in each sinogram plane. Such 2D blurring kernel is insufficient for fully 3D PET data, which has four dimensions. In this paper, we implement a fully 3D PET image reconstruction using a 4D sinogram blurring kernel estimated from point source scans and perform phantom experiments to evaluate the improvements in image quality over methods with existing 2D blurring kernels. The results show that the proposed reconstruction method can achieve better spatial resolution and contrast recovery than existing methods. (orig.)

  7. Quantitative Morphological and Biochemical Studies on Human Downy Hairs using 3-D Quantitative Phase Imaging

    CERN Document Server

    Lee, SangYun; Lee, Yuhyun; Park, Sungjin; Shin, Heejae; Yang, Jongwon; Ko, Kwanhong; Park, HyunJoo; Park, YongKeun

    2015-01-01

    This study presents the morphological and biochemical findings on human downy arm hairs using 3-D quantitative phase imaging techniques. 3-D refractive index tomograms and high-resolution 2-D synthetic aperture images of individual downy arm hairs were measured using a Mach-Zehnder laser interferometric microscopy equipped with a two-axis galvanometer mirror. From the measured quantitative images, the biochemical and morphological parameters of downy hairs were non-invasively quantified including the mean refractive index, volume, cylinder, and effective radius of individual hairs. In addition, the effects of hydrogen peroxide on individual downy hairs were investigated.

  8. DART : a 3D model for remote sensing images and radiative budget of earth surfaces

    OpenAIRE

    Gastellu-Etchegorry, J.P.; Grau, E.; Lauret, N.

    2012-01-01

    Modeling the radiative behavior and the energy budget of land surfaces is relevant for many scientific domains such as the study of vegetation functioning with remotely acquired information. DART model (Discrete Anisotropic Radiative Transfer) is developed since 1992. It is one of the most complete 3D models in this domain. It simulates radiative transfer (R.T.) in the optical domain: 3D radiative budget and remote sensing images (i.e., radiance, reflectance, brightness temperature) of vegeta...

  9. Fusion of laser and image sensory data for 3-D modeling of the free navigation space

    Science.gov (United States)

    Mass, M.; Moghaddamzadeh, A.; Bourbakis, N.

    1994-01-01

A fusion technique which combines two different types of sensory data for 3-D modeling of a navigation space is presented. The sensory data are generated by a vision camera and a laser scanner. The problem of the different resolutions of these sensory data was solved by reducing the image resolution, fusing the different data, and using a fuzzy image segmentation technique.

  10. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    Directory of Open Access Journals (Sweden)

    Saeed Seyyedi

    2013-01-01

    Full Text Available Digital breast tomosynthesis (DBT is an innovative imaging modality that provides 3D reconstructed images of breast to detect the breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct 3D image of breast. Several reconstruction algorithms are available for DBT imaging. Filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as algebraic reconstruction technique (ART were later developed. Recently, compressed sensing based methods have been proposed in tomosynthesis imaging problem. We have developed an object-oriented simulator for 3D digital breast tomosynthesis (DBT imaging system using C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user friendly graphical user interface (GUI helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates breast tomosynthesis imaging problem. Results obtained with various methods including algebraic reconstruction technique (ART and total variation regularized reconstruction techniques (ART+TV are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating performances of the methods using mean structural similarity (MSSIM values.
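
    To make the ART option above concrete, a minimal Kaczmarz-style ART solver for a generic linear projection model A x = b is sketched below (in Python rather than the simulator's C++); the relaxation factor and non-negativity clip are illustrative choices, and the total-variation-regularized variant is not shown.

```python
import numpy as np

def art_reconstruct(A, b, n_iter=10, relax=0.2, x0=None):
    """Algebraic reconstruction technique (Kaczmarz sweeps) for A x = b.

    A : (n_rays, n_voxels) projection matrix, b : measured projection values.
    Each ray update projects the current estimate onto that ray's hyperplane.
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros(A.shape[1]) if x0 is None else x0.astype(float).copy()
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * residual / row_norms[i] * A[i]
        x = np.clip(x, 0, None)            # enforce non-negative attenuation
    return x
```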

  11. 3D FACE RECOGNITION FROM RANGE IMAGES BASED ON CURVATURE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Suranjan Ganguly

    2014-02-01

Full Text Available In this paper, we present a novel approach for three-dimensional face recognition by extracting curvature maps from range images. There are four types of curvature maps: Gaussian, Mean, Maximum and Minimum curvature maps. These curvature maps are used as features for 3D face recognition. The dimension of these feature vectors is reduced using the Singular Value Decomposition (SVD) technique. From the three computed SVD components, the non-negative values of the ‘S’ part of the SVD are ranked and used as the feature vector. In this proposed method, two pair-wise curvature computations are done: one uses the Mean and Maximum curvature pair and the other uses the Gaussian and Mean curvature pair. These are used to compare the results for a better recognition rate. This automated 3D face recognition system is evaluated in different scenarios: frontal pose with expression and illumination variation, frontal faces along with registered faces, only registered faces, and registered faces from different pose orientations across the X, Y and Z axes. The 3D face images used for this research work are taken from the FRAV3D database. Pose-variant 3D facial images are registered to the frontal pose by applying a one-to-all registration technique, and then curvature mapping is applied on the registered face images along with the remaining frontal face images. For classification and recognition, a five-layer feed-forward back-propagation neural network classifier is used, and the corresponding results are discussed in section 4.
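
    An illustrative computation of the four curvature maps from a range image z(x, y) using finite differences, followed by the SVD-based feature described above; smoothing, registration, and the neural-network classifier are omitted, and the number of retained singular values is an arbitrary choice.

```python
import numpy as np

def curvature_maps(depth):
    """Gaussian (K), mean (H), maximum and minimum curvature maps of a range
    image z(x, y), using finite-difference derivatives."""
    z = depth.astype(float)
    zy, zx = np.gradient(z)                # first derivatives (rows = y, cols = x)
    zxy, zxx = np.gradient(zx)
    zyy, _ = np.gradient(zy)
    denom = 1.0 + zx ** 2 + zy ** 2
    K = (zxx * zyy - zxy ** 2) / denom ** 2
    H = ((1 + zx ** 2) * zyy - 2 * zx * zy * zxy
         + (1 + zy ** 2) * zxx) / (2 * denom ** 1.5)
    disc = np.sqrt(np.maximum(H ** 2 - K, 0.0))
    return K, H, H + disc, H - disc        # Gaussian, mean, max, min curvature

def svd_feature(curvature_map, n_values=20):
    """Rank the non-negative singular values of a curvature map as a compact feature."""
    s = np.linalg.svd(curvature_map, compute_uv=False)
    return np.sort(s)[::-1][:n_values]
```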

  12. 3-D MRI/CT fusion imaging of the lumbar spine

    Energy Technology Data Exchange (ETDEWEB)

    Yamanaka, Yuki; Kamogawa, Junji; Misaki, Hiroshi; Kamada, Kazuo; Okuda, Shunsuke; Morino, Tadao; Ogata, Tadanori; Yamamoto, Haruyasu [Ehime University, Department of Bone and Joint Surgery, Toon-shi, Ehime (Japan); Katagi, Ryosuke; Kodama, Kazuaki [Katagi Neurological Surgery, Imabari-shi, Ehime (Japan)

    2010-03-15

    The objective was to demonstrate the feasibility of MRI/CT fusion in demonstrating lumbar nerve root compromise. We combined 3-dimensional (3-D) computed tomography (CT) imaging of bone with 3-D magnetic resonance imaging (MRI) of neural architecture (cauda equina and nerve roots) for two patients using VirtualPlace software. Although the pathological condition of nerve roots could not be assessed using MRI, myelography or CT myelography, 3-D MRI/CT fusion imaging enabled unambiguous, 3-D confirmation of the pathological state and courses of nerve roots, both inside and outside the foraminal arch, as well as thickening of the ligamentum flavum and the locations, forms and numbers of dorsal root ganglia. Positional relationships between intervertebral discs or bony spurs and nerve roots could also be depicted. Use of 3-D MRI/CT fusion imaging for the lumbar vertebral region successfully revealed the relationship between bone construction (bones, intervertebral joints, and intervertebral disks) and neural architecture (cauda equina and nerve roots) on a single film, three-dimensionally and in color. Such images may be useful in elucidating complex neurological conditions such as degenerative lumbar scoliosis(DLS), as well as in diagnosis and the planning of minimally invasive surgery. (orig.)

  13. Digital holographic microscopy for imaging growth and treatment response in 3D tumor models

    Science.gov (United States)

    Li, Yuyu; Petrovic, Ljubica; Celli, Jonathan P.; Yelleswarapu, Chandra S.

    2014-03-01

    While three-dimensional tumor models have emerged as valuable tools in cancer research, the ability to longitudinally visualize the 3D tumor architecture restored by these systems is limited with microscopy techniques that provide only qualitative insight into sample depth, or which require terminal fixation for depth-resolved 3D imaging. Here we report the use of digital holographic microscopy (DHM) as a viable microscopy approach for quantitative, non-destructive longitudinal imaging of in vitro 3D tumor models. Following established methods we prepared 3D cultures of pancreatic cancer cells in overlay geometry on extracellular matrix beds and obtained digital holograms at multiple timepoints throughout the duration of growth. The holograms were digitally processed and the unwrapped phase images were obtained to quantify nodule thickness over time under normal growth, and in cultures subject to chemotherapy treatment. In this manner total nodule volumes are rapidly estimated and demonstrated here to show contrasting time dependent changes during growth and in response to treatment. This work suggests the utility of DHM to quantify changes in 3D structure over time and suggests the further development of this approach for time-lapse monitoring of 3D morphological changes during growth and in response to treatment that would otherwise be impractical to visualize.
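
    A minimal sketch of how an unwrapped DHM phase image can be converted to a thickness map and a total nodule volume estimate, assuming a uniform refractive-index difference between nodule and surrounding medium; the constants are placeholders, not values from the study.

```python
import numpy as np

def nodule_thickness_and_volume(unwrapped_phase, wavelength_um, delta_n, pixel_area_um2):
    """Convert an unwrapped phase image (radians) to an optical thickness map
    and a total volume, for a uniform refractive-index difference delta_n."""
    thickness_um = unwrapped_phase * wavelength_um / (2.0 * np.pi * delta_n)
    thickness_um = np.clip(thickness_um, 0, None)   # suppress negative phase noise
    volume_um3 = thickness_um.sum() * pixel_area_um2
    return thickness_um, volume_um3
```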

  14. Automatic Texture Reconstruction of 3d City Model from Oblique Images

    Science.gov (United States)

    Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang

    2016-06-01

    In recent years, the photorealistic 3D city models are increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning, real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most of the texture reconstruction approaches are probably leading to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic framework of texture reconstruction to generate textures from oblique images for photorealistic visualization. Our approach include three major steps as follows: mesh parameterization, texture atlas generation and texture blending. Firstly, mesh parameterization procedure referring to mesh segmentation and mesh unfolding is performed to reduce geometric distortion in the process of mapping 2D texture to 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in texture domain is reconstructed from all visible images with exterior orientation and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a dataset of a city. The resulting mesh model can get textured by created texture without resampling. Experiment results show that our method can effectively mitigate the occurrence of texture fragmentation. It is demonstrated that the proposed framework is effective and useful for automatic texture reconstruction of 3D city model.

  15. Non-contrast enhanced MR venography using 3D fresh blood imaging (FBI). Initial experience

    Energy Technology Data Exchange (ETDEWEB)

    Yokoyama, Kenichi; Nitatori, Toshiaki; Inaoka, Sayuki; Takahara, Taro; Hachiya, Junichi [Kyorin Univ., Mitaka, Tokyo (Japan). School of Medicine

    2001-10-01

    This study examined the efficacy of 3D-fresh blood imaging (FBI) in patients with venous disease from the iliac region to the lower extremities. Fourteen patients with venous disease were examined [8 with deep venous thrombosis (DVT) and 6 with varices] by 3D-FBI and 2D-TOF MRA. All FBI images and 2D-TOF images were evaluated in terms of visualization of the disease and compared with conventional X-ray venography (CV). The total scan time of 3D-FBI ranged from 3 min 24 sec to 4 min 52 sec. 3D-FBI was positive in all 23 anatomical levels in which DVT was diagnosed by CV (100% sensitivity), as was 2D-TOF. The delineation of collateral veins was superior or equal to that of 2D-TOF. 3D-FBI allowed depiction of varices in five of six cases; however, in one case, the evaluation was limited because the separation of arteries from veins was difficult. The 3D-FBI technique, which allows iliac to peripheral MR venography without contrast medium within a short acquisition time, is considered clinically useful. (author)

  16. Utilization of 3-D Imaging Flash Lidar Technology for Autonomous Safe Landing on Planetary Bodies

    Science.gov (United States)

    Amzajerdian, Farzin; Vanek, Michael; Petway, Larry; Pierrotter, Diego; Busch, George; Bulyshev, Alexander

    2010-01-01

    NASA considers Flash Lidar a critical technology for enabling autonomous safe landing of future large robotic and crewed vehicles on the surface of the Moon and Mars. Flash Lidar can generate 3-Dimensional images of the terrain to identify hazardous features such as craters, rocks, and steep slopes during the final stages of descent and landing. The onboard flight computer can use the 3-D map of terrain to guide the vehicle to a safe site. The capabilities of Flash Lidar technology were evaluated through a series of static tests using a calibrated target and through dynamic tests aboard a helicopter and a fixed wing aircraft. The aircraft flight tests were performed over Moon-like terrain in the California and Nevada deserts. This paper briefly describes the Flash Lidar static and aircraft flight test results. These test results are analyzed against the landing application requirements to identify the areas of technology improvement. The ongoing technology advancement activities are then explained and their goals are described.
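
    As a hedged illustration of how an onboard computer might screen a Flash Lidar elevation map for hazardous features such as steep slopes and rocks, the sketch below flags cells by local slope and roughness; the thresholds and the simple windowed roughness estimate are assumptions for illustration, not the flight algorithm evaluated in the paper.

      import numpy as np

      def hazard_map(dem, cell_size_m=1.0, max_slope_deg=10.0, max_roughness_m=0.3):
          """Flag hazardous landing cells in a flash-lidar elevation map (DEM).

          Slope is estimated from finite differences; roughness as the local
          standard deviation of elevation in a 3x3 neighborhood. Thresholds are
          illustrative, not mission requirements.
          """
          gy, gx = np.gradient(dem, cell_size_m)
          slope_deg = np.degrees(np.arctan(np.hypot(gx, gy)))

          # local roughness via a 3x3 sliding-window standard deviation
          pad = np.pad(dem, 1, mode="edge")
          win = np.stack([pad[i:i + dem.shape[0], j:j + dem.shape[1]]
                          for i in range(3) for j in range(3)], axis=0)
          roughness = win.std(axis=0)

          return (slope_deg > max_slope_deg) | (roughness > max_roughness_m)

      dem = np.random.normal(0.0, 0.05, (128, 128))          # mostly flat terrain
      dem[40:60, 40:60] += 2.0                                # a mound standing in for a boulder
      print("hazardous cells:", int(hazard_map(dem).sum()))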

  17. Registration of Real-Time 3-D Ultrasound to Tomographic Images of the Abdominal Aorta.

    Science.gov (United States)

    Brekken, Reidar; Iversen, Daniel Høyer; Tangen, Geir Arne; Dahl, Torbjørn

    2016-08-01

    The purpose of this study was to develop an image-based method for registration of real-time 3-D ultrasound to computed tomography (CT) of the abdominal aorta, targeting future use in ultrasound-guided endovascular intervention. We proposed a method in which a surface model of the aortic wall was segmented from CT, and the approximate initial location of this model relative to the ultrasound volume was manually indicated. The model was iteratively transformed to automatically optimize correspondence to the ultrasound data. Feasibility was studied using data from a silicon phantom and in vivo data from a volunteer with previously acquired CT. Through visual evaluation, the ultrasound and CT data were seen to correspond well after registration. Both aortic lumen and branching arteries were well aligned. The processing was done offline, and the registration took approximately 0.2 s per ultrasound volume. The results encourage further patient studies to investigate accuracy, robustness and clinical value of the approach. PMID:27156015
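
    The sketch below gives a minimal, generic version of the registration idea described above: a CT-derived surface model is rigidly transformed to maximize the ultrasound edge response sampled at its points. The cost function, optimizer and toy data are assumptions for illustration and do not reproduce the authors' implementation.

      import numpy as np
      from scipy.ndimage import gaussian_gradient_magnitude, map_coordinates
      from scipy.optimize import minimize
      from scipy.spatial.transform import Rotation

      def register_surface_to_us(model_pts, us_volume, x0=np.zeros(6)):
          """Rigidly align a CT-derived aortic surface model (N x 3 voxel coords)
          to a 3D ultrasound volume by maximizing the mean gradient magnitude of
          the ultrasound data sampled at the transformed model points.

          Generic intensity-based sketch, not the authors' exact cost function;
          x = (rx, ry, rz, tx, ty, tz) with rotations in radians.
          """
          grad = gaussian_gradient_magnitude(us_volume.astype(float), sigma=2.0)
          centroid = model_pts.mean(axis=0)

          def cost(x):
              R = Rotation.from_euler("xyz", x[:3]).as_matrix()
              pts = (model_pts - centroid) @ R.T + centroid + x[3:]
              samples = map_coordinates(grad, pts.T, order=1, mode="constant")
              return -samples.mean()                     # maximize edge response

          res = minimize(cost, x0, method="Powell")
          return res.x, -res.fun

      # Toy example: a bright tube in a synthetic "ultrasound" volume
      vol = np.zeros((64, 64, 64))
      vol[:, 28:36, 28:36] = 1.0
      theta = np.linspace(0, 2 * np.pi, 200)
      surface = np.stack([np.linspace(5, 58, 200), 32 + 4 * np.cos(theta),
                          30 + 4 * np.sin(theta)], axis=1)   # slightly offset tube surface
      params, score = register_surface_to_us(surface, vol)
      print("estimated rigid parameters:", np.round(params, 2))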

  18. Comparison of S3D Display Technology on Image Quality and Viewing Experiences: Active-Shutter 3D TV vs. Passive-Polarized 3D TV

    Directory of Open Access Journals (Sweden)

    Yu-Chi Tai, PhD

    2014-05-01

    Full Text Available Background: Stereoscopic 3D TV systems convey depth perception to the viewer by delivering to each eye separately filtered images that represent two slightly different perspectives. Currently two primary technologies are used in S3D televisions: Active shutter systems, which use alternate frame sequencing to deliver a full-frame image to one eye at a time at a fast refresh rate, and Passive polarized systems, which superimpose the two half-frame left-eye and right-eye images at the same time through different polarizing filters. Methods: We compare visual performance in discerning details and perceiving depth, as well as the comfort and perceived display quality in viewing an S3D movie. Results: Our results show that, in presenting details of small targets and in showing low-contrast stimuli, the Active system was significantly better than the Passive in 2D mode, but there was no significant difference between them in 3D mode. Subjects performed better on Passive than Active in 3D mode on a task requiring small vergence changes and quick re-acquisition of stereopsis – a skill related to vergence efficiency while viewing S3D displays. When viewing movies in 3D mode, there was no difference in symptoms of discomfort between Active and Passive systems. When the two systems were put side by side with selected 3D-movie scenes, all of the subjective measures of perceived display quality in 3D mode favored the Passive system, and 10 of 14 comparisons were statistically significant. The Passive system was rated significantly better for sense of immersion, motion smoothness, clarity, color, and 5 categories related to the glasses. Conclusion: Overall, participants felt that it was easier to look at the Passive system for a longer period than the Active system, and the Passive display was selected as the preferred display by 75% of the subjects (p = 0.0000211).

  19. Land surface temperature from INSAT-3D imager data: Retrieval and assimilation in NWP model

    Science.gov (United States)

    Singh, Randhir; Singh, Charu; Ojha, Satya P.; Kumar, A. Senthil; Kishtawal, C. M.; Kumar, A. S. Kiran

    2016-06-01

    A new algorithm is developed for retrieving the land surface temperature (LST) from imager radiance observations on board the geostationary operational Indian National Satellite (INSAT-3D). The algorithm is developed using the two thermal infrared channels (TIR1 10.3-11.3 µm and TIR2 11.5-12.5 µm) via a genetic algorithm (GA). The transfer function that relates LST and thermal radiances is developed using a radiative transfer model simulated database. The developed algorithm has been applied to the INSAT-3D observed radiances, and the retrieved LST has been validated against the Moderate Resolution Imaging Spectroradiometer (MODIS) LST product. The developed algorithm demonstrates good accuracy, with no significant bias and with standard deviations of 1.78 K and 1.41 K during daytime and nighttime, respectively. The newly proposed algorithm performs better than the operational algorithm used for LST retrieval from the INSAT-3D satellite. Further, a set of data assimilation experiments is conducted with the Weather Research and Forecasting (WRF) model to assess the impact of INSAT-3D LST on model forecast skill over the Indian region. The assimilation experiments demonstrated a positive impact of the assimilated INSAT-3D LST, particularly on the lower tropospheric temperature and moisture forecasts. The temperature and moisture forecast errors are reduced (by as much as 8-10%) with the assimilation of INSAT-3D LST, when compared to forecasts obtained without the assimilation of INSAT-3D LST. Additional experiments comparing the two LST products, retrieved with the operational and the newly proposed algorithms, indicate that the impact of the INSAT-3D LST retrieved using the newly proposed algorithm is significantly larger than that of the LST retrieved using the operational algorithm.
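
    For readers unfamiliar with split-window LST retrieval, the sketch below fits and applies a simple linear split-window relation to a synthetic training set standing in for the radiative-transfer-simulated database; the functional form and coefficients are illustrative assumptions, whereas the paper's algorithm searches a more general transfer function with a genetic algorithm.

      import numpy as np

      def fit_split_window(t11, t12, lst):
          """Fit a simple split-window relation LST = a0 + a1*T11 + a2*(T11 - T12)
          to a (simulated) training database of brightness temperatures (K)."""
          A = np.column_stack([np.ones_like(t11), t11, t11 - t12])
          coeffs, *_ = np.linalg.lstsq(A, lst, rcond=None)
          return coeffs

      def apply_split_window(coeffs, t11, t12):
          return coeffs[0] + coeffs[1] * t11 + coeffs[2] * (t11 - t12)

      # Synthetic training set standing in for a radiative-transfer simulation
      rng = np.random.default_rng(0)
      lst_true = rng.uniform(280.0, 330.0, 5000)
      dT = rng.uniform(0.5, 4.0, 5000)                  # atmospheric split between channels
      t11 = lst_true - 1.5 * dT + rng.normal(0, 0.3, 5000)
      t12 = t11 - dT
      coeffs = fit_split_window(t11, t12, lst_true)
      rmse = np.sqrt(np.mean((apply_split_window(coeffs, t11, t12) - lst_true) ** 2))
      print("coefficients:", np.round(coeffs, 3), " training RMSE (K):", round(rmse, 2))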

  20. Parametric modelling and segmentation of vertebral bodies in 3D CT and MR spine images

    Science.gov (United States)

    Štern, Darko; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2011-12-01

    Accurate and objective evaluation of vertebral deformations is of significant importance in clinical diagnostics and therapy of pathological conditions affecting the spine. Although modern clinical practice is focused on three-dimensional (3D) computed tomography (CT) and magnetic resonance (MR) imaging techniques, the established methods for evaluation of vertebral deformations are limited to measuring deformations in two-dimensional (2D) x-ray images. In this paper, we propose a method for quantitative description of vertebral body deformations by efficient modelling and segmentation of vertebral bodies in 3D. The deformations are evaluated from the parameters of a 3D superquadric model, which is initialized as an elliptical cylinder and then gradually deformed by introducing transformations that yield a more detailed representation of the vertebral body shape. After modelling the vertebral body shape with 25 clinically meaningful parameters and the vertebral body pose with six rigid body parameters, the 3D model is aligned to the observed vertebral body in the 3D image. The performance of the method was evaluated on 75 vertebrae from CT and 75 vertebrae from T2-weighted MR spine images, extracted from the thoracolumbar part of normal and pathological spines. The results show that the proposed method can be used for 3D segmentation of vertebral bodies in CT and MR images, as the proposed 3D model is able to describe both normal and pathological vertebral body deformations. The method may therefore be used for initialization of whole vertebra segmentation or for quantitative measurement of vertebral body deformations.
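
    The superquadric used as the initial vertebral body model can be written as an inside-outside function; the minimal sketch below evaluates it on a voxel grid. The size and squareness parameters chosen here are arbitrary examples, and the paper's full model adds many more clinically meaningful shape parameters plus a rigid pose.

      import numpy as np

      def superquadric_inside_outside(x, y, z, a=20.0, b=15.0, c=12.0, e1=0.3, e2=0.8):
          """Inside-outside function of a superquadric in its local frame.

          F < 1 inside, F = 1 on the surface, F > 1 outside. With e1 small the
          shape approaches an elliptical cylinder, which serves as the initial
          vertebral body model before further deformations are introduced.
          """
          f = ((np.abs(x / a) ** (2.0 / e2) + np.abs(y / b) ** (2.0 / e2)) ** (e2 / e1)
               + np.abs(z / c) ** (2.0 / e1))
          return f

      # Voxelize the model on a small grid to inspect its extent
      zz, yy, xx = np.mgrid[-20:21, -25:26, -30:31].astype(float)
      inside = superquadric_inside_outside(xx, yy, zz) <= 1.0
      print("model volume (voxels):", int(inside.sum()))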

  1. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Chen, G [University of Wisconsin, Madison, WI (United States); Pan, X [University Chicago, Chicago, IL (United States); Stayman, J [Johns Hopkins University, Baltimore, MD (United States); Samei, E [Duke University Medical Center, Durham, NC (United States)

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical applications.
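
    As a toy illustration of the kind of objective such model-based methods optimize, the sketch below solves a penalized weighted least-squares problem by plain gradient descent on a stand-in linear system; the system matrix, statistical weights and roughness penalty are placeholders for illustration, not any of the speakers' actual methods.

      import numpy as np

      def pwls_reconstruct(A, y, weights, beta=0.1, n_iter=400, step=None):
          """Minimal penalized weighted least-squares (PWLS) sketch:

              x* = argmin_x  (y - A x)^T W (y - A x) + beta * ||D x||^2

          solved by gradient descent, with D a first-difference roughness penalty.
          """
          n = A.shape[1]
          D = (np.eye(n, k=1) - np.eye(n))[:-1]          # simple 1-D finite differences
          W = np.diag(weights)
          H = A.T @ W @ A + beta * (D.T @ D)             # Hessian of the quadratic objective
          if step is None:
              step = 1.0 / np.linalg.eigvalsh(H).max()   # step size that guarantees convergence
          x = np.zeros(n)
          for _ in range(n_iter):
              grad = A.T @ (W @ (A @ x - y)) + beta * (D.T @ (D @ x))
              x -= step * grad
          return x

      rng = np.random.default_rng(1)
      x_true = np.repeat([0.0, 1.0, 0.5, 0.0], 16)       # piecewise-constant "object"
      A = rng.normal(size=(96, x_true.size))             # stand-in system matrix
      y = A @ x_true + rng.normal(0, 0.05, 96)           # noisy measurements
      x_hat = pwls_reconstruct(A, y, weights=np.ones_like(y), beta=0.5)
      print("reconstruction RMSE:", round(float(np.sqrt(np.mean((x_hat - x_true) ** 2))), 4))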

  2. Optimal Image Stitching for Concrete Bridge Bottom Surfaces Aided by 3D Structure Lines

    Science.gov (United States)

    Liu, Yahui; Yao, Jian; Liu, Kang; Lu, Xiaohu; Xia, Menghan

    2016-06-01

    Crack detection for bridge bottom surfaces via remote sensing techniques is undergoing a revolution in the last few years. For such applications, a large amount of images, acquired with high-resolution industrial cameras close to the bottom surfaces with some mobile platform, are required to be stitched into a wide-view single composite image. The conventional idea of stitching a panorama with the affine model or the homographic model always suffers a series of serious problems due to poor texture and out-of-focus blurring introduced by depth of field. In this paper, we present a novel method to seamlessly stitch these images aided by 3D structure lines of bridge bottom surfaces, which are extracted from 3D camera data. First, we propose to initially align each image in geometry based on its rough position and orientation acquired with both a laser range finder (LRF) and a high-precision incremental encoder, and these images are divided into several groups with the rough position and orientation data. Secondly, the 3D structure lines of bridge bottom surfaces are extracted from the 3D cloud points acquired with 3D cameras, which impose additional strong constraints on geometrical alignment of structure lines in adjacent images to perform a position and orientation optimization in each group to increase the local consistency. Thirdly, a homographic refinement between groups is applied to increase the global consistency. Finally, we apply a multi-band blending algorithm to generate a large-view single composite image as seamlessly as possible, which greatly eliminates both the luminance differences and the color deviations between images and further conceals image parallax. Experimental results on a set of representative images acquired from real bridge bottom surfaces illustrate the superiority of our proposed approaches.

  3. Common crus aplasia: diagnosis by 3D volume rendering imaging using 3DFT-CISS sequence

    Energy Technology Data Exchange (ETDEWEB)

    Kim, H.J. E-mail: hakjink@pusan.ac.kr; Song, J.W.; Chon, K.-M.; Goh, E.-K

    2004-09-01

    AIM: The purpose of this study was to evaluate the findings of three-dimensional (3D) volume rendering (VR) imaging in common crus aplasia (CCA) of the inner ear. MATERIALS AND METHODS: Using 3D VR imaging of temporal bone constructive interference in steady state (CISS) magnetic resonance (MR) images, we retrospectively reviewed seven inner ears of six children who were candidates for cochlear implants and who had been diagnosed with CCA. As controls, we used the same method to examine 402 inner ears of 201 patients who had no clinical symptoms or signs of sensorineural hearing loss. Temporal bone MR imaging (MRI) was performed with a 1.5 T MR machine using a CISS sequence, and VR of the inner ear was performed on a work station. Morphological image analysis was performed on rotation views of 3D VR images. RESULTS: In all seven cases, CCA was diagnosed by the absence of the common crus. The remaining superior semicircular canal (SCC) was normal in five and hypoplastic in two inner ears, while the posterior SCC was normal in all seven. One patient showed bilateral symmetrical CCA. Complicated combined anomalies were seen in the cochlea, vestibule and lateral SCC. CONCLUSION: 3D VR imaging findings with MR CISS sequence can directly diagnose CCA. This technique may be useful in delineating detailed anomalies of SCCs.

  4. Audiovisual biofeedback improves image quality and reduces scan time for respiratory-gated 3D MRI

    Science.gov (United States)

    Lee, D.; Greer, P. B.; Arm, J.; Keall, P.; Kim, T.

    2014-03-01

    The purpose of this study was to test the hypothesis that audiovisual (AV) biofeedback can improve image quality and reduce scan time for respiratory-gated 3D thoracic MRI. For five healthy human subjects, respiratory motion guidance in MR scans was provided using an AV biofeedback system utilizing real-time respiratory motion signals. To investigate the improvement of respiratory-gated 3D MR images between free breathing (FB) and AV biofeedback (AV), each subject underwent two imaging sessions. Respiratory-related motion artifacts and imaging time were qualitatively evaluated in addition to the reproducibility of external (abdominal) motion. In the results, 3D MR images acquired with AV biofeedback showed more anatomic information, such as a clearer distinction of the diaphragm and lung lobes and sharper organ boundaries. The scan time was reduced from 401±215 s in FB to 334±94 s in AV (p-value 0.36). The root mean square variation of the displacement and period of the abdominal motion was reduced from 0.4±0.22 cm and 2.8±2.5 s in FB to 0.1±0.15 cm and 0.9±1.3 s in AV, respectively. This study demonstrated that audiovisual biofeedback improves image quality and reduces scan time for respiratory-gated 3D MRI. These results suggest that AV biofeedback has the potential to be a useful motion management tool in medical imaging and radiation therapy procedures.

  5. Preliminary clinical application of contrast-enhanced MR angiography using 3D time-resolved imaging of contrast kinetics(3D-TRICKS)

    Institute of Scientific and Technical Information of China (English)

    YANG Chun-shan; LIU Shi-yuan; XIAO Xiang-sheng; FENG Yun; LI Hui-min; XIAO Shan; GONG Wan-qing

    2007-01-01

    Objective: To introduce an improved contrast-enhanced MR angiographic method, named 3D time-resolved imaging of contrast kinetics (3D-TRICKS). Methods: TRICKS is a high temporal resolution (2-6 s) MR angiographic technique using a short TR (4 ms) and TE (1.5 ms) with partial echo sampling, in which the central part of k-space is updated more frequently than the peripheral part. Pre-contrast 3D mask images are acquired first; then, after bolus injection of Gd-DTPA, 15-20 sequential 3D image sets are acquired. The reconstructed 3D images, obtained by subtracting the mask images from the contrast-enhanced images, are conceptually similar to a catheter-based intra-arterial digital subtraction angiography (DSA) series. Thirty patients underwent contrast-enhanced MR angiography using 3D-TRICKS. Results: In total, 12 vertebral arteries were well displayed on TRICKS, of which 7 were normal, 1 demonstrated bilateral vertebral artery stenosis, 4 had unilateral vertebral artery stenosis, and 1 was accompanied by ipsilateral carotid artery bifurcation stenosis. Four cases of bilateral renal arteries were normal; 1 transplanted kidney artery appeared normal and 1 transplanted kidney artery showed stenosis. Two cerebral arteries were normal, 1 had sagittal sinus thrombosis and 1 displayed an intracranial arteriovenous malformation. Three pulmonary arteries were normal, 1 showed pulmonary artery thrombosis and 1 revealed the abnormal feeding artery and draining vein of a pulmonary sequestration. One left lower limb fibrolipoma showed its feeding artery. One case displayed stenosis of a radial-ulnar artery artificial fistula. One revealed a left antebrachial hemangioma. Conclusion: TRICKS can clearly delineate most of the body's vascular systems and reveal most vascular abnormalities. It is convenient and has a high success rate, which makes it the first choice for displaying most vascular abnormalities.

  6. A web-based 3D medical image collaborative processing system with videoconference

    Science.gov (United States)

    Luo, Sanbi; Han, Jun; Huang, Yonggang

    2013-07-01

    Three-dimensional medical images play an irreplaceable role in medical treatment, teaching, and research. However, collaborative processing and visualization of 3D medical images over the Internet is still one of the biggest challenges in supporting these activities. Consequently, we present a new approach for web-based synchronized collaborative processing and visualization of 3D medical images. In addition, a web-based videoconference function is provided to enhance the performance of the whole system. All functions of the system are conveniently available in common web browsers, without any extra client installation. Finally, this paper evaluates the prototype system using 3D medical data sets, demonstrating the good performance of our system.

  7. 3-D imaging of particle tracks in solid state nuclear track detectors

    Directory of Open Access Journals (Sweden)

    D. Wertheim

    2010-05-01

    Full Text Available It has been suggested that 3 to 5% of total lung cancer deaths in the UK may be associated with elevated radon concentration. Radon gas levels can be assessed using CR-39 plastic detectors which are often assessed by 2-D image analysis of surface images. 3-D analysis has the potential to provide information relating to the angle at which alpha particles impinge on the detector. In this study we used a "LEXT" OLS3100 confocal laser scanning microscope (Olympus Corporation, Tokyo, Japan) to image tracks on five CR-39 detectors. We were able to identify several patterns of single and coalescing tracks from 3-D visualisation. Thus this method may provide a means of detailed 3-D analysis of Solid State Nuclear Track Detectors.

  8. 3-D imaging of particle tracks in solid state nuclear track detectors

    Science.gov (United States)

    Wertheim, D.; Gillmore, G.; Brown, L.; Petford, N.

    2010-05-01

    It has been suggested that 3 to 5% of total lung cancer deaths in the UK may be associated with elevated radon concentration. Radon gas levels can be assessed using CR-39 plastic detectors which are often assessed by 2-D image analysis of surface images. 3-D analysis has the potential to provide information relating to the angle at which alpha particles impinge on the detector. In this study we used a "LEXT" OLS3100 confocal laser scanning microscope (Olympus Corporation, Tokyo, Japan) to image tracks on five CR-39 detectors. We were able to identify several patterns of single and coalescing tracks from 3-D visualisation. Thus this method may provide a means of detailed 3-D analysis of Solid State Nuclear Track Detectors.

  9. Improvement of wells turbine performance by means of 3D guide vanes; Sanjigen annai hane ni yoru wells turbine seino no kaizen

    Energy Technology Data Exchange (ETDEWEB)

    Takao, M.; Kim, T.H. [Saga University, Saga (Japan); Setoguchi, T. [Saga University, Saga (Japan). Faculty of Science and Engineering; Inoue, M. [Kyushu University, Fukuoka (Japan). Faculty of Engineering

    2000-02-25

    The performance of a Wells turbine can be improved by installing guide vanes before and behind the rotor; for further improvement, 3D guide vanes are proposed in this paper. The performance of the Wells turbine with 2D and 3D guide vanes has been investigated experimentally by model testing under steady flow conditions. The running and starting characteristics in irregular ocean waves have then been obtained by computer simulation. As a result, it is found that both the running and starting characteristics of the Wells turbine with 3D guide vanes are superior to those of the turbine with 2D guide vanes. (author)

  10. 3D change detection at street level using mobile laser scanning point clouds and terrestrial images

    Science.gov (United States)

    Qin, Rongjun; Gruen, Armin

    2014-04-01

    Automatic change detection and geo-database updating in the urban environment are difficult tasks. There has been much research on detecting changes with satellite and aerial images, but studies have rarely been performed at the street level, which is complex in its 3D geometry. Contemporary geo-databases include 3D street-level objects, which demand frequent data updating. Terrestrial images provide rich texture information for change detection, but change detection with terrestrial images from different epochs sometimes faces problems with illumination changes, perspective distortions and unreliable 3D geometry caused by the lack of performance of automatic image matchers, while mobile laser scanning (MLS) data acquired from different epochs provides accurate 3D geometry for change detection, but is very expensive for periodical acquisition. This paper proposes a new method for change detection at street level by using a combination of MLS point clouds and terrestrial images: the accurate but expensive MLS data acquired from an early epoch serves as the reference, and terrestrial images or photogrammetric images captured from an image-based mobile mapping system (MMS) at a later epoch are used to detect the geometrical changes between different epochs. The method will automatically mark the possible changes in each view, which provides a cost-efficient method for frequent data updating. The methodology is divided into several steps. In the first step, the point clouds are recorded by the MLS system and processed, with data cleaned and classified by semi-automatic means. In the second step, terrestrial images or mobile mapping images at a later epoch are taken and registered to the point cloud, and then point clouds are projected on each image by a weighted window based z-buffering method for view dependent 2D triangulation. In the next step, stereo pairs of the terrestrial images are rectified and re-projected between each other to check the geometrical consistency between the two epochs.

  11. Hybrid wide-field and scanning microscopy for high-speed 3D imaging.

    Science.gov (United States)

    Duan, Yubo; Chen, Nanguang

    2015-11-15

    Wide-field optical microscopy is efficient and robust in biological imaging, but it lacks depth sectioning. In contrast, scanning microscopic techniques, such as confocal microscopy and multiphoton microscopy, have been successfully used for three-dimensional (3D) imaging with optical sectioning capability. However, these microscopic techniques are not very suitable for dynamic real-time imaging because they usually take a long time for temporal and spatial scanning. Here, a hybrid imaging technique combining wide-field microscopy and scanning microscopy is proposed to accelerate the image acquisition process while maintaining the 3D optical sectioning capability. The performance was demonstrated by proof-of-concept imaging experiments with fluorescent beads and zebrafish liver.

  12. MUTUAL INFORMATION BASED 3D NON-RIGID REGISTRATION OF CT/MR ABDOMEN IMAGES

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A mutual information (MI) based 3D non-rigid registration approach is proposed for the registration of deformable CT/MR abdomen images. The Parzen Window Density Estimation (PWDE) method is adopted to calculate the mutual information between the two modalities of CT and MRI abdomen images. By maximizing the MI between the CT and MR volume images, the overlap between them is maximized, which means that the CT and MR body images are best matched to each other. Visible Human Project (VHP) male abdomen CT and MRI data are used as experimental data sets. The experimental results indicate that non-rigid 3D registration of CT/MR abdominal images can be achieved effectively and automatically with this approach, without any prior processing such as segmentation and feature extraction, but with the main drawback of a very long computation time. Key words: medical image registration; multi-modality; mutual information; non-rigid; Parzen window density estimation
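
    A minimal sketch of the similarity metric is given below: mutual information computed from a joint histogram, with Gaussian smoothing standing in for Parzen-window density estimation. The bin count and smoothing width are arbitrary choices, not the values used in the paper.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def mutual_information(vol_a, vol_b, bins=64, parzen_sigma=1.0):
          """Mutual information between two overlapping image volumes, with the
          joint histogram smoothed by a Gaussian kernel as a simple stand-in for
          Parzen-window density estimation."""
          hist, _, _ = np.histogram2d(vol_a.ravel(), vol_b.ravel(), bins=bins)
          hist = gaussian_filter(hist, parzen_sigma)
          p_ab = hist / hist.sum()
          p_a = p_ab.sum(axis=1, keepdims=True)
          p_b = p_ab.sum(axis=0, keepdims=True)
          nz = p_ab > 0
          return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))

      # MI of a volume with a contrast-remapped copy of itself is high,
      # and drops when the two volumes are misaligned.
      rng = np.random.default_rng(2)
      ct = gaussian_filter(rng.normal(size=(48, 48, 48)), 3.0)
      mr = np.tanh(2.0 * ct)                              # nonlinear intensity remapping
      print("aligned MI   :", round(mutual_information(ct, mr), 3))
      print("misaligned MI:", round(mutual_information(ct, np.roll(mr, 10, axis=0)), 3))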

  13. Non-invasive single-shot 3D imaging through a scattering layer using speckle interferometry

    CERN Document Server

    Somkuwar, Atul S; R., Vinu; Park, Yongkeun; Singh, Rakesh Kumar

    2015-01-01

    Optical imaging through complex scattering media is one of the major technical challenges, with important applications in many research fields ranging from biomedical imaging and astronomical telescopy to spatially multiplexed optical communications. Although various approaches for imaging through a turbid layer have recently been proposed, they have been limited to two-dimensional imaging. Here we propose and experimentally demonstrate an approach for three-dimensional single-shot imaging of objects hidden behind an opaque scattering layer. We demonstrate that, under suitable conditions, it is possible to perform 3D imaging and reconstruct the complex amplitude of objects situated at different depths.

  14. Evaluating 3D registration of CT-scan images using crest lines

    Science.gov (United States)

    Ayache, Nicholas; Gueziec, Andre P.; Thirion, Jean-Philippe; Gourdon, A.; Knoplioch, Jerome

    1993-06-01

    We consider the issue of matching 3D objects extracted from medical images. We show that crest lines computed on the object surfaces correspond to meaningful anatomical features, and that they are stable with respect to rigid transformations. We present the current chain of algorithmic modules which automatically extract the major crest lines in 3D CT-scan images, and then use differential invariants on these lines to register the 3D images together with high precision. The extraction of the crest lines is done by computing up to third-order derivatives of the image intensity function with appropriate 3D filtering of the volumetric images, and by the 'marching lines' algorithm. The recovered lines are then approximated by spline curves in order to compute a number of differential invariants at each point. Matching is finally performed by a new geometric hashing method. The whole chain is now completely automatic, and provides extremely robust and accurate results, even in the presence of severe occlusions. In this paper, we briefly describe the whole chain of processes and evaluate the accuracy of the approach on a pair of CT-scan images of a skull containing external markers.

  15. 3D nonrigid medical image registration using a new information theoretic measure

    International Nuclear Information System (INIS)

    This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employed the Jensen–Arimoto divergence measure as a similarity metric to measure the statistical dependence between medical images. Free-form deformations were adopted as the transformation model and the Parzen window estimation was applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. The goal of registration is to optimize an objective function consisting of a dissimilarity term and a penalty term, which would be minimal when two deformed images are perfectly aligned using the limited memory BFGS optimization method, and thus to get the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and performed on the open source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measured. For the real data tests, four data sets of 4D thoracic CT from four patients were selected to assess the registration performance of the method, including ten 3D CT images for each 4D CT data covering an entire respiration cycle. These results were compared with the normalized cross correlation and the mutual information methods and show a slight but true improvement in registration accuracy. (paper)

  16. 3D nonrigid medical image registration using a new information theoretic measure

    Science.gov (United States)

    Li, Bicao; Yang, Guanyu; Coatrieux, Jean Louis; Li, Baosheng; Shu, Huazhong

    2015-11-01

    This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employed the Jensen-Arimoto divergence measure as a similarity metric to measure the statistical dependence between medical images. Free-form deformations were adopted as the transformation model and the Parzen window estimation was applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. The goal of registration is to optimize an objective function consisting of a dissimilarity term and a penalty term, which would be minimal when two deformed images are perfectly aligned using the limited memory BFGS optimization method, and thus to get the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and performed on the open source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measured. For the real data tests, four data sets of 4D thoracic CT from four patients were selected to assess the registration performance of the method, including ten 3D CT images for each 4D CT data covering an entire respiration cycle. These results were compared with the normalized cross correlation and the mutual information methods and show a slight but true improvement in registration accuracy.
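
    The sketch below shows one plausible form of the similarity measure: an Arimoto entropy of order alpha and a Jensen-type divergence built from it by analogy with the Jensen-Shannon divergence. The exact definition and normalization used by the authors may differ, so treat this as an assumption-laden illustration rather than the paper's metric.

      import numpy as np

      def arimoto_entropy(p, alpha=1.5):
          """One common form of the Arimoto entropy of order alpha (alpha > 0,
          alpha != 1) of a discrete distribution p; it tends to the Shannon
          entropy as alpha -> 1. The paper's normalization may differ."""
          p = p / p.sum()
          return alpha / (1.0 - alpha) * (np.sum(p ** alpha) ** (1.0 / alpha) - 1.0)

      def jensen_arimoto_divergence(p, q, alpha=1.5):
          """Jensen-type divergence built from the Arimoto entropy: entropy of
          the mixture minus the mean of the individual entropies (by analogy
          with the Jensen-Shannon divergence)."""
          p = p / p.sum()
          q = q / q.sum()
          m = 0.5 * (p + q)
          return arimoto_entropy(m, alpha) - 0.5 * (arimoto_entropy(p, alpha)
                                                    + arimoto_entropy(q, alpha))

      # The divergence is zero for identical distributions and grows with mismatch.
      p = np.array([0.1, 0.4, 0.3, 0.2])
      q = np.array([0.25, 0.25, 0.25, 0.25])
      print(jensen_arimoto_divergence(p, p, alpha=1.5),
            jensen_arimoto_divergence(p, q, alpha=1.5))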

  17. Automated Image-Based Procedures for Accurate Artifacts 3D Modeling and Orthoimage Generation

    Directory of Open Access Journals (Sweden)

    Marc Pierrot-Deseilligny

    2011-12-01

    Full Text Available The accurate 3D documentation of architectures and heritages is becoming increasingly common and required in different application contexts. The potentialities of the image-based approach are nowadays very well-known but there is a lack of reliable, precise and flexible solutions, possibly open-source, which could be used for metric and accurate documentation or digital conservation and not only for simple visualization or web-based applications. The article presents a set of photogrammetric tools developed in order to derive accurate 3D point clouds and orthoimages for the digitization of archaeological and architectural objects. The aim is also to distribute free solutions (software, methodologies, guidelines, best practices, etc.) based on 3D surveying and modeling experiences, useful in different application contexts (architecture, excavations, museum collections, heritage documentation, etc.) and according to several representation needs (2D technical documentation, 3D reconstruction, web visualization, etc.).

  18. Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images

    Directory of Open Access Journals (Sweden)

    Tsap Leonid V

    2006-01-01

    Full Text Available The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, behavior of the cell, and its state. Analysis of chromosome structure is significant in the detection of diseases, identification of chromosomal abnormalities, study of DNA structural conformation, in-depth study of chromosomal surface morphology, observation of in vivo behavior of the chromosomes over time, and in monitoring environmental gene mutations. The methodology incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.
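
    As a generic stand-in for the slices-to-surface part of this pipeline (thresholding, region extraction, surface generation), the sketch below applies a global Otsu threshold to a slice stack and extracts a triangle mesh with marching cubes; it deliberately omits the polyline splitting, active contours and inter-slice correspondence steps described in the paper.

      import numpy as np
      from skimage.filters import threshold_otsu
      from skimage.measure import marching_cubes

      def chromosome_surface(slices, spacing=(1.0, 1.0, 1.0)):
          """Build a triangulated 3D surface from a stack of 2D tomographic
          reconstruction slices: global Otsu threshold followed by marching
          cubes. A simplified stand-in for the paper's segmentation pipeline."""
          volume = np.asarray(slices, dtype=float)        # shape (n_slices, rows, cols)
          mask = volume > threshold_otsu(volume)
          verts, faces, normals, values = marching_cubes(mask.astype(float),
                                                         level=0.5, spacing=spacing)
          return verts, faces

      # Synthetic stack containing one bright blob
      zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
      stack = np.exp(-(((xx - 20) ** 2 + (yy - 18) ** 2 + (zz - 22) ** 2) / 60.0))
      verts, faces = chromosome_surface(stack)
      print(f"{len(verts)} vertices, {len(faces)} triangles")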

  19. Computer assisted determination of acetabular cup orientation using 2D-3D image registration

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Guoyan; Zhang, Xuan [University of Bern, Institute for Surgical Technology and Biomechanics, Bern (Switzerland)

    2010-09-15

    2D-3D image-based registration methods have been developed to measure acetabular cup orientation after total hip arthroplasty (THA). These methods require registration of both the prosthesis and the CT images to 2D radiographs and compute implant position with respect to a reference. The application of these methods is limited in clinical practice due to two limitations: (1) the requirement of a computer-aided design (CAD) model of the prosthesis, which may be unavailable due to the proprietary concerns of the manufacturer, and (2) the requirement of either multiple radiographs or radiograph-specific calibration, usually unavailable for retrospective studies. In this paper, we propose a new method to address these limitations. A new formulation for determination of post-operative cup orientation, which couples a radiographic measurement with 2D-3D image matching, was developed. In our formulation, the radiographic measurement can be obtained with known methods so that the challenge lies in the 2D-3D image matching. To solve this problem, a hybrid 2D-3D registration scheme combining a landmark-to-ray 2D-3D alignment with a robust intensity-based 2D-3D registration was used. The hybrid 2D-3D registration scheme allows computing both the post-operative cup orientation with respect to an anatomical reference and the pelvic tilt and rotation with respect to the X-ray imaging table/plate. The method was validated using 2D adult cadaver hips. Using the hybrid 2D-3D registration scheme, our method showed a mean accuracy of 1.0° ± 0.7° (range from 0.1° to 2.0°) for inclination and 1.7° ± 1.2° (range from 0.0° to 3.9°) for anteversion, taking the measurements from post-operative CT images as ground truths. Our new solution formulation and the hybrid 2D-3D registration scheme facilitate estimation of post-operative cup orientation and measurement of pelvic tilt and rotation. (orig.)

  20. Acute Bochdalek hernia in an adult:A case report of a 3D image

    Institute of Scientific and Technical Information of China (English)

    Rejeb Imen; Chakroun-Walha Olfa; Ksibi Hichem; Nasri Abdennour; Chtara Kamilia; Chaari Adel; Rekik Noureddine

    2016-01-01

    A 61-year-old male was found to have a bilateral Bochdalek hernia on routine CT during admission for acute respiratory failure. The chest X-ray showed a left paracardiac mass having a diameter of 6 cm. This mass was initially considered a mediastinal tumor. However, CT showed a large bilateral defect of the posteromedial portion of the diaphragm with herniated mesenteric fat. 3D imaging was also useful for the stereographic perception of the Bochdalek hernia. Although Bochdalek hernia is not rare, to our knowledge, this is the first case of a Bochdalek hernia containing the transverse colon observed by spiral CT 3D imaging.

  1. Note: An improved 3D imaging system for electron-electron coincidence measurements

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Yun Fei; Lee, Suk Kyoung; Adhikari, Pradip; Herath, Thushani; Lingenfelter, Steven; Winney, Alexander H.; Li, Wen, E-mail: wli@chem.wayne.edu [Department of Chemistry, Wayne State University, Detroit, Michigan 48202 (United States)

    2015-09-15

    We demonstrate an improved imaging system that can achieve highly efficient 3D detection of two electrons in coincidence. The imaging system is based on a fast frame complementary metal-oxide semiconductor camera and a high-speed waveform digitizer. We have shown previously that this detection system is capable of 3D detection of ions and electrons with good temporal and spatial resolution. Here, we show that with a new timing analysis algorithm, this system can achieve an unprecedented dead-time (<0.7 ns) and dead-space (<1 mm) when detecting two electrons. A true zero dead-time detection is also demonstrated.

  2. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    OpenAIRE

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable fast turn-around times, which are often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly...

  3. 3D high spectral and spatial resolution imaging of ex vivo mouse brain

    Energy Technology Data Exchange (ETDEWEB)

    Foxley, Sean, E-mail: sean.foxley@ndcn.ox.ac.uk; Karczmar, Gregory S. [Department of Radiology, University of Chicago, Chicago, Illinois 60637 (United States); Domowicz, Miriam [Department of Pediatrics, University of Chicago, Chicago, Illinois 60637 (United States); Schwartz, Nancy [Department of Pediatrics, Department of Biochemistry and Molecular Biology, University of Chicago, Chicago, Illinois 60637 (United States)

    2015-03-15

    Purpose: Widely used MRI methods show brain morphology both in vivo and ex vivo at very high resolution. Many of these methods (e.g., T2*-weighted imaging, phase-sensitive imaging, or susceptibility-weighted imaging) are sensitive to local magnetic susceptibility gradients produced by subtle variations in tissue composition. However, the spectral resolution of commonly used methods is limited to maintain reasonable run-time combined with very high spatial resolution. Here, the authors report on data acquisition at increased spectral resolution, with 3-dimensional high spectral and spatial resolution MRI, in order to analyze subtle variations in water proton resonance frequency and lineshape that reflect local anatomy. The resulting information complements previous studies based on T2* and resonance frequency. Methods: The proton free induction decay was sampled at high resolution and Fourier transformed to produce a high-resolution water spectrum for each image voxel in a 3D volume. Data were acquired using a multigradient echo pulse sequence (i.e., echo-planar spectroscopic imaging) with a spatial resolution of 50 × 50 × 70 μm³ and spectral resolution of 3.5 Hz. Data were analyzed in the spectral domain, and images were produced from the various Fourier components of the water resonance. This allowed precise measurement of local variations in water resonance frequency and lineshape, at the expense of significantly increased run time (16–24 h). Results: High contrast T2*-weighted images were produced from the peak of the water resonance (peak height image), revealing a high degree of anatomical detail, specifically in the hippocampus and cerebellum. In images produced from Fourier components of the water resonance at −7.0 Hz from the peak, the contrast between deep white matter tracts and the surrounding tissue is the reverse of the contrast in water peak height images. This indicates the presence of a shoulder in the water resonance lineshape.
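
    The core spectral processing can be pictured with the short sketch below: a per-voxel FFT of the free induction decay, a peak-height image from the water resonance maximum, and an image from the Fourier component at a fixed offset (-7 Hz) from each voxel's peak. The dwell time and array sizes are arbitrary assumptions, not the acquisition parameters of the study.

      import numpy as np

      def spectral_images(fid, dwell_time_s=0.004, offset_hz=-7.0):
          """From a per-voxel free induction decay (echo train), compute the water
          spectrum by FFT and form two images: the peak height of the water
          resonance and the magnitude of the Fourier component at a fixed offset
          from each voxel's peak. Array shape: (nx, ny, nz, nt)."""
          nt = fid.shape[-1]
          spec = np.abs(np.fft.fftshift(np.fft.fft(fid, axis=-1), axes=-1))
          freqs = np.fft.fftshift(np.fft.fftfreq(nt, d=dwell_time_s))   # Hz
          peak_height = spec.max(axis=-1)
          peak_freq = freqs[spec.argmax(axis=-1)]
          # index of the requested offset relative to each voxel's own peak
          target_idx = np.abs(freqs[None, None, None, :] -
                              (peak_freq + offset_hz)[..., None]).argmin(axis=-1)
          off_resonance = np.take_along_axis(spec, target_idx[..., None], axis=-1)[..., 0]
          return peak_height, off_resonance

      # Tiny synthetic example: decaying complex exponentials with varying frequency
      nx, ny, nz, nt = 8, 8, 4, 256
      t = np.arange(nt) * 0.004
      f0 = np.random.uniform(-10, 10, (nx, ny, nz, 1))
      fid = np.exp(2j * np.pi * f0 * t - t / 0.05)
      peak, off = spectral_images(fid)
      print(peak.shape, off.shape)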

  4. Dynamic diffraction-limited light-coupling of 3D-maneuvered wave-guided optical waveguides

    DEFF Research Database (Denmark)

    Villangca, Mark Jayson; Bañas, Andrew Rafael; Palima, Darwin;

    2014-01-01

    We have previously proposed and demonstrated the targeted-light delivery capability of wave-guided optical waveguides (WOWs). As the WOWs are maneuvered in 3D space, it is important to maintain efficient light coupling through the waveguides within their operating volume. We propose the use of...

  5. Web tools for large-scale 3D biological images and atlases

    Directory of Open Access Journals (Sweden)

    Husz Zsolt L

    2012-06-01

    Full Text Available Abstract Background Large-scale volumetric biomedical image data of three or more dimensions are a significant challenge for distributed browsing and visualisation. Many images now exceed 10 GB, which for most users is too large to handle in terms of computer RAM and network bandwidth. This is aggravated when users need to access tens or hundreds of such images from an archive. Here we solve the problem for 2D section views through archive data by delivering compressed tiled images, enabling users to browse through very large volume data in a standard web browser. The system provides an interactive visualisation for grey-level and colour 3D images including multiple image layers and spatial-data overlay. Results The standard Internet Imaging Protocol (IIP) has been extended to enable arbitrary 2D sectioning of 3D data as well as multi-layered images and indexed overlays. The extended protocol is termed IIP3D and we have implemented a matching server to deliver the protocol and a series of Ajax/Javascript client codes that will run in an Internet browser. We have tested the server software on a low-cost linux-based server for image volumes up to 135 GB and 64 simultaneous users. The section views are delivered with response times independent of scale and orientation. The exemplar client provided multi-layer image views with user-controlled colour-filtering and overlays. Conclusions Interactive browsing of arbitrary sections through large biomedical-image volumes is made possible by use of an extended internet protocol and efficient server-based image tiling. The tools open the possibility of enabling fast access to large image archives without the requirement of whole image download and client computers with very large memory configurations. The system was demonstrated using a range of medical and biomedical image data extending up to 135 GB for a single image volume.

  6. Research and Teaching: Methods for Creating and Evaluating 3D Tactile Images to Teach STEM Courses to the Visually Impaired

    Science.gov (United States)

    Hasper, Eric; Windhorst, Rogier; Hedgpeth, Terri; Van Tuyl, Leanne; Gonzales, Ashleigh; Martinez, Britta; Yu, Hongyu; Farkas, Zolton; Baluch, Debra P.

    2015-01-01

    Project 3D IMAGINE or 3D Image Arrays to Graphically Implement New Education is a pilot study that researches the effectiveness of incorporating 3D tactile images, which are critical for learning science, technology, engineering, and mathematics, into entry-level lab courses. The focus of this project is to increase the participation and…

  7. 3-D reconstruction of neurons from multichannel confocal laser scanning image series.

    Science.gov (United States)

    Wouterlood, Floris G

    2014-01-01

    A confocal laser scanning microscope (CLSM) collects information from a thin, focal plane and ignores out-of-focus information. Scanning of a specimen, with stepwise axial (Z-) movement of the stage in between each scan, produces Z-series of confocal images of a tissue volume, which then can be used to 3-D reconstruct structures of interest. The operator first configures separate channels (e.g., laser, filters, and detector settings) for each applied fluorochrome and then acquires Z-series of confocal images: one series per channel. Channel signal separation is extremely important. Measures to avoid bleaching are vital. Post-acquisition deconvolution of the image series is often performed to increase resolution before 3-D reconstruction takes place. In the 3-D reconstruction programs described in this unit, reconstructions can be inspected in real time from any viewing angle. By altering viewing angles and by switching channels off and on, the spatial relationships of 3-D-reconstructed structures with respect to structures visualized in other channels can be studied. Since each brand of CLSM, computer program, and 3-D reconstruction package has its own proprietary set of procedures, a general approach is provided in this protocol wherever possible.

  8. Volume estimation of tonsil phantoms using an oral camera with 3D imaging.

    Science.gov (United States)

    Das, Anshuman J; Valdez, Tulio A; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C; Raskar, Ramesh

    2016-04-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance (MRI) imaging are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and tongue base flexible laryngoscopes are required which only provide a two dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy where current physical examination has limitations. In this report, we designed a hand held portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy where the tonsils get enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate airway obstruction percentage and volume of the tonsils in 3D printed realistic models. Our results correlate well with Brodsky's classification of tonsillar hypertrophy as well as intraoperative volume estimations.
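
    Once a closed triangle mesh of a tonsil has been reconstructed, its volume can be obtained from signed tetrahedron volumes via the divergence theorem; the sketch below (with a unit cube as a sanity check) illustrates that calculation only and says nothing about the authors' camera or 3D reconstruction pipeline.

      import numpy as np

      def mesh_volume(vertices, faces):
          """Volume enclosed by a closed, consistently oriented triangular mesh,
          computed as the sum of signed tetrahedron volumes against the origin
          (divergence theorem). vertices: (N, 3) array; faces: (M, 3) indices."""
          v0 = vertices[faces[:, 0]]
          v1 = vertices[faces[:, 1]]
          v2 = vertices[faces[:, 2]]
          signed = np.einsum("ij,ij->i", v0, np.cross(v1, v2)) / 6.0
          return abs(signed.sum())

      # Unit cube as a quick sanity check (12 triangles, volume should be 1.0)
      verts = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
      faces = np.array([[0, 1, 3], [0, 3, 2], [4, 6, 7], [4, 7, 5],
                        [0, 4, 5], [0, 5, 1], [2, 3, 7], [2, 7, 6],
                        [0, 2, 6], [0, 6, 4], [1, 5, 7], [1, 7, 3]])
      print(mesh_volume(verts, faces))   # -> 1.0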

  9. Volume estimation of tonsil phantoms using an oral camera with 3D imaging.

    Science.gov (United States)

    Das, Anshuman J; Valdez, Tulio A; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C; Raskar, Ramesh

    2016-04-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance (MRI) imaging are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and tongue base flexible laryngoscopes are required which only provide a two dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy where current physical examination has limitations. In this report, we designed a hand held portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy where the tonsils get enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate airway obstruction percentage and volume of the tonsils in 3D printed realistic models. Our results correlate well with Brodsky's classification of tonsillar hypertrophy as well as intraoperative volume estimations. PMID:27446667

  10. 3-D reconstruction of neurons from multichannel confocal laser scanning image series.

    Science.gov (United States)

    Wouterlood, Floris G

    2014-01-01

    A confocal laser scanning microscope (CLSM) collects information from a thin, focal plane and ignores out-of-focus information. Scanning of a specimen, with stepwise axial (Z-) movement of the stage in between each scan, produces Z-series of confocal images of a tissue volume, which then can be used to 3-D reconstruct structures of interest. The operator first configures separate channels (e.g., laser, filters, and detector settings) for each applied fluorochrome and then acquires Z-series of confocal images: one series per channel. Channel signal separation is extremely important. Measures to avoid bleaching are vital. Post-acquisition deconvolution of the image series is often performed to increase resolution before 3-D reconstruction takes place. In the 3-D reconstruction programs described in this unit, reconstructions can be inspected in real time from any viewing angle. By altering viewing angles and by switching channels off and on, the spatial relationships of 3-D-reconstructed structures with respect to structures visualized in other channels can be studied. Since each brand of CLSM, computer program, and 3-D reconstruction package has its own proprietary set of procedures, a general approach is provided in this protocol wherever possible. PMID:24723320

  11. HOSVD-Based 3D Active Appearance Model: Segmentation of Lung Fields in CT Images.

    Science.gov (United States)

    Wang, Qingzhu; Kang, Wanjun; Hu, Haihui; Wang, Bin

    2016-07-01

    An Active Appearance Model (AAM) is a computer vision model which can be used to effectively segment lung fields in CT images. However, the fitting result is often inadequate when the lungs are affected by high-density pathologies. To overcome this problem, we propose a Higher-order Singular Value Decomposition (HOSVD)-based Three-dimensional (3D) AAM. An evaluation was performed on 310 diseased lungs from the Lung Image Database Consortium Image Collection. Other contemporary AAMs operate directly on patterns represented by vectors, i.e., before applying the AAM to a 3D lung volume, it has to be vectorized first into a vector pattern by some technique like concatenation. However, some implicit structural or local contextual information may be lost in this transformation. According to the nature of the 3D lung volume, HOSVD is introduced to represent and process the lung in tensor space. Our method can not only directly operate on the original 3D tensor patterns, but also efficiently reduce the computer memory usage. The evaluation resulted in an average Dice coefficient of 97.0 % ± 0.59 %, a mean absolute surface distance error of 1.0403 ± 0.5716 mm, a mean border positioning error of 0.9187 ± 0.5381 pixels, and a Hausdorff Distance of 20.4064 ± 4.3855. Experimental results showed that our method delivered significantly better segmentation results than three other model-based lung segmentation approaches, namely 3D Snake, 3D ASM and 3D AAM. PMID:27277277
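
    The sketch below shows a bare-bones truncated HOSVD (mode unfoldings, per-mode SVD, core tensor) of the kind used to represent volumes in tensor space; the ranks and test volume are arbitrary, and the full HOSVD-based 3D AAM of the paper involves far more than this decomposition.

      import numpy as np

      def unfold(tensor, mode):
          """Mode-n unfolding of a tensor into a matrix (mode-n fibers as rows)."""
          return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

      def hosvd(tensor, ranks):
          """Truncated higher-order SVD: per-mode factor matrices from the left
          singular vectors of each unfolding, plus the core tensor."""
          factors = []
          core = tensor
          for mode, r in enumerate(ranks):
              u, _, _ = np.linalg.svd(unfold(tensor, mode), full_matrices=False)
              factors.append(u[:, :r])
          for mode, u in enumerate(factors):
              core = np.moveaxis(np.tensordot(core, u.conj().T, axes=([mode], [1])), -1, mode)
          return core, factors

      def reconstruct(core, factors):
          out = core
          for mode, u in enumerate(factors):
              out = np.moveaxis(np.tensordot(out, u, axes=([mode], [1])), -1, mode)
          return out

      vol = np.random.rand(32, 32, 16)                    # stand-in for a lung sub-volume
      core, factors = hosvd(vol, ranks=(10, 10, 8))
      err = np.linalg.norm(vol - reconstruct(core, factors)) / np.linalg.norm(vol)
      print("relative reconstruction error:", round(float(err), 3))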

  12. 3D Elastic Registration of Ultrasound Images Based on Skeleton Feature

    Institute of Scientific and Technical Information of China (English)

    LI Dan-dan; LIU Zhi-Yan; SHEN Yi

    2005-01-01

    In order to eliminate displacement and elastic deformation between adjacent frames in the course of 3D ultrasound image reconstruction, elastic registration based on skeleton features is adopted in this paper. A new automatic skeleton-tracking extraction algorithm is presented, which extracts a connected skeleton to express the figure feature. Feature points of the connected skeleton are extracted automatically by repeatedly computing local curvature extrema. Initial registration is performed according to the barycenter of the skeleton. Afterwards, elastic registration based on radial basis functions is performed according to the skeleton feature points. Experimental results demonstrate that, compared with traditional rigid registration, elastic registration based on skeleton features retains the natural differences in shape between different parts of an organ, while simultaneously eliminating the slight elastic deformation between frames introduced by the image acquisition process. This algorithm has high practical value for image registration in the course of 3D ultrasound image reconstruction.
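
    A minimal sketch of the elastic step is given below: matched skeleton feature points from two adjacent frames drive a thin-plate-spline radial-basis-function interpolation of a dense displacement field. The point sets, kernel choice and frame size are illustrative assumptions, and the initial barycenter-based rigid alignment is omitted.

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      def skeleton_rbf_warp(src_feats, dst_feats, grid_shape):
          """Dense 2D displacement field from matched skeleton feature points of
          two adjacent ultrasound frames, interpolated with thin-plate-spline
          radial basis functions."""
          rbf = RBFInterpolator(src_feats, dst_feats - src_feats,
                                kernel="thin_plate_spline")
          yy, xx = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
          grid = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
          disp = rbf(grid).reshape(grid_shape[0], grid_shape[1], 2)
          return disp                                    # per-pixel (dy, dx)

      # Five matched feature points on a pair of 64x64 frames
      src = np.array([[10, 10], [10, 50], [32, 32], [54, 12], [54, 52]], float)
      dst = src + np.array([[1, 0], [0, -2], [2, 1], [-1, 0], [0, 1]], float)
      field = skeleton_rbf_warp(src, dst, (64, 64))
      print("displacement at the center:", field[32, 32])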

  13. A web-based solution for 3D medical image visualization

    Science.gov (United States)

    Hou, Xiaoshuai; Sun, Jianyong; Zhang, Jianguo

    2015-03-01

    We present a web-based 3D medical image visualization solution which enables interactive processing and visualization of large medical image data over the web platform. To improve the efficiency of our solution, we adopt GPU-accelerated techniques to process images on the server side while rapidly transferring images to the HTML5-supported web browser on the client side. Compared to traditional local visualization solutions, our solution doesn't require the users to install extra software or download the whole volume dataset from the PACS server. By designing this web-based solution, it is feasible for users to access the 3D medical image visualization service wherever the internet is available.

  14. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Liu Li

    2013-01-01

    Full Text Available Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, relatively little work has been done on 3D ultrasound speckle suppression, where the whole volume rather than just one frame needs to be considered. The most crucial problem with 3D US denoising is that the computational complexity increases tremendously. The nonlocal means (NLM) filter provides an effective method for speckle suppression in US images. In this paper, a programmable graphics-processing-unit (GPU) based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma distribution noise model, which is able to reliably capture image statistics for log-compressed ultrasound images, was used for the 3D block-wise NLM filter within a Bayesian framework. The most significant aspect of our method is the adoption of the powerful data-parallel computing capability of the GPU to improve overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm.
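
    For orientation, a plain CPU sketch of block-wise NLM weighting on a toy 3D volume is given below; it uses simple Gaussian weighting rather than the Gamma/Bayesian model of the paper and none of the GPU parallelization, so it only illustrates the patch-comparison idea.

```python
import numpy as np

def nlm_denoise_3d(vol, patch=1, search=3, h=0.5):
    """Naive block-wise NLM on a small 3D volume (patch/search are half-widths).
    Pure-Python loops: slow, for illustration only."""
    pad = patch + search
    padded = np.pad(vol, pad, mode="reflect")
    out = np.zeros_like(vol)
    for z in range(vol.shape[0]):
        for y in range(vol.shape[1]):
            for x in range(vol.shape[2]):
                zc, yc, xc = z + pad, y + pad, x + pad
                ref = padded[zc - patch:zc + patch + 1,
                             yc - patch:yc + patch + 1,
                             xc - patch:xc + patch + 1]
                weights, values = [], []
                for dz in range(-search, search + 1):
                    for dy in range(-search, search + 1):
                        for dx in range(-search, search + 1):
                            nb = padded[zc + dz - patch:zc + dz + patch + 1,
                                        yc + dy - patch:yc + dy + patch + 1,
                                        xc + dx - patch:xc + dx + patch + 1]
                            dist = np.mean((ref - nb) ** 2)    # patch similarity
                            weights.append(np.exp(-dist / h ** 2))
                            values.append(padded[zc + dz, yc + dy, xc + dx])
                weights = np.array(weights)
                out[z, y, x] = np.dot(weights, values) / weights.sum()
    return out

noisy = np.random.rand(8, 8, 8).astype(np.float32)
print(nlm_denoise_3d(noisy).shape)
```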

  15. Biodynamic Doppler imaging of subcellular motion inside 3D living tissue culture and biopsies (Conference Presentation)

    Science.gov (United States)

    Nolte, David D.

    2016-03-01

    Biodynamic imaging is an emerging 3D optical imaging technology that probes up to 1 mm deep inside three-dimensional living tissue using short-coherence dynamic light scattering to measure the intracellular motions of cells inside their natural microenvironments. Biodynamic imaging is label-free and non-invasive. The information content of biodynamic imaging is captured through tissue dynamics spectroscopy that displays the changes in the Doppler signatures from intracellular constituents in response to applied compounds. The affected dynamic intracellular mechanisms include organelle transport, membrane undulations, cytoskeletal restructuring, strain at cellular adhesions, cytokinesis, mitosis, exo- and endo-cytosis among others. The development of 3D high-content assays such as biodynamic profiling can become a critical new tool for assessing efficacy of drugs and the suitability of specific types of tissue growth for drug discovery and development. The use of biodynamic profiling to predict clinical outcome of living biopsies to cancer therapeutics can be developed into a phenotypic companion diagnostic, as well as a new tool for therapy selection in personalized medicine. This invited talk will present an overview of the optical, physical and physiological processes involved in biodynamic imaging. Several different biodynamic imaging modalities include motility contrast imaging (MCI), tissue-dynamics spectroscopy (TDS) and tissue-dynamics imaging (TDI). A wide range of potential applications will be described that include process monitoring for 3D tissue culture, drug discovery and development, cancer therapy selection, embryo assessment for in-vitro fertilization and artificial reproductive technologies, among others.

  16. Adaptive optofluidic lens(es) for switchable 2D and 3D imaging

    Science.gov (United States)

    Huang, Hanyang; Wei, Kang; Zhao, Yi

    2016-03-01

    The stereoscopic image is often captured using dual cameras arranged side-by-side together with optical path switching systems such as two separate solid lenses or biprisms/mirrors. Miniaturizing current stereoscopic devices down to several millimeters comes at a cost: the limited light entry worsens the final image resolution and brightness. It is known that optofluidics offers good re-configurability for imaging systems. Leveraging this technique, we report a reconfigurable optofluidic system whose optical layout can be swapped between a singlet lens of 10 mm in diameter and a pair of binocular lenses, each 3 mm in diameter, for switchable two-dimensional (2D) and three-dimensional (3D) imaging. The singlet and the binoculars share the same optical path and the same imaging sensor. The singlet acquires a 2D image with better resolution and brightness, while the binoculars capture stereoscopic image pairs for 3D vision and depth perception. The focusing power tuning capability of the singlet and the binoculars enables image acquisition at varied object planes by adjusting the hydrostatic pressure across the lens membrane. The vari-focal singlet and binoculars thus work interchangeably and complementarily. The device is thus expected to have applications in robotic vision, stereoscopy, laparoendoscopy and miniaturized zoom lens systems.

  17. 3D optical sectioning with a new hyperspectral confocal fluorescence imaging system.

    Energy Technology Data Exchange (ETDEWEB)

    Nieman, Linda T.; Sinclair, Michael B.; Davidson, George S.; Van Benthem, Mark Hilary; Haaland, David Michael; Timlin, Jerilyn Ann; Sasaki, Darryl Yoshio; Bachand, George David; Jones, Howland D. T.

    2007-02-01

    A novel hyperspectral fluorescence microscope for high-resolution 3D optical sectioning of cells and other structures has been designed, constructed, and used to investigate a number of different problems. We have significantly extended new multivariate curve resolution (MCR) data analysis methods to deconvolve the hyperspectral image data and to rapidly extract quantitative 3D concentration distribution maps of all emitting species. The imaging system has many advantages over current confocal imaging systems including simultaneous monitoring of numerous highly overlapped fluorophores, immunity to autofluorescence or impurity fluorescence, enhanced sensitivity, and dramatically improved accuracy, reliability, and dynamic range. Efficient data compression in the spectral dimension has allowed personal computers to perform quantitative analysis of hyperspectral images of large size without loss of image quality. We have also developed and tested software to perform analysis of time resolved hyperspectral images using trilinear multivariate analysis methods. The new imaging system is an enabling technology for numerous applications including (1) 3D composition mapping analysis of multicomponent processes occurring during host-pathogen interactions, (2) monitoring microfluidic processes, (3) imaging of molecular motors and (4) understanding photosynthetic processes in wild type and mutant Synechocystis cyanobacteria.

  18. Automated 3D-Objectdocumentation on the Base of an Image Set

    Directory of Open Access Journals (Sweden)

    Sebastian Vetter

    2011-12-01

    Full Text Available Digital stereo-photogrammetry allows an automatic evaluation of the spatial dimensions and the surface texture of objects. The integration of image analysis techniques simplifies the automated evaluation of large image sets and offers a high accuracy [1]. Due to the substantial similarity of stereoscopic image pairs, correlation techniques provide measurements of subpixel precision for corresponding image points. With the help of an automated point search algorithm, identical points found across the image set are used to associate pairs of images into stereo models and to group them. The identical points found in all images are the basis for calculating the relative orientation of each stereo model as well as for defining the relations between neighbouring stereo models. By using proper filter strategies, incorrect points are removed and the relative orientation of each stereo model can be determined automatically. With the help of 3D reference points or distances on the object, or a defined camera base distance, the stereo model is oriented absolutely. An adapted expansion and matching algorithm offers the possibility to scan the object surface automatically. The result is a three-dimensional point cloud; the scan resolution depends on image quality. With the iterative closest point (ICP) algorithm, these partial point clouds are fitted into a total point cloud, as sketched below. In this way, 3D reference points are not necessary. With the help of the implemented triangulation algorithm, a digital surface model (DSM) can be created. The texturing can be done automatically by using the images that were used for scanning the object surface. It is possible to texture the surface model directly or to generate orthophotos automatically. By using calibrated digital SLR cameras with a full-frame sensor, a high accuracy can be reached. A big advantage is the possibility to control the accuracy and quality of the 3D object documentation with the resolution of the images. The
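
    A minimal point-to-point ICP sketch under the usual assumptions (reasonable initial overlap, rigid transform, SVD-based alignment); this is a generic illustration of the fitting step, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Iteratively align partial point cloud `src` to `dst`."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)           # nearest-neighbour correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur

# Toy data: a rotated and translated copy of a random cloud.
rng = np.random.default_rng(0)
dst = rng.random((200, 3))
angle = 0.1
R0 = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
src = dst @ R0.T + np.array([0.05, -0.02, 0.01])
aligned = icp(src, dst)
print(np.abs(aligned - dst).max())
```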

  19. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Directory of Open Access Journals (Sweden)

    Bashar Alsadik

    2014-03-01

    Full Text Available 3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  20. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the famous Iraqi statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images was captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  1. Joint Multichannel Motion Compensation Method for MIMO SAR 3D Imaging

    Directory of Open Access Journals (Sweden)

    Ze-min Yang

    2015-01-01

    Full Text Available The multiple-input multiple-output (MIMO) synthetic aperture radar (SAR) system with a linear antenna array can obtain 3D resolution. In practice, it suffers from both translational and rotational motion errors. Conventional single-channel motion compensation methods could be used to compensate the motion errors channel by channel. However, this might not be accurate enough for all channels. Moreover, single-channel compensation may break the coherence among channels, which would cause defocusing and false targets. In this paper, both the translational and the rotational motion errors are discussed, and a joint multichannel motion compensation method is proposed for MIMO SAR 3D imaging. It is demonstrated through simulations that the proposed method exceeds conventional methods in accuracy, and the final MIMO SAR 3D imaging simulation confirms the validity of the proposed algorithm.

  2. Computer-Assisted Hepatocellular Carcinoma Ablation Planning Based on 3-D Ultrasound Imaging.

    Science.gov (United States)

    Li, Kai; Su, Zhongzhen; Xu, Erjiao; Guan, Peishan; Li, Liu-Jun; Zheng, Rongqin

    2016-08-01

    To evaluate computer-assisted hepatocellular carcinoma (HCC) ablation planning based on 3-D ultrasound, 3-D ultrasound images of 60 HCC lesions from 58 patients were obtained and transferred to a research toolkit. Compared with virtual manual ablation planning (MAP), virtual computer-assisted ablation planning (CAP) required less time and fewer needle insertions and exhibited a higher rate of complete tumor coverage and a lower rate of critical structure injury. In MAP, junior operators used less time but caused more critical structure injuries than senior operators. For large lesions, CAP performed better than MAP. For lesions near critical structures, CAP resulted in better outcomes than MAP. Compared with MAP, CAP based on 3-D ultrasound imaging was more effective and achieved a higher rate of complete tumor coverage and a lower rate of critical structure injury; it is especially useful for junior operators, for large lesions, and for lesions near critical structures. PMID:27126243

  3. High-resolution 3D X-ray imaging of intracranial nitinol stents

    Energy Technology Data Exchange (ETDEWEB)

    Snoeren, Rudolph M.; With, Peter H.N. de [Eindhoven University of Technology (TU/e), Faculty Electrical Engineering, Signal Processing Systems group (SPS), Eindhoven (Netherlands); Soederman, Michael [Karolinska University Hospital, Department of Neuroradiology, Stockholm (Sweden); Kroon, Johannes N.; Roijers, Ruben B.; Babic, Drazenko [Philips Healthcare, Best (Netherlands)

    2012-02-15

    To assess an optimized 3D imaging protocol for intracranial nitinol stents in 3D C-arm flat detector imaging. For this purpose, an image quality simulation and an in vitro study were carried out. Nitinol stents of various brands were placed inside an anthropomorphic head phantom, using iodine contrast. Experiments with objects were preceded by image quality and dose simulations. We varied X-ray imaging parameters in a commercial interventional X-ray system to set 3D image quality in the contrast-noise-sharpness space. Beam quality was varied to evaluate contrast of the stents while keeping absorbed dose below recommended values. Two detector formats were used, paired with an appropriate pixel size and X-ray focus size. Zoomed reconstructions were carried out and snapshot images acquired. High contrast spatial resolution was assessed with a CT phantom. We found an optimal protocol for imaging intracranial nitinol stents. Contrast resolution was optimized for nickel-titanium-containing stents. A high spatial resolution of more than 2.1 lp/mm allows the struts to be visualized. We obtained images of stents of various brands and a representative set of images is shown. Independent of the make, struts can be imaged with virtually continuous strokes. Measured absorbed doses are shown to be lower than 50 mGy Computed Tomography Dose Index (CTDI). By balancing the modulation transfer of the imaging components and tuning the high-contrast imaging capabilities, we have shown that thin nitinol stent wires can be reconstructed with high contrast-to-noise ratio and good detail, while keeping radiation doses within recommended values. Experimental results compare well with imaging simulations. (orig.)

  4. Fast isotropic banding-free bSSFP imaging using 3D dynamically phase-cycled radial bSSFP (3D DYPR-SSFP)

    Energy Technology Data Exchange (ETDEWEB)

    Benkert, Thomas; Blaimer, Martin; Breuer, Felix A. [Research Center Magnetic Resonance Bavaria (MRB), Wuerzburg (Germany); Ehses, Philipp [Tuebingen Univ. (Germany). Dept. of Neuroimaging; Max Planck Institute for Biological Cybernetics, Tuebingen (Germany). High-Field MR Center; Jakob, Peter M. [Research Center Magnetic Resonance Bavaria (MRB), Wuerzburg (Germany); Wuerzburg Univ. (Germany). Dept. of Experimental Physics 5

    2016-05-01

    Aims: Dynamically phase-cycled radial balanced steady-state free precession (DYPR-SSFP) is a method for efficient banding artifact removal in bSSFP imaging. Based on a varying radiofrequency (RF) phase-increment in combination with a radial trajectory, DYPR-SSFP allows obtaining a banding-free image out of a single acquired k-space. The purpose of this work is to present an extension of this technique, enabling fast three-dimensional isotropic banding-free bSSFP imaging. Methods: While banding artifact removal with DYPR-SSFP relies on the applied dynamic phase-cycle, this aspect can lead to artifacts, at least when the number of acquired projections lies below a certain limit. However, by using a 3D radial trajectory with quasi-random view ordering for image acquisition, this problem is intrinsically solved, enabling 3D DYPR-SSFP imaging at or even below the Nyquist criterion. The approach is validated for brain and knee imaging at 3 Tesla. Results: Volumetric, banding-free images were obtained in clinically acceptable scan times with an isotropic resolution up to 0.56 mm. Conclusion: The combination of DYPR-SSFP with a 3D radial trajectory allows banding-free isotropic volumetric bSSFP imaging with no expense of scan time. Therefore, this is a promising candidate for clinical applications such as imaging of cranial nerves or articular cartilage.

  5. An active system for visually-guided reaching in 3D across binocular fixations.

    Science.gov (United States)

    Martinez-Martin, Ester; del Pobil, Angel P; Chessa, Manuela; Solari, Fabio; Sabatini, Silvio P

    2014-01-01

    Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. So, the motor information is coded in egocentric coordinates obtained from the allocentric representation of the space (in terms of disparity) generated from the egocentric representation of the visual information (image coordinates). In that way, the different aspects of the visuomotor coordination are integrated: an active vision system, composed of two vergent cameras; a module for the 2D binocular disparity estimation based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator to perform the corresponding tasks (visually-guided reaching). The approach's performance is evaluated through experiments on both simulated and real data. PMID:24672295

  7. An Active System for Visually-Guided Reaching in 3D across Binocular Fixations

    Directory of Open Access Journals (Sweden)

    Ester Martinez-Martin

    2014-01-01

    Full Text Available Based on the importance of relative disparity between objects for accurate hand-eye coordination, this paper presents a biological approach inspired by the cortical neural architecture. So, the motor information is coded in egocentric coordinates obtained from the allocentric representation of the space (in terms of disparity) generated from the egocentric representation of the visual information (image coordinates). In that way, the different aspects of the visuomotor coordination are integrated: an active vision system, composed of two vergent cameras; a module for the 2D binocular disparity estimation based on a local estimation of phase differences performed through a bank of Gabor filters; and a robotic actuator to perform the corresponding tasks (visually-guided reaching). The approach’s performance is evaluated through experiments on both simulated and real data.
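
    A toy one-dimensional illustration of phase-based disparity estimation with a complex Gabor filter is sketched below; it follows the generic textbook formulation rather than the authors' filter bank, and the carrier frequency and shift values are arbitrary.

```python
import numpy as np

def gabor_response(signal, freq, sigma=8.0):
    """Convolve a 1-D signal with a complex Gabor filter."""
    x = np.arange(-4 * sigma, 4 * sigma + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * freq * x)
    return np.convolve(signal, kernel, mode="same")

# Left/right "scanlines": the right one is a shifted copy of the left.
rng = np.random.default_rng(1)
left = rng.random(256)
true_shift = 3
right = np.roll(left, true_shift)

freq = 0.05                      # cycles per pixel of the Gabor carrier
rl = gabor_response(left, freq)
rr = gabor_response(right, freq)

# Disparity from the local phase difference: d = delta_phi / (2*pi*f).
dphi = np.angle(rl * np.conj(rr))
disparity = dphi / (2 * np.pi * freq)
print(np.median(disparity[32:-32]))   # roughly the true shift of 3 pixels
```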

  8. 3D surface scan of biological samples with a Push-broom Imaging Spectrometer

    Science.gov (United States)

    Yao, Haibo; Kincaid, Russell; Hruska, Zuzana; Brown, Robert L.; Bhatnagar, Deepak; Cleveland, Thomas E.

    2013-08-01

    The food industry is always on the lookout for sensing technologies for rapid and nondestructive inspection of food products. Hyperspectral imaging technology integrates both imaging and spectroscopy into unique imaging sensors. Its application for food safety and quality inspection has made significant progress in recent years. Specifically, hyperspectral imaging has shown its potential for surface contamination detection in many food related applications. Most existing hyperspectral imaging systems use pushbroom scanning which is generally used for flat surface inspection. In some applications it is desirable to be able to acquire hyperspectral images on circular objects such as corn ears, apples, and cucumbers. Past research describes inspection systems that examine all surfaces of individual objects. Most of these systems did not employ hyperspectral imaging. These systems typically utilized a roller to rotate an object, such as an apple. During apple rotation, the camera took multiple images in order to cover the complete surface of the apple. The acquired image data lacked the spectral component present in a hyperspectral image. This paper discusses the development of a hyperspectral imaging system for a 3-D surface scan of biological samples. The new instrument is based on a pushbroom hyperspectral line scanner using a rotational stage to turn the sample. The system is suitable for whole surface hyperspectral imaging of circular objects. In addition to its value to the food industry, the system could be useful for other applications involving 3-D surface inspection.

  9. Measuring Femoral Torsion In Vivo Using Freehand 3-D Ultrasound Imaging.

    Science.gov (United States)

    Passmore, Elyse; Pandy, Marcus G; Graham, H Kerr; Sangeux, Morgan

    2016-02-01

    Despite variation in bone geometry, muscle and joint function is often investigated using generic musculoskeletal models. Patient-specific bone geometry can be obtained from computerised tomography, which involves ionising radiation, or magnetic resonance imaging (MRI), which is costly and time consuming. Freehand 3-D ultrasound provides an alternative to obtain bony geometry. The purpose of this study was to determine the accuracy and repeatability of 3-D ultrasound in measuring femoral torsion. Measurements of femoral torsion were performed on 10 healthy adults using MRI and 3-D ultrasound. Measurements of femoral torsion from 3-D ultrasound were, on average, smaller than those from MRI (mean difference = 1.8°; 95% confidence interval: -3.9°, 7.5°). MRI and 3-D ultrasound had Bland and Altman repeatability coefficients of 3.1° and 3.7°, respectively. Accurate measurements of femoral torsion were obtained with 3-D ultrasound offering the potential to acquire patient-specific bone geometry for musculoskeletal modelling. Three-dimensional ultrasound is non-invasive and relatively inexpensive and can be integrated into gait analysis.

  10. A new combined prior based reconstruction method for compressed sensing in 3D ultrasound imaging

    Science.gov (United States)

    Uddin, Muhammad S.; Islam, Rafiqul; Tahtali, Murat; Lambert, Andrew J.; Pickering, Mark R.

    2015-03-01

    Ultrasound (US) imaging is one of the most popular medical imaging modalities, with 3D US imaging gaining popularity recently due to its considerable advantages over 2D US imaging. However, as it is limited by long acquisition times and the huge amount of data processing it requires, methods for reducing these factors have attracted considerable research interest. Compressed sensing (CS) is one of the best candidates for accelerating the acquisition rate and reducing the data processing time without degrading image quality. However, CS is prone to introducing noise-like artefacts due to random under-sampling. To address this issue, we propose a combined prior-based reconstruction method for 3D US imaging. A Laplacian mixture model (LMM) constraint in the wavelet domain is combined with a total variation (TV) constraint to create a new regularization prior. An experimental evaluation conducted to validate our method using synthetic 3D US images shows that it performs better than other approaches in terms of both qualitative and quantitative measures.
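
    For readers unfamiliar with CS recovery, a generic ISTA (iterative soft-thresholding) sketch on a synthetic sparse signal is given below; it only illustrates the recovery principle and does not implement the LMM/TV combined prior proposed above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse ground truth and random under-sampled measurements y = A @ x.
n, m, k = 200, 80, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true

# ISTA: gradient step on ||y - Ax||^2 followed by soft-thresholding.
L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
lam = 0.02
x = np.zeros(n)
for _ in range(500):
    x = x + (A.T @ (y - A @ x)) / L
    x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```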

  11. Slanted channel microfluidic chip for 3D fluorescence imaging of cells in flow.

    Science.gov (United States)

    Jagannadh, Veerendra Kalyan; Mackenzie, Mark D; Pal, Parama; Kar, Ajoy K; Gorthi, Sai Siva

    2016-09-19

    Three-dimensional cellular imaging techniques have become indispensable tools in biological research and medical diagnostics. Conventional 3D imaging approaches employ focal stack collection to image different planes of the cell. In this work, we present the design and fabrication of a slanted channel microfluidic chip for 3D fluorescence imaging of cells in flow. The approach employs slanted microfluidic channels fabricated in glass using ultrafast laser inscription. The slanted nature of the microfluidic channels ensures that samples come into and go out of focus, as they pass through the microscope imaging field of view. This novel approach enables the collection of focal stacks in a straight-forward and automated manner, even with off-the-shelf microscopes that are not equipped with any motorized translation/rotation sample stages. The presented approach not only simplifies conventional focal stack collection, but also enhances the capabilities of a regular widefield fluorescence microscope to match the features of a sophisticated confocal microscope. We demonstrate the retrieval of sectioned slices of microspheres and cells, with the use of computational algorithms to enhance the signal-to-noise ratio (SNR) in the collected raw images. The retrieved sectioned images have been used to visualize fluorescent microspheres and bovine sperm cell nucleus in 3D while using a regular widefield fluorescence microscope. We have been able to achieve sectioning of approximately 200 slices per cell, which corresponds to a spatial translation of ∼ 15 nm per slice along the optical axis of the microscope.

  12. Live 3D image overlay for arterial duct closure with Amplatzer Duct Occluder II additional size.

    Science.gov (United States)

    Goreczny, Sebastian; Morgan, Gareth J; Dryzek, Pawel

    2016-03-01

    Despite several reports describing echocardiography for the guidance of ductal closure, two-dimensional angiography remains the mainstay imaging tool; three-dimensional rotational angiography has the potential to overcome some of the drawbacks of standard angiography, and reconstructed image overlay provides reliable guidance for device placement. We describe arterial duct closure solely from a venous approach guided by live three-dimensional image overlay.

  13. Reproducibility study of 3D SSFP phase-based brain conductivity imaging

    NARCIS (Netherlands)

    Stehning, C.; Katscher, U.; Keupp, J.

    2012-01-01

    Noninvasive MR-based Electric Properties Tomography (EPT) forms a framework for an accurate determination of local SAR, and may provide a diagnostic parameter in oncology. 3D SSFP sequences were found to be a promising candidate for fast volumetric conductivity imaging. In this work, an in vivo study

  14. 3D imaging by serial block face scanning electron microscopy for materials science using ultramicrotomy.

    Science.gov (United States)

    Hashimoto, Teruo; Thompson, George E; Zhou, Xiaorong; Withers, Philip J

    2016-04-01

    Mechanical serial block face scanning electron microscopy (SBFSEM) has emerged as a means of obtaining three dimensional (3D) electron images over volumes much larger than possible by focused ion beam (FIB) serial sectioning and at higher spatial resolution than achievable with conventional X-ray computed tomography (CT). Such high resolution 3D electron images can be employed for precisely determining the shape, volume fraction, distribution and connectivity of important microstructural features. While soft (fixed or frozen) biological samples are particularly well suited for nanoscale sectioning using an ultramicrotome, the technique can also produce excellent 3D images at electron microscope resolution in a time and resource-efficient manner for engineering materials. Currently, a lack of appreciation of the capabilities of ultramicrotomy and the operational challenges associated with minimising artefacts for different materials is limiting its wider application to engineering materials. Consequently, this paper outlines the current state of the art for SBFSEM examining in detail how damage is introduced during slicing and highlighting strategies for minimising such damage. A particular focus of the study is the acquisition of 3D images for a variety of metallic and coated systems. PMID:26855205

  15. 3D space perception as embodied cognition in the history of art images

    Science.gov (United States)

    Tyler, Christopher W.

    2014-02-01

    Embodied cognition is a concept that provides a deeper understanding of the aesthetics of art images. This study considers the role of embodied cognition in the appreciation of 3D pictorial space, 4D action space, its extension through mirror reflection to embodied self-cognition, and its relation to the neuroanatomical organization of the aesthetic response.

  16. Improving Low-dose Cardiac CT Images based on 3D Sparse Representation

    Science.gov (United States)

    Shi, Luyao; Hu, Yining; Chen, Yang; Yin, Xindao; Shu, Huazhong; Luo, Limin; Coatrieux, Jean-Louis

    2016-03-01

    Cardiac computed tomography (CCT) is a reliable and accurate tool for diagnosis of coronary artery diseases and is also frequently used in surgery guidance. Low-dose scans should be considered in order to alleviate the harm to patients caused by X-ray radiation. However, low dose CT (LDCT) images tend to be degraded by quantum noise and streak artifacts. In order to improve the cardiac LDCT image quality, a 3D sparse representation-based processing (3D SR) is proposed by exploiting the sparsity and regularity of 3D anatomical features in CCT. The proposed method was evaluated by a clinical study of 14 patients. The performance of the proposed method was compared to the 2D sparse representation-based processing (2D SR) and the state-of-the-art noise reduction algorithm BM4D. The visual, quantitative and qualitative assessment results show that the proposed approach can lead to effective noise/artifact suppression and detail preservation. Compared to the other two tested methods, the 3D SR method obtains results with image quality closest to the reference standard dose CT (SDCT) images.
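
    The core of sparse-representation processing is coding each (2D or 3D) patch over a dictionary; a generic orthogonal matching pursuit sketch is shown below. The paper's pipeline (learned 3D dictionaries, patch aggregation) is more involved; the dictionary and signal here are random placeholders.

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal matching pursuit: approximate y with k atoms of dictionary D."""
    residual, support = y.copy(), []
    coef = np.zeros(D.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(D.T @ residual)))   # most correlated atom
        support.append(j)
        sub = D[:, support]
        sol, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ sol
    coef[support] = sol
    return coef

# Toy dictionary with unit-norm atoms and a 3-sparse signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
true = np.zeros(256)
true[[5, 40, 200]] = [1.0, -0.7, 0.5]
y = D @ true
print(np.flatnonzero(omp(D, y, 3)))   # typically recovers atoms 5, 40, 200
```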

  17. MONO-PULSE RADAR 3-D IMAGING TECHNIQUES FOR TARGET IN STEPPED TRACKING MODE

    Institute of Scientific and Technical Information of China (English)

    Zhang Tao; Ma Changzheng; Zhang Qun; Zhang Shouhong

    2002-01-01

    A method for mono-pulse radar 3-D imaging of targets in stepped tracking mode is presented. The amplitude linear modulation of the error signals in stepped tracking mode is analyzed and a compensation method is given, so that the problem of precise target tracking is solved. Finally, the validity of these methods is proven by simulation results.

  18. Multi-detector CT and 3D imaging in a multi-vendor PACS environment

    NARCIS (Netherlands)

    van Ooijen, PMA; Witkamp, R; Oudkerk, M; Lemke, HU; Inamura, K; Doi, K; Vannier, MW; Farman, AG; Reiber, JHC

    2003-01-01

    The introduction of new hardware and software techniques like Multi-Detector Computed Tomography (MDCT) and 3D imaging has put new demands on the Picture Archiving and Communications System (PACS) environment within the radiology department. The daily use of these new techniques requires a good integration

  19. Exploring 2D/3D input techniques for medical image analysis

    NARCIS (Netherlands)

    E.V. Zudilova-Seinstra; P.M.A. Sloot; P.J.H. de Koning; A. Suinesiaputra; R.J. van der Geest; J.H.C. Reiber

    2009-01-01

    We describe a series of experiments that compared the 2D and 3D input methods for selection and positioning tasks related to medical image analysis. For this study, we chose a switchable P5 glove controller, which can be used to provide both 2DOF and 6DOF input control. Our results suggest that for

  20. Evaluation of 2D and 3D glove input applied to medical image analysis

    NARCIS (Netherlands)

    E.V. Zudilova-Seinstra; P.J.H. de Koning; A. Suinesiaputra; B.W. van Schooten; R.J. van der Geest; J.H.C. Reiber; P.M.A. Sloot

    2010-01-01

    We describe a series of experiments that compared 2D/3D input methods for selection and positioning tasks related to medical image analysis. For our study, we chose a switchable P5 Glove Controller, which can be used to provide both 2DOF and 6DOF input control. Our results suggest that for both task

  1. 3D imaging by serial block face scanning electron microscopy for materials science using ultramicrotomy.

    Science.gov (United States)

    Hashimoto, Teruo; Thompson, George E; Zhou, Xiaorong; Withers, Philip J

    2016-04-01

    Mechanical serial block face scanning electron microscopy (SBFSEM) has emerged as a means of obtaining three dimensional (3D) electron images over volumes much larger than possible by focused ion beam (FIB) serial sectioning and at higher spatial resolution than achievable with conventional X-ray computed tomography (CT). Such high resolution 3D electron images can be employed for precisely determining the shape, volume fraction, distribution and connectivity of important microstructural features. While soft (fixed or frozen) biological samples are particularly well suited for nanoscale sectioning using an ultramicrotome, the technique can also produce excellent 3D images at electron microscope resolution in a time and resource-efficient manner for engineering materials. Currently, a lack of appreciation of the capabilities of ultramicrotomy and the operational challenges associated with minimising artefacts for different materials is limiting its wider application to engineering materials. Consequently, this paper outlines the current state of the art for SBFSEM examining in detail how damage is introduced during slicing and highlighting strategies for minimising such damage. A particular focus of the study is the acquisition of 3D images for a variety of metallic and coated systems.

  2. Effects of CT image segmentation methods on the accuracy of long bone 3D reconstructions.

    Science.gov (United States)

    Rathnayaka, Kanchana; Sahama, Tony; Schuetz, Michael A; Schmutz, Beat

    2011-03-01

    An accurate and accessible image segmentation method is in high demand for generating 3D bone models from CT scan data, as such models are required in many areas of medical research. Even though numerous sophisticated segmentation methods have been published over the years, most of them are not readily available to the general research community. Therefore, this study aimed to quantify the accuracy of three popular image segmentation methods, two implementations of intensity thresholding and Canny edge detection, for generating 3D models of long bones. In order to reduce user-dependent errors associated with visually selecting a threshold value, we present a new approach for selecting an appropriate threshold value based on the Canny filter. A mechanical contact scanner in conjunction with a microCT scanner was utilised to generate the reference models for validating the 3D bone models generated from CT data of five intact ovine hind limbs. When the overall accuracy of the bone model is considered, the three investigated segmentation methods generated comparable results with mean errors in the range of 0.18-0.24 mm. However, for the bone diaphysis, Canny edge detection and Canny-filter-based thresholding generated 3D models with a significantly higher accuracy compared to those generated through visually selected thresholds. This study demonstrates that 3D models with sub-voxel accuracy can be generated utilising relatively simple segmentation methods that are available to the general research community.
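
    To make the thresholding-to-model step concrete, here is a generic sketch that thresholds a synthetic volume and extracts a triangulated surface with scikit-image's marching cubes; the global threshold used here is a placeholder, not the Canny-based selection proposed in the paper.

```python
import numpy as np
from skimage import measure

# Synthetic "CT" volume: a bright sphere (bone-like) on a dark, noisy background.
z, y, x = np.mgrid[-32:32, -32:32, -32:32]
volume = 1000.0 * (np.sqrt(x**2 + y**2 + z**2) < 20) + 50.0 * np.random.rand(64, 64, 64)

threshold = 500.0                       # placeholder intensity threshold (HU-like units)
verts, faces, normals, values = measure.marching_cubes(volume, level=threshold)
print(f"{len(verts)} vertices, {len(faces)} triangular faces")
```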

  3. 3D automatic liver segmentation using feature-constrained Mahalanobis distance in CT images.

    Science.gov (United States)

    Salman Al-Shaikhli, Saif Dawood; Yang, Michael Ying; Rosenhahn, Bodo

    2016-08-01

    Automatic 3D liver segmentation is a fundamental step in liver disease diagnosis and surgery planning. This paper presents a novel fully automatic algorithm for 3D liver segmentation in clinical 3D computed tomography (CT) images. Based on image features, we propose a new Mahalanobis distance cost function using an active shape model (ASM). We call our method MD-ASM. Unlike the standard active shape model (ST-ASM), the proposed method introduces a new feature-constrained Mahalanobis distance cost function to measure the distance between the shape generated during the iterative step and the mean shape model. The proposed Mahalanobis distance function is learned from a public liver segmentation challenge database (MICCAI-SLiver07). As a refinement step, we propose the use of a 3D graph-cut segmentation. Foreground and background labels are automatically selected using texture features of the learned Mahalanobis distance. Quantitatively, the proposed method is evaluated using two clinical 3D CT scan databases (MICCAI-SLiver07 and MIDAS). The evaluation on the MICCAI-SLiver07 database is performed by the challenge organizers using five different metric scores. The experimental results demonstrate the capability of the proposed method by achieving an accurate liver segmentation compared to the state-of-the-art methods. PMID:26501155
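
    As a reminder of the basic quantity involved, the Mahalanobis distance of a feature (or shape) vector x from a mean mu under covariance S is sqrt((x - mu)^T S^-1 (x - mu)); the sketch below computes it for toy data and is generic, not the feature-constrained cost function of MD-ASM.

```python
import numpy as np

def mahalanobis(x, mu, cov):
    """Mahalanobis distance of x from a distribution with mean mu and covariance cov."""
    diff = x - mu
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

# Toy "shape vectors": distance of a candidate shape from the mean shape model.
rng = np.random.default_rng(0)
samples = rng.multivariate_normal(mean=[0, 0, 0], cov=np.diag([1.0, 4.0, 0.25]), size=500)
mu, cov = samples.mean(axis=0), np.cov(samples, rowvar=False)
candidate = np.array([1.0, 2.0, -0.5])
print(mahalanobis(candidate, mu, cov))
```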

  4. Correction of a Depth-Dependent Lateral Distortion in 3D Super-Resolution Imaging.

    Directory of Open Access Journals (Sweden)

    Lina Carlini

    Full Text Available Three-dimensional (3D) localization-based super-resolution microscopy (SR) requires correction of aberrations to accurately represent 3D structure. Here we show how a depth-dependent lateral shift in the apparent position of a fluorescent point source, which we term "wobble", results in warped 3D SR images, and we provide a software tool to correct this distortion. This system-specific lateral shift is typically > 80 nm across an axial range of ~ 1 μm. A theoretical analysis based on phase retrieval data from our microscope suggests that the wobble is caused by non-rotationally symmetric phase and amplitude aberrations in the microscope's pupil function. We then apply our correction to the bacterial cytoskeletal protein FtsZ in live bacteria and demonstrate that the corrected data more accurately represent the true shape of this vertically-oriented ring-like structure. We also include this correction method in a registration procedure for dual-color, 3D SR data and show that it improves the target registration error (TRE) at the axial limits over an imaging depth of 1 μm, yielding TRE values of < 20 nm. This work highlights the importance of correcting aberrations in 3D SR to achieve high fidelity between the measurements and the sample.
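
    One plausible (and purely illustrative) form of such a correction is to calibrate the lateral offset as a function of depth and subtract it from each localization; the calibration curves and localization values below are made-up placeholders, not the paper's software tool.

```python
import numpy as np

# Calibration: apparent lateral (x, y) offset of a bead as a function of depth z (nm, toy values).
z_cal = np.linspace(-500.0, 500.0, 11)
dx_cal = 0.08 * z_cal                              # hypothetical wobble curve in x
dy_cal = 20.0 * np.sin(z_cal / 300.0)              # hypothetical wobble curve in y

def correct_wobble(locs):
    """Subtract the depth-dependent lateral shift from (x, y, z) localizations."""
    x, y, z = locs.T
    x_corr = x - np.interp(z, z_cal, dx_cal)
    y_corr = y - np.interp(z, z_cal, dy_cal)
    return np.column_stack([x_corr, y_corr, z])

locs = np.array([[1000.0, 2000.0, -250.0],
                 [1005.0, 1998.0,  300.0]])
print(correct_wobble(locs))
```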

  5. 3D didactic model and useful guide of the semicircular canals (Modelo didático 3D e guia útil dos canais semicirculares)

    Directory of Open Access Journals (Sweden)

    Ricardo D'Albora Rivas

    2011-06-01

    Full Text Available Knowledge of the anatomy and physiology of the semicircular canals and their central pathways is essential for the diagnosis of vestibular pathology. This three-dimensional (3D) scheme of the semicircular canals (SSCC) is a teaching tool and a useful reference guide for rapid consultation. MATERIAL AND METHODS: A multicolored cardboard model is accompanied by a user manual which provides a thorough description of the tool for the most common vestibular diseases. RESULTS: Although results cannot be quantitatively assessed, the model has been well received at several Latin American scientific conferences. The model is often understood with verbal instruction only; nevertheless, a printed user manual is included. CONCLUSIONS: This three-dimensional (3D) model of the semicircular canals (SSCC) is a practical, low-cost tool for use in private and academic settings. The identification of certain vestibular disorders requires prior knowledge of the anatomy and physiology of the semicircular canals (SCC) and of their central connections, which present three-dimensional anatomical and functional complexity. OBJECTIVE: To propose an anatomical and functional model of the SCC, in three dimensions (3D), to serve as a teaching tool and a useful quick-reference guide. MATERIAL AND METHODS: The model is printed on cardboard in different colors and accompanied by a 22-page explanatory text detailing its topographic and descriptive presentation and its use, based on examples of the most frequent vestibular diseases. RESULTS: Although the results cannot be assessed numerically, this model has been understood by a number of specialists and has been widely used by them. In addition, it has been presented at several Latin American scientific events with excellent acceptance. CONCLUSION: It is a useful, low-cost tool for teaching and for daily clinical practice in otoneurology.

  6. Quantitative roughness characterization and 3D reconstruction of electrode surface using cyclic voltammetry and SEM image

    Energy Technology Data Exchange (ETDEWEB)

    Dhillon, Shweta; Kant, Rama, E-mail: rkant@chemistry.du.ac.in

    2013-10-01

    Area measurements from cyclic voltammetry (CV) and images from scanning electron microscopy (SEM) are used to characterize the statistical morphology of an electrode, reconstruct its 3D surface and assess its electroactivity. SEM images of single-phased materials correspond to two-dimensional (2D) projections of 3D structures, leading to an incomplete characterization. The lack of third-dimension information in the SEM image is circumvented using the equivalence between the denoised SEM image and CV area measurements. This CV-SEM method can be used to estimate the power spectral density (PSD), width, gradient, finite fractal nature of roughness and local morphology of the electrode. We show that a surface morphological statistical property like the distribution function of the gradient can be related to local electroactivity. The electrode surface gradient micrographs generated here can provide a map of electroactive sites. Finally, the densely and uniformly packed small gradient over the Pt surface is the determining criterion for high intrinsic electrode activity.

  7. Review of 3D image data calibration for heterogeneity correction in proton therapy treatment planning.

    Science.gov (United States)

    Zhu, Jiahua; Penfold, Scott N

    2016-06-01

    Correct modelling of the interaction parameters of patient tissues is of vital importance in proton therapy treatment planning because of the large dose gradients associated with the Bragg peak. Different 3D imaging techniques yield different information regarding these interaction parameters. Given the rapidly expanding interest in proton therapy, this review is written to make readers aware of the current challenges in accounting for tissue heterogeneities and the imaging systems that are proposed to tackle these challenges. A summary of the interaction parameters of interest in proton therapy and the current and developmental 3D imaging techniques used in proton therapy treatment planning is given. The different methods to translate the imaging data to the interaction parameters of interest are reviewed and a summary of the implementations in several commercial treatment planning systems is presented. PMID:27115163

  8. Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy

    Directory of Open Access Journals (Sweden)

    Emilio J Gualda

    2014-08-01

    Full Text Available The development of three-dimensional cell cultures represents a big step towards a better understanding of cell behavior and disease in a more natural-like environment, providing not only single but multiple cell type interactions in a complex three-dimensional matrix that highly resembles physiological conditions. Light sheet fluorescence microscopy is becoming an excellent tool for fast imaging of such three-dimensional biological structures. We demonstrate the potential of this technique for the imaging of human differentiated 3D neural aggregates in fixed and live samples, namely calcium imaging and cell death processes, showing the power of this imaging modality compared with traditional microscopy. The combination of light sheet microscopy and 3D neural cultures will open the door to more challenging experiments involving drug testing at large scale, as well as to a better understanding of relevant biological processes in a more realistic environment.

  9. Structured light 3D tracking system for measuring motions in PET brain imaging

    DEFF Research Database (Denmark)

    Olesen, Oline Vinter; Jørgensen, Morten Rudkjær; Paulsen, Rasmus Reinhold;

    2010-01-01

    Patient motion during scanning deteriorates image quality, especially for high resolution PET scanners. A new proposal for a 3D head tracking system for motion correction in high resolution PET brain imaging is set up and demonstrated. A prototype tracking system based on structured light with a DLP projector and a CCD camera is set up on a model of the High Resolution Research Tomograph (HRRT). Methods to reconstruct 3D point clouds of simple surfaces based on phase-shifting interferometry (PSI) are demonstrated. The projector and camera are calibrated using a simple stereo vision procedure where the projector is treated as a camera. Additionally, the surface reconstructions are corrected for the non-linear projector output prior to image capture. The results are convincing and a first step toward a fully automated tracking system for measuring head motions in PET imaging.
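
    The phase-shifting step mentioned above has, in its standard four-step form, a closed-form solution; the sketch below recovers a wrapped phase map from four simulated fringe images and is generic, not tied to the authors' projector-camera setup.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by 0, pi/2, pi, 3*pi/2."""
    return np.arctan2(i4 - i2, i1 - i3)

# Simulate fringe images of a phase ramp and recover the phase.
x = np.linspace(0, 4 * np.pi, 256)
phase_true = np.tile(x, (256, 1))                 # phase ramp across the image
shifts = [0.0, np.pi / 2, np.pi, 3 * np.pi / 2]
frames = [0.5 + 0.5 * np.cos(phase_true + s) for s in shifts]

wrapped = four_step_phase(*frames)
unwrapped = np.unwrap(wrapped, axis=1)            # 1-D phase unwrapping along rows
print(np.allclose(unwrapped, phase_true, atol=1e-6))   # True: phase ramp recovered
```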

  10. 3D synthetic aperture imaging using a virtual source element in the elevation plane

    DEFF Research Database (Denmark)

    Nikolov, Svetoslav; Jensen, Jørgen Arendt

    2000-01-01

    The conventional scanning techniques are not directly extendable for 3D real-time imaging because of the time necessary to acquire one volume. Using a linear array and synthetic transmit aperture, the volume can be scanned plane by plane. Up to 1000 planes per second can be scanned for a typical scan depth of 15 cm and speed of sound of 1540 m/s. Only 70 to 90 planes must be acquired per volume, making this method suitable for real-time 3D imaging without compromising the image quality. The resolution in the azimuthal plane has the quality of a dynamically focused image in transmit and receive. However, the resolution in the elevation plane is determined by the fixed mechanical elevation focus. This paper suggests to post-focus the RF lines from several adjacent planes in the elevation direction using the elevation focal point of the transducer as a virtual source element, in order to obtain ...

  11. 3D structural analysis of proteins using electrostatic surfaces based on image segmentation

    Science.gov (United States)

    Vlachakis, Dimitrios; Champeris Tsaniras, Spyridon; Tsiliki, Georgia; Megalooikonomou, Vasileios; Kossida, Sophia

    2016-01-01

    Herein, we present a novel strategy to analyse and characterize proteins using protein molecular electrostatic surfaces. Our approach starts by calculating a series of distinct molecular surfaces for each protein that are subsequently flattened out, thus reducing 3D information noise. RGB images are appropriately scaled by means of standard image processing techniques whilst retaining the weight information of each protein’s molecular electrostatic surface. Then homogeneous areas in the protein surface are estimated based on unsupervised clustering of the 3D images, while performing similarity searches. This is a computationally fast approach, which efficiently highlights interesting structural areas among a group of proteins. Multiple protein electrostatic surfaces can be combined together and, in conjunction with their processed images, they can provide the starting material for protein structural similarity and molecular docking experiments.

  12. 3D imaging with an isocentric mobile C-arm. Comparison of image quality with spiral CT

    Energy Technology Data Exchange (ETDEWEB)

    Kotsianos, Dorothea; Wirth, Stefan; Fischer, Tanja; Euler, Ekkehard; Rock, Clemens; Linsenmaier, Ulrich; Pfeifer, Klaus Juergen; Reiser, Maximilian [Departments of Radiology and Surgery, Klinikum der Universitaet Muenchen, Innenstadt, Nussbaumstrasse 20, 80336, Munchen (Germany)

    2004-09-01

    The purpose of this study was to evaluate the image quality of the new 3D imaging system (ISO-C-3D) for osteosyntheses of tibial condylar fractures in comparison with spiral CT (CT). Sixteen human cadaveric knees were examined with a C-arm 3D imaging system and spiral computed tomography. Various screws and plates of steel and titanium were used for osteosynthesis in these specimens. Image quality and clinical value of multiplanar (MP) reformatting of both methods were analyzed. In addition, five patients with tibial condylar fractures were examined for diagnosis and intra-operative control. The image quality of the C-arm 3D imaging system in the cadaveric study was rated as significantly worse than that of spiral CT with and without prostheses. After implantation of prostheses an increased incidence of artifacts was observed, but the diagnostic accuracy was not affected. Titanium implants caused the smallest number of artifacts. The image quality of ISO-C is inferior to CT, and metal artifacts were more prominent, but the clinical value was equal. ISO-C-3D can be useful in planning operative reconstructions and can verify the reconstruction of articular surfaces and the position of implants with diagnostic image quality. (orig.)

  13. Filters in 2D and 3D Cardiac SPECT Image Processing

    Directory of Open Access Journals (Sweden)

    Maria Lyra

    2014-01-01

    Full Text Available Nuclear cardiac imaging is a noninvasive, sensitive method providing information on cardiac structure and physiology. Single photon emission tomography (SPECT) evaluates myocardial perfusion, viability, and function and is widely used in clinical routine. The quality of the tomographic image is key for accurate diagnosis. Image filtering, a mathematical processing step, compensates for loss of detail in an image while reducing image noise, and it can improve the image resolution and limit the degradation of the image. SPECT images are then reconstructed, either by the filtered back projection (FBP) analytical technique or iteratively, by algebraic methods. The aim of this study is to review filters in cardiac 2D, 3D, and 4D SPECT applications and how these affect the image quality, mirroring the diagnostic accuracy of SPECT images. Several filters, including the Hanning, Butterworth, and Parzen filters, were evaluated in combination with the two reconstruction methods as well as with a specified MATLAB program. Results showed that for both 3D and 4D cardiac SPECT the Butterworth filter, for different critical frequencies and orders, produced the best results. Between the two reconstruction methods, the iterative one might be more appropriate for cardiac SPECT, since it improves lesion detectability due to the significant improvement of image contrast.
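
    The Butterworth filter singled out above is, in the frequency domain, H(f) = 1 / (1 + (f/fc)^(2n)); a minimal sketch applying it to a 2D slice follows, with illustrative cut-off and order values rather than the clinically evaluated settings.

```python
import numpy as np

def butterworth_lowpass(image, cutoff=0.25, order=5):
    """Apply a Butterworth low-pass filter (cutoff in cycles/pixel) to a 2D image."""
    fy = np.fft.fftfreq(image.shape[0])[:, None]
    fx = np.fft.fftfreq(image.shape[1])[None, :]
    f = np.sqrt(fx**2 + fy**2)
    H = 1.0 / (1.0 + (f / cutoff) ** (2 * order))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))

# Toy SPECT-like slice: a smooth blob plus noise.
yy, xx = np.mgrid[-64:64, -64:64]
slice_ = np.exp(-(xx**2 + yy**2) / (2 * 20.0**2)) + 0.1 * np.random.randn(128, 128)
filtered = butterworth_lowpass(slice_, cutoff=0.15, order=5)
print(filtered.shape)
```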

  14. Automatic multi-image photo texturing of complex 3D scenes

    OpenAIRE

    Alshawabkeh, Yahya; Haala, Norbert

    2005-01-01

    The paper presents an approach for projective texture mapping from photographs onto triangulated surfaces from 3D laser scanning. By these means, the effort to generate photo-realistic models of complex shaped objects can be reduced considerably. The images are collected from multiple viewpoints, which do not necessarily correspond to the viewpoints of LIDAR data collection. In order to handle the resulting problem of occlusions, the visibility of the model areas in the respective images has ...

  15. Adobe Flash 11 Stage3D (Molehill) Game Programming Beginner's Guide

    CERN Document Server

    Kaitila, Christer

    2011-01-01

    Written in an informal and friendly manner, the style and approach of this book will take you on an exciting adventure. Piece by piece, detailed examples help you along the way by providing real-world game code required to make a complete 3D video game. Each chapter builds upon the experience and achievements earned in the last, culminating in the ultimate prize - your game! If you ever wanted to make your own 3D game in Flash, then this book is for you. This book is a perfect introduction to 3D game programming in Adobe Molehill for complete beginners. You do not need to know anything about S

  16. Ultra-wide-band 3D microwave imaging scanner for the detection of concealed weapons

    Science.gov (United States)

    Rezgui, Nacer-Ddine; Andrews, David A.; Bowring, Nicholas J.

    2015-10-01

    The threat of concealed weapons, explosives and contraband in footwear, bags and suitcases has led to the development of new devices which can be deployed for security screening. To address known deficiencies of metal detectors and X-rays, a UWB 3D microwave imaging scanning apparatus has been developed to screen suspicious luggage and footwear; it uses FMCW stepped-frequency waveforms in the K and Q bands with a planar scanning geometry based on an x-y stage. To obtain microwave images of the concealed weapons, the targets are placed above the platform and the single transceiver horn antenna attached to the x-y stage is moved mechanically to perform a raster scan, creating a 2D synthetic aperture array. The S11 reflection signal of the transmitted frequency sweep from the target is acquired by a VNA in synchronism with each position step. To clean the raw data of clutter and noise and to obtain the 2D and 3D microwave images of the concealed weapons or explosives, data processing techniques are applied to the acquired signals. These techniques include background subtraction, the Inverse Fast Fourier Transform (IFFT), thresholding, filtering by gating and windowing, and deconvolution with the transfer function of the system measured with a reference target. To focus the 3D reconstructed microwave image of the target in range and across the x-y aperture without using focusing elements, 3D Synthetic Aperture Radar (SAR) techniques are applied to the post-processed data. The K and Q bands, between 15 and 40 GHz, show good transmission through clothing and dielectric materials found in luggage and footwear. A description of the system and algorithms, and some results with replica guns, together with a comparison of microwave images obtained by IFFT, 2D and 3D SAR techniques, are presented.
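
    The IFFT step that turns stepped-frequency S11 data into a range profile can be sketched in a few lines; the sweep parameters and reflector ranges below are illustrative only and do not correspond to the actual scanner.

```python
import numpy as np

c = 3e8
f = np.linspace(15e9, 40e9, 501)              # stepped-frequency sweep (Hz)
B = f[-1] - f[0]

# Simulated S11 of two point reflectors at 0.30 m and 0.45 m (two-way phase delay).
ranges_true = [0.30, 0.45]
s11 = sum(np.exp(-1j * 4 * np.pi * f * R / c) for R in ranges_true)

# Range profile: window, IFFT, and map bins to range (resolution c / 2B, about 6 mm).
profile = np.abs(np.fft.ifft(s11 * np.hanning(len(f))))
range_axis = np.arange(len(f)) * c / (2 * B)
print(range_axis[np.argsort(profile)[-2:]])   # peaks near 0.30 m and 0.45 m
```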

  17. Using a wireless motion controller for 3D medical image catheter interactions

    Science.gov (United States)

    Vitanovski, Dime; Hahn, Dieter; Daum, Volker; Hornegger, Joachim

    2009-02-01

    State-of-the-art morphological imaging techniques usually provide high resolution 3D images with a huge number of slices. In clinical practice, however, 2D slice-based examinations are still the method of choice even for these large amounts of data. Providing intuitive interaction methods for specific 3D medical visualization applications is therefore a critical feature for clinical imaging applications. For the domain of catheter navigation and surgery planning, it is crucial to assist the physician with appropriate visualization techniques, such as 3D segmentation maps, fly-through cameras or virtual interaction approaches. Controllers that help users interact with 3D environments have undergone ongoing development and improvement in the domain of computer games. These controllers are based on both motion and infrared sensors and are typically used to detect 3D position and orientation. We have investigated how a state-of-the-art wireless motion sensor controller (Wiimote), developed by Nintendo, can be used for catheter navigation and planning purposes. By default, the Wiimote controller only measures coarse acceleration over a range of +/- 3 g with 10% sensitivity, together with orientation. Therefore, a pose estimation algorithm was developed for computing accurate position and orientation in 3D space with respect to 4 infrared LEDs. Current results show a mean translation error of (0.38 cm, 0.41 cm, 4.94 cm) and a mean rotation error of (0.16, 0.28). Within this paper we introduce a clinical prototype that allows steering of a virtual fly-through camera attached to the catheter tip by the Wii controller on the basis of a segmented vessel tree.
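
    The pose estimation described here recovers the controller's position and orientation from the image coordinates of 4 infrared LEDs. The paper's own algorithm is not detailed in the abstract; the sketch below shows one standard way to solve this perspective-n-point problem with OpenCV, where the LED layout, pixel coordinates and camera intrinsics are all assumed, illustrative values.

    ```python
    import numpy as np
    import cv2

    # Known 3D positions of the 4 infrared LEDs (metres) -- illustrative layout,
    # not the geometry used in the paper.
    led_model = np.array([[-0.1,  0.0, 0.0],
                          [ 0.1,  0.0, 0.0],
                          [ 0.0,  0.1, 0.0],
                          [ 0.0, -0.1, 0.0]], dtype=np.float32)

    # 2D LED centroids reported by the IR camera (sensor units), also illustrative
    led_pixels = np.array([[400.0, 380.0],
                           [620.0, 385.0],
                           [510.0, 270.0],
                           [512.0, 500.0]], dtype=np.float32)

    # Rough pinhole intrinsics for the IR camera (assumed values)
    K = np.array([[1280.0,    0.0, 512.0],
                  [   0.0, 1280.0, 384.0],
                  [   0.0,    0.0,   1.0]])

    ok, rvec, tvec = cv2.solvePnP(led_model, led_pixels, K, None)
    # rvec/tvec give the orientation and 3D position of the controller
    # relative to the LED reference frame.
    ```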

  18. Fast imaging of laboratory core floods using 3D compressed sensing RARE MRI.

    Science.gov (United States)

    Ramskill, N P; Bush, I; Sederman, A J; Mantle, M D; Benning, M; Anger, B C; Appel, M; Gladden, L F

    2016-09-01

    Three-dimensional (3D) imaging of the fluid distributions within the rock is essential to enable the unambiguous interpretation of core flooding data. Magnetic resonance imaging (MRI) has been widely used to image fluid saturation in rock cores; however, conventional acquisition strategies are typically too slow to capture the dynamic nature of the displacement processes that are of interest. Using Compressed Sensing (CS), it is possible to reconstruct a near-perfect image from significantly fewer measurements than was previously thought necessary, and this can result in a significant reduction in the image acquisition times. In the present study, a method using the Rapid Acquisition with Relaxation Enhancement (RARE) pulse sequence with CS to provide 3D images of the fluid saturation in rock core samples during laboratory core floods is demonstrated. An objective method using image quality metrics for the determination of the most suitable regularisation functional to be used in the CS reconstructions is reported. It is shown that for the present application, Total Variation outperforms the Haar and Daubechies3 wavelet families in terms of the agreement of their respective CS reconstructions with a fully-sampled reference image. Using the CS-RARE approach, 3D images of the fluid saturation in the rock core have been acquired in 16 min. The CS-RARE technique has been applied to image the residual water saturation in the rock during a water-water displacement core flood. With a flow rate corresponding to an interstitial velocity of vi = 1.89 ± 0.03 ft day-1, 0.1 pore volumes were injected over the course of each image acquisition, a four-fold reduction when compared to a fully-sampled RARE acquisition. Finally, the 3D CS-RARE technique has been used to image the drainage of dodecane into the water-saturated rock in which the dynamics of the coalescence of discrete clusters of the non-wetting phase are clearly observed. The enhancement in the temporal resolution that has

  19. Fast imaging of laboratory core floods using 3D compressed sensing RARE MRI

    Science.gov (United States)

    Ramskill, N. P.; Bush, I.; Sederman, A. J.; Mantle, M. D.; Benning, M.; Anger, B. C.; Appel, M.; Gladden, L. F.

    2016-09-01

    Three-dimensional (3D) imaging of the fluid distributions within the rock is essential to enable the unambiguous interpretation of core flooding data. Magnetic resonance imaging (MRI) has been widely used to image fluid saturation in rock cores; however, conventional acquisition strategies are typically too slow to capture the dynamic nature of the displacement processes that are of interest. Using Compressed Sensing (CS), it is possible to reconstruct a near-perfect image from significantly fewer measurements than was previously thought necessary, and this can result in a significant reduction in the image acquisition times. In the present study, a method using the Rapid Acquisition with Relaxation Enhancement (RARE) pulse sequence with CS to provide 3D images of the fluid saturation in rock core samples during laboratory core floods is demonstrated. An objective method using image quality metrics for the determination of the most suitable regularisation functional to be used in the CS reconstructions is reported. It is shown that for the present application, Total Variation outperforms the Haar and Daubechies3 wavelet families in terms of the agreement of their respective CS reconstructions with a fully-sampled reference image. Using the CS-RARE approach, 3D images of the fluid saturation in the rock core have been acquired in 16 min. The CS-RARE technique has been applied to image the residual water saturation in the rock during a water-water displacement core flood. With a flow rate corresponding to an interstitial velocity of vi = 1.89 ± 0.03 ft day-1, 0.1 pore volumes were injected over the course of each image acquisition, a four-fold reduction when compared to a fully-sampled RARE acquisition. Finally, the 3D CS-RARE technique has been used to image the drainage of dodecane into the water-saturated rock in which the dynamics of the coalescence of discrete clusters of the non-wetting phase are clearly observed. The enhancement in the temporal resolution
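
    The CS reconstruction compared in these two records amounts to minimising a data-consistency term on the sampled k-space points plus a Total Variation penalty. The toy sketch below solves a smoothed version of that problem by plain gradient descent on a single 2D slice; it is not the authors' reconstruction code, and the regularisation weight, step size and iteration count are arbitrary assumptions.

    ```python
    import numpy as np

    def cs_tv_recon(y, mask, lam=0.005, eps=1e-3, iters=200, step=0.5):
        """Toy compressed-sensing reconstruction from undersampled 2D k-space.

        y    : k-space data, zero-filled where not sampled (complex array)
        mask : boolean sampling mask, same shape as y
        Minimises ||F x - y||^2 on the sampled points plus lam * TV_eps(x)
        by plain gradient descent with a smoothed total-variation term.
        """
        x = np.real(np.fft.ifft2(y, norm="ortho"))               # zero-filled start
        for _ in range(iters):
            # data-consistency gradient: F^H (mask * (F x - y))
            resid = mask * (np.fft.fft2(x, norm="ortho") - y)
            g_data = np.real(np.fft.ifft2(resid, norm="ortho"))

            # smoothed-TV gradient: -div( grad(x) / |grad(x)|_eps )
            dx = np.diff(x, axis=1, append=x[:, -1:])
            dy = np.diff(x, axis=0, append=x[-1:, :])
            mag = np.sqrt(dx ** 2 + dy ** 2 + eps ** 2)
            g_tv = -(np.diff(dx / mag, axis=1, prepend=(dx / mag)[:, :1])
                     + np.diff(dy / mag, axis=0, prepend=(dy / mag)[:, :1]))

            x -= step * (g_data + lam * g_tv)
        return x

    # Example usage (illustrative): ~25% random sampling of a 128x128 image
    # img = ...; mask = np.random.rand(128, 128) < 0.25
    # y = mask * np.fft.fft2(img, norm="ortho")
    # recon = cs_tv_recon(y, mask)
    ```

    In practice dedicated solvers (primal-dual or ADMM schemes, for example) replace plain gradient descent, but the objective being minimised is of this form.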

  20. Sample based 3D face reconstruction from a single frontal image by adaptive locally linear embedding

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jian; ZHUANG Yue-ting

    2007-01-01

    In this paper, we propose a highly automatic approach for 3D photorealistic face reconstruction from a single frontal image. The key point of our work is the implementation of an adaptive manifold learning approach. Beforehand, an active appearance model (AAM) is trained for automatic feature extraction, and an adaptive locally linear embedding (ALLE) algorithm is utilized to reduce the dimensionality of the 3D database. Then, given an input frontal face image, the corresponding weights between 3D samples and the image are synthesized adaptively according to the AAM-selected facial features. Finally, geometry reconstruction is achieved by a linear weighted combination of adaptively selected samples. A radial basis function (RBF) is adopted to map facial texture from the frontal image to the reconstructed face geometry. The texture of the invisible regions between the face and the ears is interpolated by sampling from the frontal image. This approach has several advantages: (1) only a single frontal face image is needed for highly automatic face reconstruction; (2) compared with former works, our reconstruction approach provides higher accuracy; (3) constraint-based RBF texture mapping provides a natural appearance for the reconstructed face.
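
    The geometry step described here blends database samples with image-dependent weights. As a simplified stand-in for the adaptive locally linear embedding weighting, the sketch below blends the k nearest database samples with inverse-distance weights; the array shapes, feature distances and the choice of k are assumptions made purely for illustration.

    ```python
    import numpy as np

    def reconstruct_geometry(input_features, sample_features, sample_geometries, k=5):
        """Reconstruct a face mesh as a weighted combination of database samples.

        input_features   : (d,) feature vector extracted from the frontal image
        sample_features  : (n, d) feature vectors of the 3D database samples
        sample_geometries: (n, m, 3) vertex arrays of the corresponding 3D meshes
        The k nearest samples in feature space are blended with weights
        inversely proportional to their feature distance.
        """
        dists = np.linalg.norm(sample_features - input_features, axis=1)
        nearest = np.argsort(dists)[:k]
        w = 1.0 / (dists[nearest] + 1e-8)
        w /= w.sum()                                    # normalised blend weights
        return np.tensordot(w, sample_geometries[nearest], axes=1)
    ```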

  1. Generic camera model and its calibration for computational integral imaging and 3D reconstruction.

    Science.gov (United States)

    Li, Weiming; Li, Youfu

    2011-03-01

    Integral imaging (II) is an important 3D imaging technology. To reconstruct 3D information of the viewed objects, modeling and calibrating the optical pickup process of II are necessary. This work focuses on the modeling and calibration of an II system consisting of a lenslet array, an imaging lens, and a charge-coupled device camera. Most existing work on such systems assumes a pinhole array model (PAM). In this work, we explore a generic camera model that accommodates more generality. This model is an empirical model based on measurements, and we constructed a setup for its calibration. Experimental results show a significant difference between the generic camera model and the PAM. Images of planar patterns and 3D objects were computationally reconstructed with the generic camera model. Compared with the images reconstructed using the PAM, the images present higher fidelity and preserve more high spatial frequency components. To the best of our knowledge, this is the first attempt in applying a generic camera model to an II system.

  2. SU-E-J-45: The Correlation Between CBCT Flat Panel Misalignment and 3D Image Guidance Accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Kenton, O; Valdes, G; Yin, L; Teo, B [The Hospital of the University of Pennsylvania, Philadelphia, PA (United States); Brousmiche, S; Wikler, D [Ion Beam Application, Louvain-la-neuve (Belgium)

    2015-06-15

    Purpose: To simulate the impact of CBCT flat panel misalignment on image quality and on the calculated correction vectors in 3D image guided proton therapy, and to determine whether these calibration errors can be caught in our QA process. Methods: The X-ray source and detector geometrical calibration (flexmap) file of the CBCT system in the AdaPTinsight software (IBA proton therapy) was edited to induce known changes in the rotational and translational calibrations of the imaging panel. Translations of up to ±10 mm in the x, y and z directions (see supplemental) and rotational errors of up to ±3° were induced. The calibration files were then used to reconstruct the CBCT images of a pancreatic patient and a CatPhan phantom. Correction vectors were calculated for the patient using the software's auto-match system and compared to baseline values. The CatPhan CBCT images were used for quantitative evaluation of image quality for each type of induced error. Results: Translations of 1 to 3 mm in the x and y calibration resulted in corresponding correction vector errors of equal magnitude. Similar shifts were seen for 10 mm errors in the y-direction; however, in the x-direction the image quality was too degraded for a match. These translational errors can be identified through differences in isocenter from orthogonal kV images taken during routine QA. Errors in the z-direction had no effect on the correction vector or image quality. Rotations of the imaging panel calibration resulted in corresponding rotations of the correction vectors for the patient images. These rotations also degraded image quality, which can be identified through quantitative image quality metrics. Conclusion: Misalignment of the CBCT geometry can lead to incorrect translational and rotational patient correction vectors. These errors can be identified through QA of the imaging isocenter as compared to orthogonal images, combined with monitoring of CBCT image quality.
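
    The QA idea in this record is that a panel-calibration error shows up as a drift of the auto-match correction vector away from its baseline. A minimal sketch of such a tolerance check is given below; the baseline vector, the 1 mm tolerance and the function name are invented for illustration and are not values from the study.

    ```python
    import numpy as np

    # Baseline translational correction vector (mm) from the reference CBCT match
    baseline = np.array([0.4, -0.2, 0.1])

    def check_correction_vector(measured, baseline, tol_mm=1.0):
        """Flag a possible panel-calibration error when the auto-match
        correction vector drifts from baseline by more than tol_mm
        on any translational axis (illustrative QA tolerance)."""
        delta = np.abs(np.asarray(measured, dtype=float) - baseline)
        return bool(np.any(delta > tol_mm)), delta

    # An induced 3 mm x-calibration error shows up as ~3 mm drift in x
    flagged, drift = check_correction_vector([3.4, -0.2, 0.1], baseline)
    ```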

  3. Hand-guided 3D surface acquisition by combining simple light sectioning with real-time algorithms

    CERN Document Server

    Arold, Oliver; Willomitzer, Florian; Häusler, Gerd

    2014-01-01

    Precise 3D measurements of rigid surfaces are desired in many fields of application, such as quality control or surgery. Often, views from all around the object have to be acquired for a full 3D description of the object surface. We present a sensor principle called "Flying Triangulation" which avoids an elaborate "stop-and-go" procedure. It combines a low-cost classical light-section sensor with an algorithmic pipeline. A hand-guided sensor captures a continuous movie of 3D views while being moved around the object. The views are automatically aligned and the acquired 3D model is displayed in real time. In contrast to most existing sensors, no bandwidth is wasted on spatial or temporal encoding of the projected lines, nor is an expensive color camera necessary for 3D acquisition. The achievable measurement uncertainty and lateral resolution of the generated 3D data are limited only by physics. An alternating projection of vertical and horizontal lines guarantees the existence of corresponding points in successi...
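
    The real-time alignment of successive 3D views mentioned here is, at its core, a rigid registration of corresponding point sets. The abstract does not spell out the authors' registration method; the sketch below shows the standard closed-form (Kabsch/SVD) solution that such pipelines commonly build on, assuming the point correspondences between two views are already known.

    ```python
    import numpy as np

    def rigid_align(src, dst):
        """Least-squares rigid alignment (Kabsch) of corresponding 3D points.

        src, dst: (n, 3) arrays of matched points from two successive views.
        Returns R (3x3) and t (3,) such that R @ src.T + t[:, None] ~ dst.T.
        """
        c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
        H = (src - c_src).T @ (dst - c_dst)              # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
        R = Vt.T @ D @ U.T
        t = c_dst - R @ c_src
        return R, t
    ```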

  4. Influence of 3D Guide Vanes on the Channel Vortices in the Runner of a Francis Turbine

    Science.gov (United States)

    Liu, Shuhong; Zhang, Liang; Wu, Yulin; Luo, Xianwu; Nishi, Michihiro

    It is known that the pressure fluctuation in the runner becomes large when a Francis turbine operates at low flow rate and high head. One of the reasons is the occurrence of channel vortices, which are caused by three-dimensional flow separating from the suction side of the runner blades. In this study, two three-dimensional guide vanes (1# 3D GV, 2# 3D GV) are designed so as to suppress the channel vortices and improve the operating performance of a Francis turbine. The flow rate equation for the 3D GV is first derived in this paper. Then, in order to show the influence of the 3D GV on the turbine characteristics, performance tests and video recording of the channel vortices are conducted. Finally, numerical simulation of a part-load operating point is applied to the entire turbine flow passage (from the inlet of the spiral case to the outlet of the draft tube) using the RNG k-ɛ turbulence model to clarify the three-dimensional internal flow. From evaluation of both the experimental and computational results, it is noted that the channel vortices from the blade suction side were suppressed effectively by the 3D GV, and the turbine efficiency with the 3D GV was 0.41% higher than that with the conventional 2D GV.

  5. Novel entropy coding and its application of the compression of 3D image and video signals

    OpenAIRE

    Amal, Mehanna

    2013-01-01

    This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. The broadcast industry is moving future digital television towards super-high-resolution TV (4k or 8k) and/or 3D TV. This will ultimately increase the demand on data rate and, subsequently, the demand for highly efficient codecs. One of the technologies that researchers consider promising for the industry over the next few years is 3D Integral Image and Video, due to ...

  6. Joint Applied Optics and Chinese Optics Letters Feature Introduction: Digital Holography and 3D Imaging

    Institute of Scientific and Technical Information of China (English)

    Ting-Chung Poon; Changhe Zhou; Toyohiko Yatagai; Byoungho Lee; Hongchen Zhai

    2011-01-01

    This feature issue is the fifth installment on digital holography since its inception four years ago. The last four issues were published after the conclusion of each Topical Meeting on "Digital Holography and 3D Imaging (DH)." This feature issue, however, includes a new key feature: it is a joint Applied Optics and Chinese Optics Letters feature issue. The DH Topical Meeting is the world's premier forum for disseminating the science and technology geared towards digital holography and 3D information processing. Since the meeting's inception in 2007, it has grown steadily to 130 presentations this year, held in Tokyo, Japan, in May 2011.

  7. Dual array 3D electron cyclotron emission imaging at ASDEX Upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Classen, I. G. J., E-mail: I.G.J.Classen@differ.nl; Bogomolov, A. V. [FOM-Institute DIFFER, Dutch Institute for Fundamental Energy Research, 3430 BE Nieuwegein (Netherlands); Domier, C. W.; Luhmann, N. C. [Department of Applied Science, University of California at Davis, Davis, California 95616 (United States); Suttrop, W.; Boom, J. E. [Max-Planck-Institut für Plasmaphysik, Boltzmannstraße 2, 85748 Garching (Germany); Tobias, B. J. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08540 (United States); Donné, A. J. H. [FOM-Institute DIFFER, Dutch Institute for Fundamental Energy Research, 3430 BE Nieuwegein (Netherlands); Department of Applied Physics, Eindhoven University of Technology, 5600 MB Eindhoven (Netherlands)

    2014-11-15

    In a major upgrade, the (2D) electron cyclotron emission imaging diagnostic (ECEI) at ASDEX Upgrade has been equipped with a second detector array, observing a different toroidal position in the plasma, to enable quasi-3D measurements of the electron temperature. The new system will measure a total of 288 channels, in two 2D arrays, toroidally separated by 40 cm. The two detector arrays observe the plasma through the same vacuum window, both under a slight toroidal angle. The majority of the field lines are observed by both arrays simultaneously, thereby enabling a direct measurement of the 3D properties of plasma instabilities like edge localized mode filaments.

  8. Estimating Fiber Orientation Distribution Functions in 3D-Polarized Light Imaging.

    Science.gov (United States)

    Axer, Markus; Strohmer, Sven; Gräßel, David; Bücker, Oliver; Dohmen, Melanie; Reckfort, Julia; Zilles, Karl; Amunts, Katrin

    2016-01-01

    Research of the human brain connectome requires multiscale approaches derived from independent imaging methods ideally applied to the same object. Hence, comprehensible strategies for data integration across modalities and across scales are essential. We have successfully established a concept to bridge the spatial scales from microscopic fiber orientation measurements based on 3D-Polarized Light Imaging (3D-PLI) to meso- or macroscopic dimensions. By creating orientation distribution functions (pliODFs) from high-resolution vector data via series expansion with spherical harmonics utilizing high performance computing and supercomputing technologies, data fusion with Diffusion Magnetic Resonance Imaging has become feasible, even for a large-scale dataset such as the human brain. Validation of our approach was done effectively by means of two types of datasets that were transferred from fiber orientation maps into pliODFs: simulated 3D-PLI data showing artificial, but clearly defined fiber patterns and real 3D-PLI data derived from sections through the human brain and the brain of a hooded seal.

  9. Estimating Fiber Orientation Distribution Functions in 3D-Polarized Light Imaging.

    Science.gov (United States)

    Axer, Markus; Strohmer, Sven; Gräßel, David; Bücker, Oliver; Dohmen, Melanie; Reckfort, Julia; Zilles, Karl; Amunts, Katrin

    2016-01-01

    Research of the human brain connectome requires multiscale approaches derived from independent imaging methods ideally applied to the same object. Hence, comprehensible strategies for data integration across modalities and across scales are essential. We have successfully established a concept to bridge the spatial scales from microscopic fiber orientation measurements based on 3D-Polarized Light Imaging (3D-PLI) to meso- or macroscopic dimensions. By creating orientation distribution functions (pliODFs) from high-resolution vector data via series expansion with spherical harmonics utilizing high performance computing and supercomputing technologies, data fusion with Diffusion Magnetic Resonance Imaging has become feasible, even for a large-scale dataset such as the human brain. Validation of our approach was done effectively by means of two types of datasets that were transferred from fiber orientation maps into pliODFs: simulated 3D-PLI data showing artificial, but clearly defined fiber patterns and real 3D-PLI data derived from sections through the human brain and the brain of a hooded seal. PMID:27147981

  10. Estimating fiber orientation distribution functions in 3D-Polarized Light Imaging

    Directory of Open Access Journals (Sweden)

    Markus eAxer

    2016-04-01

    Research of the human brain connectome requires multiscale approaches derived from independent imaging methods ideally applied to the same object. Hence, comprehensible strategies for data integration across modalities and across scales are essential. We have successfully established a concept to bridge the spatial scales from microscopic fiber orientation measurements based on 3D-Polarized Light Imaging (3D-PLI) to meso- or macroscopic dimensions. By creating orientation distribution functions (pliODFs) from high-resolution vector data via series expansion with spherical harmonics utilizing high performance computing and supercomputing technologies, data fusion with Diffusion Magnetic Resonance Imaging has become feasible, even for a large-scale dataset such as the human brain. Validation of our approach was done effectively by means of two types of datasets that were transferred from fiber orientation maps into pliODFs: simulated 3D-PLI data showing artificial, but clearly defined fiber patterns and real 3D-PLI data derived from sections through the human brain and the brain of a hooded seal.
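
    The pliODF construction summarized in the three records above expands the local distribution of fiber-orientation vectors into a band-limited spherical-harmonic series. The sketch below estimates such coefficients from pooled unit vectors using SciPy's spherical-harmonic basis; the band limit, the use of complex harmonics and the restriction to even bands are simplifying assumptions, not details taken from the paper.

    ```python
    import numpy as np
    from scipy.special import sph_harm

    def pliodf_coefficients(directions, l_max=6):
        """Sketch of an ODF via spherical-harmonic series expansion.

        directions: (n, 3) unit fiber-orientation vectors pooled from the
        high-resolution voxels inside one super-voxel. Returns a dict of
        complex SH coefficients c[(l, m)] of the empirical direction density.
        """
        x, y, z = directions.T
        azimuth = np.arctan2(y, x)              # scipy's azimuthal angle "theta"
        polar = np.arccos(np.clip(z, -1, 1))    # scipy's polar angle "phi"
        coeffs = {}
        for l in range(0, l_max + 1, 2):        # even bands for antipodally symmetric ODFs
            for m in range(-l, l + 1):
                basis = sph_harm(m, l, azimuth, polar)
                coeffs[(l, m)] = np.conj(basis).mean()   # Monte Carlo projection
        return coeffs
    ```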

  11. Integration of 3D anatomical data obtained by CT imaging and 3D optical scanning for computer aided implant surgery

    OpenAIRE

    Paoli Alessandro; Barone Sandro; Chessa Giacomo; Frisardi Gianni; Razionale Armando; Frisardi Flavio

    2011-01-01

    Background: A precise placement of dental implants is a crucial step to optimize both prosthetic aspects and functional constraints. In this context, the use of virtual guiding systems has been recognized as a fundamental tool to control the ideal implant position. In particular, complex periodontal surgeries can be performed using preoperative planning based on CT data. The critical point of the procedure lies in the lack of accuracy in transferring CT planning information to surgi...

  12. 3D-imaging of the knee with an optimized 3D-FSE-sequence and a 15-channel knee-coil

    International Nuclear Information System (INIS)

    Objectives: To evaluate the clinical usefulness of an optimized 3D-Fast-Spin-Echo-sequence (3D-SPACE) in combination with a 15-channel knee-coil for 3D-imaging of the knee at 3 T. Methods: 15 volunteers and 50 consecutive patients were examined at 3 T with fat-saturated moderately T2-weighted 3D-SPACE (Voxel-size (VS): 0.6 mm × 0.5 mm × 0.5 mm/acquisition-time (AT) 10:44 min) using a 15-channel knee-