WorldWideScience

Sample records for 3d volume imaging

  1. Extended gray level co-occurrence matrix computation for 3D image volume

    Science.gov (United States)

    Salih, Nurulazirah M.; Dewi, Dyah Ekashanti Octorina

    2017-02-01

    Gray Level Co-occurrence Matrix (GLCM) is one of the main techniques for texture analysis and has been widely used in many applications. Conventional GLCMs usually focus on two-dimensional (2D) image texture analysis only. However, a three-dimensional (3D) image volume requires specific texture analysis computation. In this paper, an extended 2D to 3D GLCM approach based on the concept of multiple 2D plane positions and pixel orientation directions in the 3D environment is proposed. The algorithm was implemented by breaking down the 3D image volume into 2D slices based on five different plane positions (coordinate axes and oblique axes), resulting in 13 independent directions, and then calculating the GLCMs. The resulting GLCMs were averaged to obtain normalized values, and then the 3D texture features were calculated. A preliminary examination was performed on a 3D image volume (64 x 64 x 64 voxels). Our analysis confirmed that the proposed technique is capable of extracting 3D texture features from the extended GLCM approach. It is a simple and comprehensive technique that can contribute to 3D image analysis.
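
    A minimal sketch of the extended GLCM idea described above, assuming a quantised grey-level volume and the 13 unique 3D neighbour directions; the function names, the number of grey levels and the contrast feature are illustrative choices, not the authors' implementation.

```python
import numpy as np

# the 13 unique displacement vectors covering the 26-neighbourhood in 3D
# (each direction and its opposite contribute to the same co-occurrence matrix)
OFFSETS = [(0, 0, 1), (0, 1, 0), (1, 0, 0),
           (0, 1, 1), (0, 1, -1), (1, 0, 1), (1, 0, -1), (1, 1, 0), (1, -1, 0),
           (1, 1, 1), (1, 1, -1), (1, -1, 1), (1, -1, -1)]

def shifted_pairs(q, dz, dy, dx):
    """Voxel pairs (q[p], q[p + offset]) that stay inside the volume."""
    z0, z1 = max(0, -dz), q.shape[0] - max(0, dz)
    y0, y1 = max(0, -dy), q.shape[1] - max(0, dy)
    x0, x1 = max(0, -dx), q.shape[2] - max(0, dx)
    a = q[z0:z1, y0:y1, x0:x1]
    b = q[z0 + dz:z1 + dz, y0 + dy:y1 + dy, x0 + dx:x1 + dx]
    return a.ravel(), b.ravel()

def glcm_3d(volume, levels=8, distance=1):
    """Averaged, normalised GLCM over the 13 independent 3D directions."""
    vmin, vmax = float(volume.min()), float(volume.max())
    q = ((volume - vmin) / (vmax - vmin + 1e-12) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    for dz, dy, dx in OFFSETS:
        a, b = shifted_pairs(q, dz * distance, dy * distance, dx * distance)
        h, _, _ = np.histogram2d(a, b, bins=levels, range=[[0, levels], [0, levels]])
        glcm += h + h.T                      # symmetric co-occurrence counts
    return glcm / glcm.sum()                 # joint probability matrix

def contrast(glcm):
    """Example 3D texture feature computed from the averaged GLCM."""
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm * (i - j) ** 2))

feature_value = contrast(glcm_3d(np.random.rand(64, 64, 64)))
```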

  2. GPU-Based 3D Cone-Beam CT Image Reconstruction for Large Data Volume

    Directory of Open Access Journals (Sweden)

    Xing Zhao

    2009-01-01

    Full Text Available Currently, 3D cone-beam CT image reconstruction speed is still a severe limitation for clinical application. The computational power of modern graphics processing units (GPUs) has been harnessed to provide impressive acceleration of 3D volume image reconstruction. For extra large data volumes exceeding the physical graphics memory of the GPU, a straightforward compromise is to divide the data volume into blocks. Different from the conventional Octree partition method, a new partition scheme is proposed in this paper. This method divides both the projection data and the reconstructed image volume into subsets according to geometric symmetries in the circular cone-beam projection layout; a fast reconstruction for large data volumes can then be implemented by packing the subsets of projection data into the RGBA channels of the GPU, performing the reconstruction chunk by chunk and combining the individual results in the end. The method is evaluated by reconstructing 3D images from computer-simulation data and real micro-CT data. Our results indicate that the GPU implementation can maintain original precision and speed up the reconstruction process by 110–120 times for circular cone-beam scans, compared to a traditional CPU implementation.

  3. Crest lines extraction in volume 3D medical images : a multi-scale approach

    OpenAIRE

    Monga, Olivier; Lengagne, Richard; Deriche, Rachid

    1994-01-01

    Projet SYNTIM; Recently, we have shown that the differential properties of the surfaces represented by 3D volume images can be recovered using their partial derivatives. For instance, the crest lines can be characterized by the first, second and third partial derivatives of the grey level function $I(x,y,z)$. In this paper, we show that: (i) the computation of the partial derivatives of an image can be improved using recursive filters which approximate the Gaussian filter, (ii) a multi-scale app...
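
    A minimal sketch of the derivative-computation step, assuming the grey-level volume I(x,y,z) is held in a NumPy array; it uses scipy's FIR Gaussian derivative filters rather than the recursive (IIR) Gaussian approximation discussed in the abstract, so it illustrates the quantities involved, not the authors' filters.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def partial_derivative(volume, orders, sigma=1.5):
    """Partial derivative of the grey-level function I(x, y, z).

    `orders` gives the derivative order along each axis, e.g. (0, 0, 1)
    for dI/dx or (0, 1, 1) for the mixed second derivative d2I/(dy dx).
    """
    return gaussian_filter(volume.astype(float), sigma=sigma, order=orders)

I = np.random.rand(64, 64, 64)            # stand-in for a 3D medical image
Ix = partial_derivative(I, (0, 0, 1))
Iy = partial_derivative(I, (0, 1, 0))
Iz = partial_derivative(I, (1, 0, 0))
gradient_magnitude = np.sqrt(Ix**2 + Iy**2 + Iz**2)
```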

  4. Volume estimation of tonsil phantoms using an oral camera with 3D imaging.

    Science.gov (United States)

    Das, Anshuman J; Valdez, Tulio A; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C; Raskar, Ramesh

    2016-04-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which only provide a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a handheld portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, where the tonsils become enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate the airway obstruction percentage and the volume of the tonsils in 3D printed realistic models. Our results correlate well with Brodsky's classification of tonsillar hypertrophy as well as with intraoperative volume estimations.

  5. Volume estimation of tonsil phantoms using an oral camera with 3D imaging

    Science.gov (United States)

    Das, Anshuman J.; Valdez, Tulio A.; Vargas, Jose Arbouin; Saksupapchon, Punyapat; Rachapudi, Pushyami; Ge, Zhifei; Estrada, Julio C.; Raskar, Ramesh

    2016-01-01

    Three-dimensional (3D) visualization of oral cavity and oropharyngeal anatomy may play an important role in the evaluation for obstructive sleep apnea (OSA). Although computed tomography (CT) and magnetic resonance imaging (MRI) are capable of providing 3D anatomical descriptions, this type of technology is not readily available in a clinic setting. Current imaging of the oropharynx is performed using a light source and tongue depressors. For better assessment of the inferior pole of the tonsils and the tongue base, flexible laryngoscopes are required, which only provide a two-dimensional (2D) rendering. As a result, clinical diagnosis is generally subjective in tonsillar hypertrophy, where current physical examination has limitations. In this report, we designed a handheld portable oral camera with 3D imaging capability to reconstruct the anatomy of the oropharynx in tonsillar hypertrophy, where the tonsils become enlarged and can lead to increased airway resistance. We were able to precisely reconstruct the 3D shape of the tonsils and from that estimate the airway obstruction percentage and the volume of the tonsils in 3D printed realistic models. Our results correlate well with Brodsky's classification of tonsillar hypertrophy as well as with intraoperative volume estimations. PMID:27446667

  6. 3D colour visualization of label images using volume rendering techniques.

    Science.gov (United States)

    Vandenhouten, R; Kottenhoff, R; Grebe, R

    1995-01-01

    Volume rendering methods for the visualization of 3D image data sets have been developed and collected in a C library. The core algorithm consists of a perspective ray casting technique for a natural and realistic view of the 3D scene. New edge operator shading methods are employed for a fast and information preserving representation of surfaces. Control parameters of the algorithm can be tuned to have either smoothed surfaces or a very detailed rendering of the geometrical structure. Different objects can be distinguished by different colours. Shadow ray tracing has been implemented to improve the realistic impression of the 3D image. For a simultaneous representation of objects in different depths, hiding each other, two types of transparency mode are used (wireframe and glass transparency). Single objects or groups of objects can be excluded from the rendering (peeling). Three orthogonal cutting planes or one arbitrarily placed cutting plane can be applied to the rendered objects in order to get additional information about inner structures, contours, and relative positions.
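
    As a rough illustration of rendering a label image, here is a heavily simplified, orthographic "first-hit" projection in which every label receives its own colour; the library described above uses perspective ray casting with shading, shadows and transparency, so this sketch only conveys the basic idea, and all names are illustrative.

```python
import numpy as np

def first_hit_rendering(labels, colours, axis=0):
    """Orthographic first-hit projection of a 3D label volume.

    `labels` is an integer volume (0 = background); `colours` maps each
    label to an RGB triple.  Each ray is a line of voxels along `axis`;
    the colour of the first non-background voxel it meets is returned.
    """
    vol = np.moveaxis(labels, axis, 0)
    depth = np.argmax(vol > 0, axis=0)                 # first non-zero index per ray
    hit = (vol > 0).any(axis=0)
    first_label = np.take_along_axis(vol, depth[None], axis=0)[0]
    image = np.zeros(first_label.shape + (3,))
    for lab, rgb in colours.items():
        image[(first_label == lab) & hit] = rgb
    return image

# two labelled spheres rendered in different colours
z, y, x = np.indices((64, 64, 64))
labels = np.zeros((64, 64, 64), dtype=int)
labels[(z - 20)**2 + (y - 32)**2 + (x - 32)**2 < 100] = 1
labels[(z - 44)**2 + (y - 32)**2 + (x - 32)**2 < 64] = 2
rgb_image = first_hit_rendering(labels, {1: (1.0, 0.0, 0.0), 2: (0.0, 0.0, 1.0)})
```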

  7. Quantification of cerebral ventricle volume change of preterm neonates using 3D ultrasound images

    Science.gov (United States)

    Chen, Yimin; Kishimoto, Jessica; Qiu, Wu; de Ribaupierre, Sandrine; Fenster, Aaron; Chiu, Bernard

    2015-03-01

    Intraventricular hemorrhage (IVH) is a major cause of brain injury in preterm neonates. Quantitative measurement of ventricular dilation or shrinkage is important for monitoring patients and in the evaluation of treatment options. 3D ultrasound (US) has been used to monitor the ventricle volume as a biomarker for ventricular dilation. However, volumetric quantification does not provide information as to where dilation occurs. The location where dilation occurs may be related to specific neurological problems later in life. For example, posterior horn enlargement, with thinning of the corpus callosum and parietal white matter fibres, could be linked to poor visuo-spatial abilities seen in hydrocephalic children. In this work, we report on the development and application of a method used to analyze local surface change of the ventricles of preterm neonates with IVH from 3D US images. The technique is evaluated using manual segmentations from 3D US images acquired in two imaging sessions. The surfaces from baseline and follow-up were registered and then matched on a point-by-point basis. The distance between each pair of corresponding points served as an estimate of local surface change of the brain ventricle at each vertex. The measurements of local surface change were then superimposed on the ventricle surface to produce a 3D local surface change map that provides information on the spatio-temporal dilation pattern of brain ventricles following IVH. This tool can be used to monitor responses to different treatment options, and may provide important information for elucidating the deficiencies a patient will have later in life.
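
    A minimal sketch of the point-by-point distance step, assuming the baseline and follow-up ventricle surfaces are already registered and available as vertex arrays; nearest-neighbour matching with a k-d tree stands in for the paper's correspondence scheme, and all names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_surface_change(baseline_pts, followup_pts):
    """Distance from every baseline vertex to the closest follow-up vertex.

    Both inputs are (N, 3) arrays of surface points in a common coordinate
    frame; the returned values can be colour-coded on the baseline surface
    to form a local surface-change map.
    """
    distances, _ = cKDTree(followup_pts).query(baseline_pts)
    return distances

# toy example: a 10 mm sphere that dilated to ~11 mm between the two scans
rng = np.random.default_rng(0)
dirs = rng.normal(size=(2000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
change_map = local_surface_change(10.0 * dirs, 11.0 * dirs)   # ~1 mm everywhere
```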

  8. Speaking Volumes About 3-D

    Science.gov (United States)

    2002-01-01

    In 1999, Genex submitted a proposal to Stennis Space Center for a volumetric 3-D display technique that would provide multiple users with a 360-degree perspective to simultaneously view and analyze 3-D data. The futuristic capabilities of the VolumeViewer(R) have offered tremendous benefits to commercial users in the fields of medicine and surgery, air traffic control, pilot training and education, computer-aided design/computer-aided manufacturing, and military/battlefield management. The technology has also helped NASA to better analyze and assess the various data collected by its satellite and spacecraft sensors. Genex capitalized on its success with Stennis by introducing two separate products to the commercial market that incorporate key elements of the 3-D display technology designed under an SBIR contract. The company's Rainbow 3D(R) imaging camera is a novel, three-dimensional surface profile measurement system that can obtain a full-frame 3-D image in less than 1 second. The third product is the 360-degree OmniEye(R) video system. Ideal for intrusion detection, surveillance, and situation management, this unique camera system offers a continuous, panoramic view of a scene in real time.

  9. 3D photoacoustic imaging

    Science.gov (United States)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to background. That is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
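
    A toy numerical sketch of the two analyses described above, using a small random matrix as a stand-in for the real imaging operator: the singular value spectrum indicates how many object components are measurable with 15 detector elements, and a basic ISTA solver plays the role of the l1-norm reconstruction that favours sparse objects. All sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(15, 100))            # toy imaging operator: 15 detectors, 100 voxels
x_true = np.zeros(100)
x_true[[12, 57, 83]] = 1.0                # sparse object: three point targets
y = A @ x_true

# (i) singular value decomposition of the imaging operator
s = np.linalg.svd(A, compute_uv=False)
n_measurable = int(np.sum(s > 1e-8 * s[0]))

# (ii) algebraic (least-squares) vs l1-regularised reconstruction
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]

def ista(A, y, lam=0.05, n_iter=500):
    """Iterative shrinkage-thresholding for min ||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - (A.T @ (A @ x - y)) / L
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
    return x

x_l1 = ista(A, y)                          # recovers the point targets far better than x_ls
```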

  10. Advanced 3-D Ultrasound Imaging

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer

    The main purpose of the PhD project was to develop methods that increase the 3-D ultrasound imaging quality available for the medical personnel in the clinic. Acquiring a 3-D volume gives the medical doctor the freedom to investigate the measured anatomy in any slice desirable after the scan has...... been completed. This allows for precise measurements of organs dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinics as 2-D imaging. A limiting factor has traditionally been the low image quality achievable using...... Field II simulations and measurements with the ultrasound research scanner SARUS and a 3.5MHz 1024 element 2-D transducer array. In all investigations, 3-D synthetic aperture imaging achieved a smaller main-lobe, lower sidelobes, higher contrast, and better signal to noise ratio than parallel...

  11. Chest-wall segmentation in automated 3D breast ultrasound images using thoracic volume classification

    Science.gov (United States)

    Tan, Tao; van Zelst, Jan; Zhang, Wei; Mann, Ritse M.; Platel, Bram; Karssemeijer, Nico

    2014-03-01

    Computer-aided detection (CAD) systems are expected to improve the effectiveness and efficiency of radiologists in reading automated 3D breast ultrasound (ABUS) images. One challenging task in developing CAD is to reduce the large number of false positives. A large proportion of false positives originate from acoustic shadowing caused by ribs. Therefore, determining the location of the chest wall in ABUS is necessary in CAD systems to remove these false positives. Additionally, it can be used as an anatomical landmark for inter- and intra-modal image registration. In this work, we extended our previously developed chest-wall segmentation method, which fits a cylinder to automatically detected rib-surface points, by minimizing a cost function that incorporates a region-cost term computed from a thoracic volume classifier to improve segmentation accuracy. We examined the performance on a dataset of 52 images where our previously developed method failed. Using the region-based cost, the average mean distance of the annotated points to the segmented chest wall decreased from 7.57±2.76 mm to 6.22±2.86 mm.

  12. Volume change determination of metastatic lung tumors in CT images using 3-D template matching

    Science.gov (United States)

    Ambrosini, Robert D.; Wang, Peng; O'Dell, Walter G.

    2009-02-01

    The ability of a clinician to properly detect changes in the size of lung nodules over time is a vital element to both the diagnosis of malignant growths and the monitoring of the response of cancerous lesions to therapy. We have developed a novel metastasis sizing algorithm based on 3-D template matching with spherical tumor appearance models that were created to match the expected geometry of the tumors of interest while accounting for potential spatial offsets of nodules in the slice thickness direction. The spherical template that best fits the overall volume of each lung metastasis was determined through the optimization of the 3-D normalized cross-correlation coefficients (NCCC) calculated between the templates and the nodules. A total of 17 different lung metastases were extracted manually from real patient CT datasets and reconstructed in 3-D using spherical harmonics equations to generate simulated nodules for testing our algorithm. Each metastasis 3-D shape was then subjected to 10%, 25%, 50%, 75% and 90% scaling of its volume to allow for 5 possible volume change combinations relative to the original size for each reconstructed nodule, and inserted back into CT datasets with appropriate blurring and noise addition. When plotted against the true volume change, the nodule volume changes calculated by our algorithm for these 85 data points exhibited a high degree of accuracy (slope = 0.9817, R2 = 0.9957). Our results demonstrate that the 3-D template matching method can be an effective, fast, and accurate tool for automated sizing of metastatic tumors.
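
    A minimal sketch of the spherical-template fitting idea, assuming a cubic region of interest around the nodule and a set of candidate radii; the best-fitting radius is the one maximising the normalised cross-correlation coefficient, and a volume change follows from the ratio of the best-fit radii at two time points. All names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def sphere_template(shape, radius, blur=1.0):
    """Spherical tumour appearance model, lightly blurred to mimic the imaging PSF."""
    c = (np.array(shape) - 1) / 2.0
    z, y, x = np.indices(shape)
    t = (((z - c[0])**2 + (y - c[1])**2 + (x - c[2])**2) <= radius**2).astype(float)
    return gaussian_filter(t, blur)

def ncc(a, b):
    """3-D normalised cross-correlation coefficient between two equally sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_fit_radius(nodule_roi, radii):
    scores = [ncc(nodule_roi, sphere_template(nodule_roi.shape, r)) for r in radii]
    return float(radii[int(np.argmax(scores))])

# synthetic nodule of radius 6 voxels plus noise, and a follow-up grown to radius 7
roi_1 = sphere_template((31, 31, 31), 6) + 0.05 * np.random.rand(31, 31, 31)
roi_2 = sphere_template((31, 31, 31), 7) + 0.05 * np.random.rand(31, 31, 31)
radii = np.arange(3.0, 12.0, 0.5)
r1, r2 = best_fit_radius(roi_1, radii), best_fit_radius(roi_2, radii)
volume_change = (r2 / r1) ** 3 - 1.0       # fractional volume change between scans
```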

  13. 3-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon

    For the last decade, the field of ultrasonic vector flow imaging has received increasing attention, as the technique offers a variety of new applications for screening and diagnostics of cardiovascular pathologies. The main purpose of this PhD project was therefore to advance the field of 3-D...... studies and in vivo. Phantom measurements are compared with their corresponding reference values, whereas the in vivo measurement is validated against the current gold standard for non-invasive blood velocity estimates, based on magnetic resonance imaging (MRI). The study concludes that a high precision......, if this significant reduction in the element count can still provide precise and robust 3-D vector flow estimates in a plane. The study concludes that the RC array is capable of estimating precise 3-D vector flow both in a plane and in a volume, despite the low channel count. However, some inherent new challenges......

  14. Partial volume segmentation in 3D of lesions and tissues in magnetic resonance images

    Science.gov (United States)

    Johnston, Brian; Atkins, M. Stella; Booth, Kellogg S.

    1994-05-01

    An important first step in diagnosis and treatment planning using tomographic imaging is differentiating and quantifying diseased as well as healthy tissue. One of the difficulties encountered in solving this problem to date has been distinguishing the partial volume constituents of each voxel in the image volume. Most proposed solutions to this problem involve analysis of planar images, in sequence, in two dimensions only. We have extended a model-based method of image segmentation which applies the technique of iterated conditional modes in three dimensions. A minimum of user intervention is required to train the algorithm. Partial volume estimates for each voxel in the image are obtained, yielding fractional compositions of multiple tissue types for individual voxels. A multispectral approach is applied where spatially registered data sets are available. The algorithm is simple and has been parallelized using a dataflow programming environment to reduce the computational burden. The algorithm has been used to segment dual echo MRI data sets of multiple sclerosis patients using lesions, gray matter, white matter, and cerebrospinal fluid as the partial volume constituents. The results of applying the algorithm to these datasets are presented and compared to the manual lesion segmentation of the same data.
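
    The abstract's ICM-based estimator is more elaborate, but the core notion of a per-voxel tissue fraction can be illustrated with a simple two-class linear mixture, assuming the pure-tissue mean intensities have been estimated beforehand (e.g. from training regions); everything here is an illustrative sketch rather than the published algorithm.

```python
import numpy as np

def two_class_pv_fractions(volume, mean_a, mean_b):
    """Per-voxel partial-volume fraction of class A under a linear mixture model.

    Each voxel intensity is modelled as f*mean_a + (1 - f)*mean_b plus noise,
    where mean_a and mean_b are the pure-tissue means (e.g. white matter and CSF).
    """
    f = (volume.astype(float) - mean_b) / (mean_a - mean_b)
    return np.clip(f, 0.0, 1.0)

mri = np.random.normal(loc=80.0, scale=5.0, size=(32, 32, 32))   # stand-in MR volume
frac_a = two_class_pv_fractions(mri, mean_a=100.0, mean_b=30.0)
frac_b = 1.0 - frac_a                                            # complementary fraction
```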

  15. Dixon imaging-based partial volume correction improves quantification of choline detected by breast 3D-MRSI

    Energy Technology Data Exchange (ETDEWEB)

    Minarikova, Lenka; Gruber, Stephan; Bogner, Wolfgang; Trattnig, Siegfried; Chmelik, Marek [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, MR Center of Excellence, Vienna (Austria); Pinker-Domenig, Katja; Baltzer, Pascal A.T.; Helbich, Thomas H. [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, Division of Molecular and Gender Imaging, Vienna (Austria)

    2014-09-14

    Our aim was to develop a partial volume (PV) correction method of choline (Cho) signals detected by breast 3D-magnetic resonance spectroscopic imaging (3D-MRSI), using information from water/fat-Dixon MRI. Following institutional review board approval, five breast cancer patients were measured at 3 T. 3D-MRSI (1 cm³ resolution, duration ≈11 min) and Dixon MRI (1 mm³, ≈2 min) were measured in vivo and in phantoms. Glandular/lesion tissue was segmented from water/fat-Dixon MRI and transformed to match the resolution of 3D-MRSI. The resulting PV values were used to correct Cho signals. Our method was validated on a two-compartment phantom (choline/water and oil). PV values were correlated with the spectroscopic water signal. Cho signal variability, caused by partial water/fat content, was tested in 3D-MRSI voxels located in/near malignant lesions. Phantom measurements showed good correlation (r = 0.99) with quantified 3D-MRSI water signals, and better homogeneity after correction. The dependence of the quantified Cho signal on the water/fat voxel composition was significantly (p < 0.05) reduced using Dixon MRI-based PV correction, compared to the original uncorrected data (1.60-fold to 3.12-fold) in patients. The proposed method allows quantification of the Cho signal in glandular/lesion tissue independent of water/fat composition in breast 3D-MRSI. This can improve the reproducibility of breast 3D-MRSI, particularly important for therapy monitoring. (orig.)
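
    A minimal sketch of the correction step described above, assuming the glandular-tissue mask from the Dixon images has already been segmented and that the Dixon and MRSI grids are aligned with an integer downsampling factor per axis; the function names and the minimum-fraction cut-off are illustrative assumptions.

```python
import numpy as np

def partial_volume_map(glandular_mask_highres, block):
    """Fraction of glandular tissue inside each MRSI voxel.

    `glandular_mask_highres` is a binary mask on the Dixon grid; `block` is the
    per-axis downsampling factor between the Dixon and MRSI grids (e.g. (10, 10, 10)
    for 1 mm3 Dixon voxels and 1 cm3 MRSI voxels).
    """
    z, y, x = [s // b * b for s, b in zip(glandular_mask_highres.shape, block)]
    m = glandular_mask_highres[:z, :y, :x].reshape(
        z // block[0], block[0], y // block[1], block[1], x // block[2], block[2])
    return m.mean(axis=(1, 3, 5))

def pv_corrected_cho(cho, pv, min_fraction=0.1):
    """Divide the quantified Cho signal by the tissue fraction in each MRSI voxel."""
    corrected = np.full(cho.shape, np.nan)
    ok = pv >= min_fraction                # avoid amplifying nearly empty voxels
    corrected[ok] = cho[ok] / pv[ok]
    return corrected
```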

  16. Register cardiac fiber orientations from 3D DTI volume to 2D ultrasound image of rat hearts

    Science.gov (United States)

    Qin, Xulei; Wang, Silun; Shen, Ming; Zhang, Xiaodong; Lerakis, Stamatios; Wagner, Mary B.; Fei, Baowei

    2015-03-01

    Two-dimensional (2D) ultrasound or echocardiography is one of the most widely used examinations for the diagnosis of cardiac diseases. However, it only supplies the geometric and structural information of the myocardium. In order to supply more detailed microstructure information of the myocardium, this paper proposes a registration method to map cardiac fiber orientations from a three-dimensional (3D) magnetic resonance diffusion tensor imaging (MR-DTI) volume to the 2D ultrasound image. It utilizes a 2D/3D intensity-based registration procedure including rigid, log-demons, and affine transformations to search for the most similar slice from the template volume. After registration, the cardiac fiber orientations are mapped to the 2D ultrasound image via fiber relocations and reorientations. This method was validated on six images of rat hearts ex vivo. The evaluation results indicated that the final Dice similarity coefficient (DSC) reached more than 90% after geometric registrations, and the inclination angle errors (IAE) between the mapped fiber orientations and the gold standards were less than 15 degrees. This method may provide a practical tool for cardiologists to examine cardiac fiber orientations on ultrasound images and has the potential to supply additional information for the diagnosis of cardiac diseases.

  17. MRI data driven partial volume effects correction in PET imaging using 3D local multi-resolution analysis

    Energy Technology Data Exchange (ETDEWEB)

    Le Pogam, Adrien, E-mail: adrien.lepogam@univ-brest.fr [INSERM UMR 1101, LaTIM, Brest (France); Lamare, Frederic [Academic Nuclear Medicine Department, CHU Pellegrin, Bordeaux (France); Hatt, Mathieu [INSERM UMR 1101, LaTIM, Brest (France); Fernandez, Philippe [Academic Nuclear Medicine Department, CHU Pellegrin, Bordeaux (France); Le Rest, Catherine Cheze [INSERM UMR 1101, LaTIM, Brest (France); Academic Nuclear Medicine Department, CHU Poitiers, Poitiers (France); Visvikis, Dimitris [INSERM UMR 1101, LaTIM, Brest (France)

    2013-02-21

    PET partial volume effects (PVE), resulting from the limited resolution of PET scanners, are still a quantitative issue that PET/MRI scanners do not solve by themselves. A recently proposed voxel-based, locally adaptive 3D multi-resolution PVE correction based on the mutual analysis of wavelet decompositions was applied to 12 clinical ¹⁸F-FLT PET/T1 MRI images of glial tumors, and compared to a PET-only voxel-wise iterative deconvolution approach. Quantitative and qualitative results demonstrated the interest of exploiting PET/MRI information, with higher uptake increases (19±8% vs. 11±7%, p=0.02), as well as more convincing visual restoration of details within tumors with respect to deconvolution of the PET uptake only. Further studies are now required to demonstrate the accuracy of this restoration with histopathological validation of the uptake in tumors.
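
    The PET-only baseline mentioned above is a voxel-wise iterative deconvolution; a minimal Van Cittert-style sketch with an isotropic Gaussian point-spread function is shown below for illustration. The PSF model, step size and iteration count are assumptions, and the paper's wavelet-based PET/MRI method is not reproduced here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def van_cittert_pvc(pet, psf_fwhm_voxels, n_iter=10, alpha=1.0):
    """Voxel-wise iterative (Van Cittert) deconvolution for PET partial volume effects."""
    sigma = psf_fwhm_voxels / 2.355        # FWHM -> Gaussian sigma
    estimate = pet.astype(float).copy()
    for _ in range(n_iter):
        reblurred = gaussian_filter(estimate, sigma)
        estimate = estimate + alpha * (pet - reblurred)   # add back the residual
        estimate = np.clip(estimate, 0.0, None)           # keep activity non-negative
    return estimate
```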

  18. 3D vector flow imaging

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes

    The main purpose of this PhD project is to develop an ultrasonic method for 3D vector flow imaging. The motivation is to advance the field of velocity estimation in ultrasound, which plays an important role in the clinic. The velocity of blood has components in all three spatial dimensions, yet...... conventional methods can estimate only the axial component. Several approaches for 3D vector velocity estimation have been suggested, but none of these methods have so far produced convincing in vivo results nor have they been adopted by commercial manufacturers. The basis for this project is the Transverse...... on the TO fields are suggested. They can be used to optimize the TO method. In the third part, a TO method for 3D vector velocity estimation is proposed. It employs a 2D phased array transducer and decouples the velocity estimation into three velocity components, which are estimated simultaneously based on 5...

  19. Recommendations from gynaecological (GYN) GEC ESTRO working group (II): concepts and terms in 3D image-based treatment planning in cervix cancer brachytherapy-3D dose volume parameters and aspects of 3D image-based anatomy, radiation physics, radiobiology.

    Science.gov (United States)

    Pötter, Richard; Haie-Meder, Christine; Van Limbergen, Erik; Barillot, Isabelle; De Brabandere, Marisol; Dimopoulos, Johannes; Dumas, Isabelle; Erickson, Beth; Lang, Stefan; Nulens, An; Petrow, Peter; Rownd, Jason; Kirisits, Christian

    2006-01-01

    The second part of the GYN GEC ESTRO working group recommendations is focused on 3D dose-volume parameters for brachytherapy of cervical carcinoma. Methods and parameters have been developed and validated from dosimetric, imaging and clinical experience from different institutions (University of Vienna, IGR Paris, University of Leuven). Cumulative dose volume histograms (DVH) are recommended for evaluation of the complex dose heterogeneity. DVH parameters for GTV, HR CTV and IR CTV are the minimum dose delivered to 90 and 100% of the respective volume: D90, D100. The volume which is enclosed by 150 or 200% of the prescribed dose (V150, V200) is recommended for overall assessment of high dose volumes. V100 is recommended for quality assessment only within a given treatment schedule. For Organs at Risk (OAR) the minimum dose in the most irradiated tissue volume is recommended for reporting: 0.1, 1, and 2 cm3; optionally 5 and 10 cm3. Underlying assumptions are: full dose of external beam therapy in the volume of interest, identical location during fractionated brachytherapy, contiguous volumes and contouring of organ walls for >2 cm3. Dose values are reported as absorbed dose and also taking into account different dose rates. The linear-quadratic radiobiological model (equivalent dose, EQD2) is applied for brachytherapy and is also used for calculating dose from external beam therapy. This formalism allows systematic assessment within one patient, one centre, and comparison between different centres with analysis of dose volume relations for GTV, CTV, and OAR. Recommendations for the transition period from traditional to 3D image-based cervix cancer brachytherapy are formulated. Supplementary data (available in the electronic version of this paper) deal with aspects of 3D imaging, radiation physics, radiation biology, dose at reference points and dimensions and volumes for the GTV and CTV (adding to [Haie-Meder C, Pötter R, Van Limbergen E et al. Recommendations from
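
    The EQD2 conversion referred to above follows directly from the linear-quadratic model; a small worked helper is shown below, with the fractionation scheme and alpha/beta value chosen only as an example.

```python
def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equivalent dose in 2 Gy fractions: EQD2 = D * (d + a/b) / (2 + a/b)."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# e.g. a brachytherapy D90 of 4 x 7 Gy to the HR CTV, tumour alpha/beta = 10 Gy
print(eqd2(28.0, 7.0, 10.0))   # ~39.7 Gy EQD2
```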

  20. 3D Segmentation with an application of level set-method using MRI volumes for image guided surgery.

    Science.gov (United States)

    Bosnjak, A; Montilla, G; Villegas, R; Jara, I

    2007-01-01

    This paper proposes an innovation in the application of image guided surgery based on a comparative study of three different methods of segmentation. This segmentation is faster than manual segmentation of the images, with the advantage that it allows the same patient to be used as the anatomical reference, which is more precise than a generic atlas. The new methodology for 3D information extraction is based on a processing chain structured in the following modules: 1) 3D Filtering: the purpose is to preserve the contours of the structures and to smooth the homogeneous areas; several filters were tested and finally an anisotropic diffusion filter was used. 2) 3D Segmentation: this module compares three different methods: a region growing algorithm, hand-assisted cubic splines, and the Level Set method. It then proposes a Level Set approach based on front propagation that allows reconstruction of the internal walls of the anatomical structures of the brain. 3) 3D Visualization: the new contribution of this work consists of the visualization of the segmented model and its use in pre-surgical planning.
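
    As a rough illustration of the front-propagation idea in module 2, the sketch below grows a small spherical seed inside a 3D MRI volume using scikit-image's morphological geodesic active contour, which serves here as a stand-in for the level-set formulation described in the paper; the seed position, smoothing and iteration count are assumptions.

```python
import numpy as np
from skimage.segmentation import (morphological_geodesic_active_contour,
                                  inverse_gaussian_gradient)

def front_propagation_segmentation(volume, seed_center, seed_radius=5, n_iter=150):
    """Grow a surface from a seed until it reaches strong image gradients."""
    gimage = inverse_gaussian_gradient(volume.astype(float), sigma=2.0)
    init = np.zeros(volume.shape, dtype=np.int8)
    z, y, x = np.indices(volume.shape)
    cz, cy, cx = seed_center
    init[(z - cz)**2 + (y - cy)**2 + (x - cx)**2 <= seed_radius**2] = 1
    return morphological_geodesic_active_contour(gimage, n_iter,
                                                 init_level_set=init, balloon=1)

mri = np.random.rand(64, 64, 64)                      # stand-in for a filtered MRI volume
mask = front_propagation_segmentation(mri, seed_center=(32, 32, 32))
```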

  1. Anisotropic 3D texture synthesis with application to volume rendering

    DEFF Research Database (Denmark)

    Laursen, Lasse Farnung; Ersbøll, Bjarne Kjær; Bærentzen, Jakob Andreas

    2011-01-01

    images using a 12.1 megapixel camera. Next, we extend the volume rendering pipeline by creating a transfer function which yields not only color and opacity from the input intensity, but also texture coordinates for our synthesized 3D texture. Thus, we add texture to the volume rendered images....... This method is applied to a high quality visualization of a pig carcass, where samples of meat, bone, and fat have been used to produce the anisotropic 3D textures....

  2. Standard Splenic Volume Estimation in North Indian Adult Population: Using 3D Reconstruction of Abdominal CT Scan Images

    Directory of Open Access Journals (Sweden)

    Adil Asghar

    2011-01-01

    Full Text Available A prospective study was carried out to establish normative data for splenic dimensions in the North Indian population and their correlation with physical standards, based on abdominal CT of 21 patients aged between 20 and 70 years having no splenic disorders. Splenic volume was measured by two methods: the volume and surface rendering technique of the Able 3D-Doctor software and the prolate ellipsoid formula. Volumes measured by both techniques were correlated with their physical standards. Mean splenic volume was 161.57±90.2 cm3, with a range of 45.7–271.46 cm3. The volume of the spleen had a linear correlation with body height (r=0.512, P<.05). Splenic volume (cm3) = 7 × height (cm) − 961 can be used to generate the normal standard volume of the spleen as a function of body height in the North Indian population (with 95% confidence interval). This formula can be used to objectively measure the size of the spleen in adults who have clinically suspected splenomegaly.
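
    The height-based regression reported above can be applied directly; a one-line worked example follows, using the study's formula (valid for North Indian adults).

```python
def standard_splenic_volume_cm3(height_cm):
    """Splenic volume (cm3) = 7 x height (cm) - 961, per the regression above."""
    return 7.0 * height_cm - 961.0

print(standard_splenic_volume_cm3(170.0))   # 229 cm3 expected for a 170 cm adult
```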

  3. High-quality 3D correction of ring and radiant artifacts in flat panel detector-based cone beam volume CT imaging.

    Science.gov (United States)

    Anas, Emran Mohammad Abu; Kim, Jae Gon; Lee, Soo Yeol; Hasan, Md Kamrul

    2011-10-07

    The use of an x-ray flat panel detector is increasingly becoming popular in 3D cone beam volume CT machines. Due to the deficient semiconductor array manufacturing process, the cone beam projection data are often corrupted by different types of abnormalities, which cause severe ring and radiant artifacts in a cone beam reconstruction image, and as a result, the diagnostic image quality is degraded. In this paper, a novel technique is presented for the correction of error in the 2D cone beam projections due to abnormalities often observed in 2D x-ray flat panel detectors. Template images are derived from the responses of the detector pixels using their statistical properties and then an effective non-causal derivative-based detection algorithm in 2D space is presented for the detection of defective and mis-calibrated detector elements separately. An image inpainting-based 3D correction scheme is proposed for the estimation of responses of defective detector elements, and the responses of the mis-calibrated detector elements are corrected using the normalization technique. For real-time implementation, a simplification of the proposed off-line method is also suggested. Finally, the proposed algorithms are tested using different real cone beam volume CT images and the experimental results demonstrate that the proposed methods can effectively remove ring and radiant artifacts from cone beam volume CT images compared to other reported techniques in the literature.

  4. Validation of MRI-based 3D digital atlas registration with histological and autoradiographic volumes: an anatomofunctional transgenic mouse brain imaging study.

    Science.gov (United States)

    Lebenberg, J; Hérard, A-S; Dubois, A; Dauguet, J; Frouin, V; Dhenain, M; Hantraye, P; Delzescaux, T

    2010-07-01

    Murine models are commonly used in neuroscience to improve our knowledge of disease processes and to test drug effects. To accurately study neuroanatomy and brain function in small animals, histological staining and ex vivo autoradiography remain the gold standards to date. These analyses are classically performed by manually tracing regions of interest, which is time-consuming. For this reason, only a few 2D tissue sections are usually processed, resulting in a loss of information. We therefore proposed to match a 3D digital atlas with previously 3D-reconstructed post mortem data to automatically evaluate morphology and function in mouse brain structures. We used a freely available MRI-based 3D digital atlas derived from C57Bl/6J mouse brain scans (9.4T). The histological and autoradiographic volumes used were obtained from a preliminary study in APP(SL)/PS1(M146L) transgenic mice, models of Alzheimer's disease, and their control littermates (PS1(M146L)). We first deformed the original 3D MR images to match our experimental volumes. We then applied deformation parameters to warp the 3D digital atlas to match the data to be studied. The reliability of our method was qualitatively and quantitatively assessed by comparing atlas-based and manual segmentations in 3D. Our approach yields faster and more robust results than standard methods in the investigation of post mortem mouse data sets at the level of brain structures. It also constitutes an original method for the validation of an MRI-based atlas using histology and autoradiography as anatomical and functional references, respectively.

  5. Visualizing Vertebrate Embryos with Episcopic 3D Imaging Techniques

    Directory of Open Access Journals (Sweden)

    Stefan H. Geyer

    2009-01-01

    Full Text Available The creation of highly detailed, three-dimensional (3D computer models is essential in order to understand the evolution and development of vertebrate embryos, and the pathogenesis of hereditary diseases. A still-increasing number of methods allow for generating digital volume data sets as the basis of virtual 3D computer models. This work aims to provide a brief overview about modern volume data–generation techniques, focusing on episcopic 3D imaging methods. The technical principles, advantages, and problems of episcopic 3D imaging are described. The strengths and weaknesses in its ability to visualize embryo anatomy and labeled gene product patterns, specifically, are discussed.

  6. IGES Interface for Medical 3-D Volume Data.

    Science.gov (United States)

    Chen, Gong; Yi, Hong; Ni, Zhonghua

    2005-01-01

    Although there are many medical image processing and virtual surgery systems that provide rather consummate 3D visualization and data manipulation techniques, few of them can export the volume data for engineering analysis. This work presents an interface implementing IGES (Initial Graphics Exchange Specification). Volume data such as bones, skin and other tissues can be exported as IGES files to be used directly for engineering analysis.

  7. 3D Backscatter Imaging System

    Science.gov (United States)

    Turner, D. Clark (Inventor); Whitaker, Ross (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises both a radiation source for irradiating an object that is rotationally movable about the object, and a detector for detecting backscattered radiation from the object that can be disposed on substantially the same side of the object as the source and which can be rotationally movable about the object. The detector can be separated into multiple detector segments with each segment having a single line of sight projection through the object and so detects radiation along that line of sight. Thus, each detector segment can isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three dimensional reconstruction of the object. Other embodiments are described.

  8. Total-liver-volume perfusion CT using 3-D image fusion to improve detection and characterization of liver metastases

    NARCIS (Netherlands)

    Meijerink, Martijn; Waesberghe, van Jan; Weide, van der Lineke; Tol, van den Petrousjka; Meijer, Sybren; Kuijk, van Cornelis

    2008-01-01

    The purpose of this study was to evaluate the feasibility of a total-liver-volume perfusion CT (CTP) technique for the detection and characterization of liver metastases. Twenty patients underwent helical CT of the total liver volume before and 11 times after intravenous contrast-material injection.

  9. 3D Reconstruction of NMR Images

    Directory of Open Access Journals (Sweden)

    Peter Izak

    2007-01-01

    Full Text Available This paper presents an experiment in 3D reconstruction of NMR images scanned with a magnetic resonance device. Methods that can be used for the 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, a method built on the Vision Assistant program, which is a part of LabVIEW, was chosen.
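
    The record above is based on the marching cubes algorithm; a minimal equivalent in Python (instead of LabVIEW's Vision Assistant) is sketched below on a synthetic volume, using scikit-image's implementation. The iso-level and the synthetic data are illustrative assumptions.

```python
import numpy as np
from skimage import measure

# synthetic stand-in for an NMR volume: a bright ellipsoid plus a little noise
z, y, x = np.indices((64, 64, 64))
volume = np.exp(-(((z - 32) / 20.0)**2 + ((y - 32) / 14.0)**2 + ((x - 32) / 10.0)**2))
volume += 0.05 * np.random.rand(*volume.shape)

# marching cubes extracts the iso-surface as a triangle mesh (vertices + faces)
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(verts.shape, faces.shape)     # (n_vertices, 3), (n_triangles, 3)
```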

  10. 3D imaging in forensic odontology.

    Science.gov (United States)

    Evans, Sam; Jones, Carl; Plassmann, Peter

    2010-06-16

    This paper describes the investigation of a new 3D capture method for acquiring and subsequent forensic analysis of bite mark injuries on human skin. When documenting bite marks with standard 2D cameras, errors in photographic technique can occur if best practice is not followed. Subsequent forensic analysis of the mark is problematic when a 3D structure is recorded in a 2D space. Although strict guidelines (BAFO) exist, these are time-consuming to follow and, due to their complexity, may produce errors. A 3D image capture and processing system might avoid the problems resulting from the 2D reduction process, simplifying the guidelines and reducing errors. Proposed solution: a series of experiments is described in this paper to demonstrate that a 3D system has the potential to produce suitable results. The experiments tested the precision and accuracy of the traditional 2D and 3D methods. A 3D image capture device minimises the amount of angular distortion, therefore such a system has the potential to create more robust forensic evidence for use in courts. A first set of experiments tested and demonstrated which method of forensic analysis creates the least amount of intra-operator error. A second set tested and demonstrated which method of image capture creates the least amount of inter-operator error and visual distortion. In a third set, the effects of angular distortion on 2D and 3D methods of image capture were evaluated.

  11. Multiplane 3D superresolution optical fluctuation imaging

    CERN Document Server

    Geissbuehler, Stefan; Godinat, Aurélien; Bocchio, Noelia L; Dubikovskaya, Elena A; Lasser, Theo; Leutenegger, Marcel

    2013-01-01

    By switching fluorophores on and off in either a deterministic or a stochastic manner, superresolution microscopy has enabled the imaging of biological structures at resolutions well beyond the diffraction limit. Superresolution optical fluctuation imaging (SOFI) provides an elegant way of overcoming the diffraction limit in all three spatial dimensions by computing higher-order cumulants of image sequences of blinking fluorophores acquired with a conventional widefield microscope. So far, three-dimensional (3D) SOFI has only been demonstrated by sequential imaging of multiple depth positions. Here we introduce a versatile imaging scheme which allows for the simultaneous acquisition of multiple focal planes. Using 3D cross-cumulants, we show that the depth sampling can be increased. Consequently, the simultaneous acquisition of multiple focal planes reduces the acquisition time and hence the photo-bleaching of fluorescent markers. We demonstrate multiplane 3D SOFI by imaging the mitochondria network in fixed ...

  12. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
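
    Of the two approaches named above, the stereo-vision route can be sketched with standard block matching; the example below assumes a pair of rectified images from two calibrated CCD cameras, and the file names, focal length and baseline are placeholder values (depth-from-focus is not shown).

```python
import cv2
import numpy as np

# rectified left/right images from two calibrated CCD cameras (placeholder paths)
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# block-matching disparity map; numDisparities must be a multiple of 16
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0   # fixed-point output

# depth from the pinhole relation Z = f * B / d (assumed calibration values)
focal_px, baseline_m = 700.0, 0.12
valid = disparity > 0
depth = np.zeros_like(disparity)
depth[valid] = focal_px * baseline_m / disparity[valid]
```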

  13. Effective incorporation of spatial information in a mutual information based 3D-2D registration of a CT volume to X-ray images.

    Science.gov (United States)

    Zheng, Guoyan

    2008-01-01

    This paper addresses the problem of estimating the 3D rigid pose of a CT volume of an object from its 2D X-ray projections. We use maximization of mutual information, an accurate similarity measure for multi-modal and mono-modal image registration tasks. However, it is known that the standard mutual information measure only takes intensity values into account without considering spatial information and its robustness is questionable. In this paper, instead of directly maximizing mutual information, we propose to use a variational approximation derived from the Kullback-Leibler bound. Spatial information is then incorporated into this variational approximation using a Markov random field model. The newly derived similarity measure has a least-squares form and can be effectively minimized by a multi-resolution Levenberg-Marquardt optimizer. Experimental results are presented on X-ray and CT datasets of a plastic phantom and a cadaveric spine segment.
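
    For reference, the standard intensity-only mutual information that the paper improves upon can be computed from a joint histogram of the simulated projection (DRR) and the X-ray image; the bin count below is an arbitrary choice and the snippet is only a sketch of the similarity measure, not of the registration pipeline.

```python
import numpy as np

def mutual_information(drr, xray, bins=32):
    """Intensity-only mutual information between two images of equal size."""
    joint, _, _ = np.histogram2d(drr.ravel(), xray.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                           # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```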

  14. Fully Automatic 3D Reconstruction of Histological Images

    CERN Document Server

    Bagci, Ulas

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.

  15. 3D integral imaging with optical processing

    Science.gov (United States)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

    Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical procedures. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our Group has proposed annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  16. Structured light field 3D imaging.

    Science.gov (United States)

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-05

    In this paper, we propose a method based on light field imaging under structured illumination to deal with high dynamic range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; a structured light field is then detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, via which the scene depth can be estimated from different directions. The multidirectional depth estimation can achieve high dynamic range 3D imaging effectively. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method for performing high-quality 3D imaging of highly and lowly reflective surfaces.

  17. Heat Equation to 3D Image Segmentation

    Directory of Open Access Journals (Sweden)

    Nikolay Sirakov

    2006-04-01

    Full Text Available This paper presents a new approach capable of 3D image segmentation and object surface reconstruction. The main advantages of the method are: a large capture range; quick segmentation of a 3D scene/image into regions; and reconstruction of multiple 3D objects. The method uses a centripetal force and a penalty function to segment the entire 3D scene/image into regions containing a single 3D object. Each region is inscribed in a convex, smooth closed surface, which defines a centripetal force. The surface is then evolved by the geometric heat differential equation in the direction of the force. The penalty function is defined to stop the evolution of those surface patches whose normal vectors encounter the object's surface. On the basis of the theoretical model, a forward difference algorithm was developed and coded in Mathematica. The stability convergence condition, truncation error and computational complexity of the algorithm are determined. The results obtained, along with the advantages and disadvantages of the method, are discussed at the end of this paper.

  18. Multimodal evaluation of 2-D and 3-D ultrasound, computed tomography and magnetic resonance imaging in measurements of the thyroid volume using universally applicable cross-sectional imaging software: a phantom study.

    Science.gov (United States)

    Freesmeyer, Martin; Wiegand, Steffen; Schierz, Jan-Henning; Winkens, Thomas; Licht, Katharina

    2014-07-01

    A precise estimate of thyroid volume is necessary for making adequate therapeutic decisions and planning, as well as for monitoring therapy response. The goal of this study was to compare the precision of different volumetry methods. Thyroid-shaped phantoms were subjected to volumetry via 2-D and 3-D ultrasonography (US), computed tomography (CT) and magnetic resonance imaging (MRI). The 3-D US scans were performed using sensor navigation and mechanical sweeping methods. Volumetry calculation ensued with the conventional ellipsoid model and the manual tracing method. The study confirmed the superiority of manual tracing with CT and MRI volumetry of the thyroid, but extended this knowledge also to the superiority of the 3-D US method, regardless of whether sensor navigation or mechanical sweeping is used. A novel aspect was successful use of the same universally applicable cross-imaging software for all modalities.
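
    The conventional ellipsoid model mentioned above reduces each lobe to three orthogonal diameters; a small worked helper follows, using the pi/6 ellipsoid factor (the measurements are made-up example values).

```python
import math

def ellipsoid_lobe_volume_ml(length_cm, width_cm, depth_cm):
    """Conventional ellipsoid model: V = pi/6 * L * W * D for one thyroid lobe."""
    return math.pi / 6.0 * length_cm * width_cm * depth_cm

# total thyroid volume as the sum of both lobes
total_ml = ellipsoid_lobe_volume_ml(5.0, 2.0, 2.0) + ellipsoid_lobe_volume_ml(4.8, 1.9, 1.8)
print(round(total_ml, 1))   # ~19.1 ml
```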

  19. Effective incorporating spatial information in a mutual information based 3D-2D registration of a CT volume to X-ray images.

    Science.gov (United States)

    Zheng, Guoyan

    2010-10-01

    This paper addresses the problem of estimating the 3D rigid poses of a CT volume of an object from its 2D X-ray projection(s). We use maximization of mutual information, an accurate similarity measure for multi-modal and mono-modal image registration tasks. However, it is known that the standard mutual information measures only take intensity values into account without considering spatial information and their robustness is questionable. In this paper, instead of directly maximizing mutual information, we propose to use a variational approximation derived from the Kullback-Leibler bound. Spatial information is then incorporated into this variational approximation using a Markov random field model. The newly derived similarity measure has a least-squares form and can be effectively minimized by a multi-resolution Levenberg-Marquardt optimizer. Experiments were conducted on datasets from two applications: (a) intra-operative patient pose estimation from a limited number (e.g. 2) of calibrated fluoroscopic images, and (b) post-operative cup orientation estimation from a single standard X-ray radiograph with/without gonadal shielding. The experiment on intra-operative patient pose estimation showed a mean target registration accuracy of 0.8mm and a capture range of 11.5mm, while the experiment on estimating the post-operative cup orientation from a single X-ray radiograph showed a mean accuracy below 2 degrees for both anteversion and inclination. More importantly, results from both experiments demonstrated that the newly derived similarity measures were robust to occlusions in the X-ray image(s).

  20. Micromachined Ultrasonic Transducers for 3-D Imaging

    DEFF Research Database (Denmark)

    Christiansen, Thomas Lehrmann

    Real-time ultrasound imaging is a widely used technique in medical diagnostics. Recently, ultrasound systems offering real-time imaging in 3-D have emerged. However, the high complexity of the transducer probes and the considerable increase in data to be processed compared to conventional 2-D...... ultrasound imaging results in expensive systems, which limits the more wide-spread use and clinical development of volumetric ultrasound. The main goal of this thesis is to demonstrate new transducer technologies that can achieve real-time volumetric ultrasound imaging without the complexity and cost...... of state-of-the-art 3-D ultrasound systems. The focus is on row-column addressed transducer arrays. This previously sparsely investigated addressing scheme offers a highly reduced number of transducer elements, resulting in reduced transducer manufacturing costs and data processing. To produce...

  1. 3D Membrane Imaging and Porosity Visualization

    KAUST Repository

    Sundaramoorthi, Ganesh

    2016-03-03

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore size in different layers orthogonal and parallel to the membrane surface. The 3D-reconstruction enabled additionally the visualization of pore interconnectivity in different parts of the membrane. The method was demonstrated for a block copolymer porous membrane and can be extended to other membranes with application in ultrafiltration, supports for forward osmosis, etc, offering a complete view of the transport paths in the membrane.

  2. SOLIDFELIX: a transportable 3D static volume display

    Science.gov (United States)

    Langhans, Knut; Kreft, Alexander; Wörden, Henrik Tom

    2009-02-01

    Flat 2D screens cannot display complex 3D structures without using different slices of the 3D model. Volumetric displays like the "FELIX 3D-Displays" can solve the problem. They provide space-filling images and are characterized by "multi-viewer" and "all-round view" capabilities without requiring cumbersome goggles. In the past, many scientists have tried to develop similar 3D displays. Our paper includes an overview from 1912 up to today. During several years of investigations on swept volume displays within the "FELIX 3D-Projekt" we learned about some significant disadvantages of rotating screens, for example hidden zones. For this reason, the FELIX team also started investigations in the area of static volume displays. Within three years of research on our 3D static volume display at a normal high school in Germany, we were able to achieve considerable results despite the minor funding resources of this non-commercial group. The core element of our setup is the display volume, which consists of a cubic transparent material (crystal, glass, or polymers doped with special ions, mainly from the rare earth group or other fluorescent materials). We focused our investigations on one frequency, two step upconversion (OFTS-UC) and two frequency, two step upconversion (TFTS-UC) with IR lasers as the excitation source. Our main interest was to find both an appropriate material and an appropriate doping for the display volume. Early experiments were carried out with CaF2 and YLiF4 crystals doped with 0.5 mol% Er3+ ions, which were excited in order to create a volumetric pixel (voxel). In addition, such crystals are limited to a very small size, which is why we later investigated heavy metal fluoride glasses, which are easier to produce in large sizes. Currently we are using a ZBLAN glass belonging to this group, which makes it possible to increase both the display volume and the brightness of the images significantly. Although our display is currently

  3. A fully automatic, threshold-based segmentation method for the estimation of the Metabolic Tumor Volume from PET images: validation on 3D printed anthropomorphic oncological lesions

    Science.gov (United States)

    Gallivanone, F.; Interlenghi, M.; Canervari, C.; Castiglioni, I.

    2016-01-01

    18F-Fluorodeoxyglucose (18F-FDG) Positron Emission Tomography (PET) is a standard functional diagnostic technique for in vivo cancer imaging. Different quantitative parameters can be extracted from PET images and used as in vivo cancer biomarkers. Among PET biomarkers, Metabolic Tumor Volume (MTV) has gained an important role, in particular considering the development of patient-personalized radiotherapy treatment for non-homogeneous dose delivery. Different image processing methods have been developed to define MTV. The proposed PET segmentation strategies have mostly been validated in ideal conditions (e.g. in spherical objects with uniform radioactivity concentration), while the majority of cancer lesions do not fulfill these requirements. In this context, this work has a twofold objective: 1) to implement and optimize a fully automatic, threshold-based segmentation method for the estimation of MTV that is feasible in clinical practice, and 2) to develop a strategy to obtain anthropomorphic phantoms, including non-spherical and non-uniform objects, mimicking realistic oncological patient conditions. The developed PET segmentation algorithm combines an automatic threshold-based algorithm for the definition of MTV and a k-means clustering algorithm for the estimation of the background. The method is based on parameters that are always available in clinical studies and was calibrated using the NEMA IQ phantom. Validation of the method was performed both in ideal (e.g. in spherical objects with uniform radioactivity concentration) and non-ideal (e.g. in non-spherical objects with a non-uniform radioactivity concentration) conditions. The strategy to obtain a phantom with synthetic realistic lesions (e.g. with irregular shape and non-homogeneous uptake) consisted of the combined use of commercially available standard anthropomorphic phantoms and irregular molds generated using 3D printer technology and filled with a radioactive chromatic alginate. The proposed segmentation algorithm was feasible in a
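
    A minimal sketch of the segmentation idea described above, combining a simple two-cluster background estimate with an adaptive threshold inside a lesion region of interest; the threshold fraction, the clustering details and the NEMA phantom calibration used in the paper are not reproduced, so all parameters here are illustrative assumptions.

```python
import numpy as np

def background_mean(values, n_iter=50):
    """Mean of the lower cluster from a tiny two-class k-means on the ROI voxels."""
    lo, hi = float(values.min()), float(values.max())
    centres = np.array([lo + 0.25 * (hi - lo), lo + 0.75 * (hi - lo)])
    for _ in range(n_iter):
        assign = np.abs(values[:, None] - centres[None, :]).argmin(axis=1)
        for k in range(2):
            if np.any(assign == k):
                centres[k] = values[assign == k].mean()
    return float(centres.min())

def metabolic_tumor_volume(roi_suv, voxel_volume_ml, fraction=0.4):
    """MTV from an adaptive threshold: background + fraction * (SUVmax - background)."""
    bkg = background_mean(roi_suv.ravel())
    threshold = bkg + fraction * (float(roi_suv.max()) - bkg)
    mtv_ml = float((roi_suv > threshold).sum()) * voxel_volume_ml
    return mtv_ml, threshold
```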

  4. Integration of 3D scale-based pseudo-enhancement correction and partial volume image segmentation for improving electronic colon cleansing in CT colonography.

    Science.gov (United States)

    Zhang, Hao; Li, Lihong; Zhu, Hongbin; Han, Hao; Song, Bowen; Liang, Zhengrong

    2014-01-01

    Orally administered tagging agents are usually used in CT colonography (CTC) to differentiate residual bowel content from native colonic structures. However, the high-density contrast agents tend to introduce pseudo-enhancement (PE) effect on neighboring soft tissues and elevate their observed CT attenuation value toward that of the tagged materials (TMs), which may result in an excessive electronic colon cleansing (ECC) since the pseudo-enhanced soft tissues are incorrectly identified as TMs. To address this issue, we integrated a 3D scale-based PE correction into our previous ECC pipeline based on the maximum a posteriori expectation-maximization partial volume (PV) segmentation. The newly proposed ECC scheme takes into account both the PE and PV effects that commonly appear in CTC images. We evaluated the new scheme on 40 patient CTC scans, both qualitatively through display of segmentation results, and quantitatively through radiologists' blind scoring (human observer) and computer-aided detection (CAD) of colon polyps (computer observer). Performance of the presented algorithm has shown consistent improvements over our previous ECC pipeline, especially for the detection of small polyps submerged in the contrast agents. The CAD results of polyp detection showed that 4 more submerged polyps were detected for our new ECC scheme over the previous one.

  5. Estimation of regional myocardial mass at risk based on distal arterial lumen volume and length using 3D micro-CT images.

    Science.gov (United States)

    Le, Huy; Wong, Jerry T; Molloi, Sabee

    2008-09-01

    The determination of regional myocardial mass at risk distal to a coronary occlusion provides valuable prognostic information for a patient with coronary artery disease. The coronary arterial system follows a design rule which allows for the use of arterial branch length and lumen volume to estimate regional myocardial mass at risk. Image processing techniques, such as segmentation, skeletonization and arterial network tracking, are presented for extracting anatomical details of the coronary arterial system using micro-computed tomography (micro-CT). Moreover, a method of assigning tissue voxels to their corresponding arterial branches is presented to determine the dependent myocardial region. The proposed micro-CT technique was utilized to investigate the relationship between the sum of the distal coronary arterial branch lengths and volumes to the dependent regional myocardial mass using a polymer cast of a porcine heart. The correlations of the logarithm of the total distal arterial lengths (L) to the logarithm of the regional myocardial mass (M) for the left anterior descending (LAD), left circumflex (LCX) and right coronary (RCA) arteries were log(L)=0.73log(M)+0.09 (R=0.78), log(L)=0.82log(M)+0.05 (R=0.77) and log(L)=0.85log(M)+0.05 (R=0.87), respectively. The correlation of the logarithm of the total distal arterial lumen volumes (V) to the logarithm of the regional myocardial mass for the LAD, LCX and RCA were log(V)=0.93log(M)-1.65 (R=0.81), log(V)=1.02log(M)-1.79 (R=0.78) and log(V)=1.17log(M)-2.10 (R=0.82), respectively. These morphological relations did not change appreciably for diameter truncations of 600-1400 µm. The results indicate that the image processing procedures successfully extracted information from a large 3D dataset of the coronary arterial tree to provide prognostic indications in the form of arterial tree parameters and anatomical area at risk.
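
    As a worked example, the LAD relation reported above, log(V) = 0.93 log(M) - 1.65, can be inverted to predict the regional myocardial mass at risk M from a measured distal lumen volume V. The unit convention (V in mm^3, M in g) and the sample value are assumptions for illustration only.
```python
import math

def mass_at_risk_from_volume(v, slope=0.93, intercept=-1.65):
    """Invert log10(V) = slope*log10(M) + intercept  ->  M = 10**((log10(V) - intercept) / slope)."""
    return 10 ** ((math.log10(v) - intercept) / slope)

print(mass_at_risk_from_volume(50.0))   # predicted mass at risk for a measured V = 50
```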

  6. High resolution 3-D wavelength diversity imaging

    Science.gov (United States)

    Farhat, N. H.

    1981-09-01

    A physical optics, vector formulation of microwave imaging of perfectly conducting objects by wavelength and polarization diversity is presented. The results provide the theoretical basis for optimal data acquisition and three-dimensional tomographic image retrieval procedures. These include: (a) the selection of highly thinned (sparse) receiving array arrangements capable of collecting large amounts of information about remote scattering objects in a cost effective manner and (b) techniques for 3-D tomographic image reconstruction and display in which polarization diversity data is fully accounted for. Data acquisition employing a highly attractive AMTDR (Amplitude Modulated Target Derived Reference) technique is discussed and demonstrated by computer simulation. Equipment configuration for the implementation of the AMTDR technique is also given together with a measurement configuration for the implementation of wavelength diversity imaging in a roof experiment aimed at imaging a passing aircraft. Extension of the theory presented to 3-D tomographic imaging of passive noise emitting objects by spectrally selective far field cross-correlation measurements is also given. Finally several refinements made in our anechoic-chamber measurement system are shown to yield drastic improvement in performance and retrieved image quality.

  7. A 3D surface imaging system for assessing human obesity

    Science.gov (United States)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  8. High Frame Rate Synthetic Aperture 3D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Holbek, Simon; Stuart, Matthias Bo

    2016-01-01

    3-D blood flow quantification with high spatial and temporal resolution would strongly benefit clinical research on cardiovascular pathologies. Ultrasonic velocity techniques are known for their ability to measure blood flow with high precision at high spatial and temporal resolution. However, current volumetric ultrasonic flow methods are limited to one velocity component or restricted to a reduced field of view (FOV), e.g. fixed imaging planes, in exchange for higher temporal resolutions. To solve these problems, a previously proposed accurate 2-D high frame rate vector flow imaging (VFI) technique is extended to estimate the 3-D velocity components inside a volume at high temporal resolutions (

  9. Scaling relations between bone volume and bone structure as found using 3D µCT images of the trabecular bone taken from different skeletal sites

    Science.gov (United States)

    Raeth, Christoph; Müller, Dirk; Sidorenko, Irina; Monetti, Roberto; Eckstein, Felix; Matsuura, Maiko; Lochmüller, Eva-Maria; Zysset, Philippe K.; Bauer, Jan

    2010-03-01

    According to Wolff's law bone remodels in response to the mechanical stresses it experiences so as to produce a minimal-weight structure that is adapted to its applied stresses. Here, we investigate the relations between bone volume and structure for the trabecular bone using 3D μCT images taken from different skeletal sites in vitro, namely from the distal radii (96 specimens), thoracic (73 specimens) and lumbar vertebrae (78 specimens). We determine the local structure of the trabecular network by calculating isotropic and anisotropic scaling indices (α, αz). These measures have been proven to be able to discriminate rod- from sheet-like structures and to quantify the alignment of structures with respect to a preferential direction as given by the direction of the external force. Comparing global structure measures derived from the scaling indices (mean, standard deviation) with the bone mass (BV/TV) we find that all correlations obey very accurately power laws with scaling exponents of 0.14, 0.12, 0.15 (~), -0.2, -0.17, -0.17 (σ(αz)), 0.09, 0.05, 0.07 (~) and -0.20, -0.11, -0.13 (σ(αz)) for the distal radius, thoracic vertebrae and lumbar vertebrae, respectively. Thus, these relations turn out to be site-independent, albeit the mechanical stresses to which the bones of the forearm and the spine are exposed are quite different. The similar alignment might not be in agreement with a universal validity of Wolff's law. On the other hand, such universal power law relations may allow the development of additional diagnostic means to better assess healthy and osteoporotic bone.
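
    A power-law relation of this kind is usually checked by a linear fit in log-log space, log(measure) = k·log(BV/TV) + c. The sketch below recovers the exponent from synthetic data; the data and the 0.14 exponent used to generate them are illustrative assumptions, not the study's measurements.
```python
import numpy as np

rng = np.random.default_rng(0)
bvtv = rng.uniform(0.05, 0.35, 100)                              # bone volume fraction BV/TV
measure = 1.3 * bvtv**0.14 * np.exp(rng.normal(0, 0.01, 100))    # e.g. mean scaling index

k, c = np.polyfit(np.log(bvtv), np.log(measure), 1)              # log-log linear regression
print(f"fitted scaling exponent k = {k:.2f}")                    # should recover ~0.14
```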

  10. 3D MRI volume sizing of knee meniscus cartilage.

    Science.gov (United States)

    Stone, K R; Stoller, D W; Irving, S G; Elmquist, C; Gildengorin, G

    1994-12-01

    Meniscal replacement by allograft and meniscal regeneration through collagen meniscal scaffolds have been recently reported. To evaluate the effectiveness of a replaced or regrown meniscal cartilage, a method for measuring the size and function of the regenerated tissue in vivo is required. To solve this problem, we developed and evaluated a magnetic resonance imaging (MRI) technique to measure the volume of meniscal tissues. Twenty-one intact fresh cadaver knees were evaluated and scanned with MRI for meniscal volume sizing. The sizing sequence was repeated six times for each of 21 lateral and 12 medial menisci. The menisci were then excised and measured by water volume displacement. Each volume displacement measurement was repeated six times. The MRI technique employed to measure the volume of the menisci was shown to correspond to that of the standard measure of volume and was just as precise. However, the MRI technique consistently underestimated the actual volume. The average of the coefficient of variation for lateral volumes was 0.04 and 0.05 for the water and the MRI measurements, respectively. For medial measurements it was 0.04 and 0.06. The correlation for the lateral menisci was r = 0.45 (p = 0.04) and for the medial menisci it was r = 0.57 (p = 0.05). We conclude that 3D MRI is precise and repeatable but not accurate when used to measure meniscal volume in vivo and therefore may only be useful for evaluating changes in meniscal allografts and meniscal regeneration templates over time.

  11. 3-D Reconstruction From Satellite Images

    DEFF Research Database (Denmark)

    Denver, Troelz

    1999-01-01

    The aim of this project has been to implement a software system that is able to create a 3-D reconstruction from two or more 2-D photographic images taken from different positions. The height is determined from the disparity difference of the images. The general purpose of the system is mapping of planetary surfaces, but other purposes are considered as well. The system performance is measured with respect to precision and time consumption. The reconstruction process is divided into four major areas: acquisition, calibration, matching/reconstruction and presentation. Each of these areas is treated individually. A detailed treatment of various lens distortions is required in order to correct for these problems; this subject is included in the acquisition part. In the calibration part, the perspective distortion is removed from the images. Most attention has been paid to the matching problem...
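
    For a rectified stereo pair, the depth (and hence height) recovered from disparity follows the standard relation Z = f·B/d, with focal length f in pixels, baseline B and disparity d in pixels. The sketch below is a generic illustration of that step; the focal length, baseline and disparity values are placeholder assumptions, not parameters from this project.
```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Convert a disparity map to depth; zero disparity means no match."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    depth = np.full_like(disparity_px, np.inf)
    valid = disparity_px > 0
    depth[valid] = focal_px * baseline_m / disparity_px[valid]
    return depth

disp = np.array([[8.0, 10.0], [12.5, 0.0]])     # toy disparity map (px)
print(depth_from_disparity(disp, focal_px=3200.0, baseline_m=0.6))
```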

  12. Validation of Blood Volume Fraction Quantification with 3D Gradient Echo Dynamic Contrast-Enhanced Magnetic Resonance Imaging in Porcine Skeletal Muscle

    Science.gov (United States)

    Söhner, Anika; Maaß, Marc; Sauerwein, Wolfgang; Möllmann, Dorothe; Baba, Hideo Andreas; Kramer, Martin; Lüdemann, Lutz

    2017-01-01

    The purpose of this study was to assess the accuracy of fractional blood volume (vb) estimates in low-perfused and low-vascularized tissue using dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). The results of different MRI methods were compared with histology to evaluate the accuracy of these methods under clinical conditions. vb was estimated by DCE-MRI using a 3D gradient echo sequence with k-space undersampling in five muscle groups in the hind leg of 9 female pigs. Two gadolinium-based contrast agents (CA) were used: a rapidly extravasating, extracellular, gadolinium-based, low-molecular-weight contrast agent (LMCA, gadoterate meglumine) and an extracellular, gadolinium-based, albumin-binding, slowly extravasating blood pool contrast agent (BPCA, gadofosveset trisodium). LMCA data were evaluated using the extended Tofts model (ETM) and the two-compartment exchange model (2CXM). The images acquired with administration of the BPCA were used to evaluate the accuracy of vb estimation with a bolus deconvolution technique (BD) and a method we call equilibrium MRI (EqMRI). The latter calculates the ratio of the magnitude of the relaxation rate change in the tissue curve at an approximate equilibrium state to the height of the same area of the arterial input function (AIF). Immunohistochemical staining with isolectin was used to label endothelium. A light microscope was used to estimate the fractional vascular area by relating the vascular region to the total tissue region (immunohistochemical vessel staining, IHVS). In addition, the percentage fraction of vascular volume was determined by multiplying the microvascular density (MVD) with the average estimated capillary lumen cross-section, π(d/2)², where d = 8 μm is the assumed capillary diameter (microvascular density estimation, MVDE). Except for ETM values, highly significant correlations were found between most of the MRI methods investigated. In the cranial thigh, for example, the vb medians (interquartile range
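
    A small worked example of the MVDE estimate described above: the fractional blood volume is approximated as MVD · π(d/2)² with the stated assumption d = 8 μm. The MVD value used below is illustrative only.
```python
import math

def vb_from_mvd(mvd_per_mm2, capillary_diam_um=8.0):
    """Fractional vascular volume (percent) from microvascular density and assumed lumen area."""
    radius_mm = (capillary_diam_um / 1000.0) / 2.0
    area_fraction = mvd_per_mm2 * math.pi * radius_mm ** 2   # vessels per mm^2 * lumen cross-section (mm^2)
    return 100.0 * area_fraction

print(f"vb = {vb_from_mvd(400.0):.2f} %")   # e.g. 400 capillaries per mm^2 -> ~2 %
```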

  13. Photogrammetric 3D reconstruction using mobile imaging

    Science.gov (United States)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android Application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter directly be calibrated on that device, using standard calibration algorithms of photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server to run AndroidSfM for the pose estimation of all photos by Structure-from-Motion and, thereafter, uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.
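
    This is not the authors' AndroidSfM code, but a minimal two-view pose-estimation sketch with OpenCV, which is the core step of any Structure-from-Motion pipeline. The file names and intrinsic matrix K are placeholder assumptions.
```python
import cv2
import numpy as np

K = np.array([[3000.0, 0, 2000.0],       # assumed camera intrinsics (pixels)
              [0, 3000.0, 1500.0],
              [0, 0, 1.0]])

img1 = cv2.imread("photo_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# ratio-test matching of SIFT descriptors
matcher = cv2.BFMatcher()
matches = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# essential matrix and relative pose (rotation R, translation direction t)
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("relative rotation:\n", R, "\ntranslation direction:", t.ravel())
```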

  14. 3D neuromelanin-sensitive magnetic resonance imaging with semi-automated volume measurement of the substantia nigra pars compacta for diagnosis of Parkinson's disease

    Energy Technology Data Exchange (ETDEWEB)

    Ogisu, Kimihiro; Shirato, Hiroki [Hokkaido University Graduate School of Medicine, Department of Radiology, Hokkaido (Japan); Kudo, Kohsuke; Sasaki, Makoto [Iwate Medical University, Division of Ultrahigh Field MRI, Iwate (Japan); Sakushima, Ken; Yabe, Ichiro; Sasaki, Hidenao [Hokkaido University Hospital, Department of Neurology, Hokkaido (Japan); Terae, Satoshi; Nakanishi, Mitsuhiro [Hokkaido University Hospital, Department of Radiology, Hokkaido (Japan)

    2013-06-15

    Neuromelanin-sensitive MRI has been reported to be used in the diagnosis of Parkinson's disease (PD), which results from loss of dopamine-producing cells in the substantia nigra pars compacta (SNc). In this study, we aimed to apply a 3D turbo field echo (TFE) sequence for neuromelanin-sensitive MRI and to evaluate the diagnostic performance of a semi-automated method for measurement of SNc volume in patients with PD. We examined 18 PD patients and 27 healthy volunteers (control subjects). A 3D TFE technique with off-resonance magnetization transfer pulse was used for neuromelanin-sensitive MRI on a 3T scanner. The SNc volume was semi-automatically measured using a region-growing technique at various thresholds (ranging from 1.66 to 2.48), with the signals measured relative to that for the superior cerebellar peduncle. Receiver operating characteristic (ROC) analysis was performed at all thresholds. Intra-rater reproducibility was evaluated by intraclass correlation coefficient (ICC). The average SNc volume in the PD group was significantly smaller than that in the control group at all the thresholds (P < 0.01, Student's t test). At higher thresholds (>2.0), the area under the ROC curve (Az) increased (0.88). In addition, we observed balanced sensitivity and specificity (0.83 and 0.85, respectively). At lower thresholds, sensitivity tended to increase but specificity decreased compared with that at higher thresholds. ICC was larger than 0.9 when the threshold was over 1.86. Our method can distinguish the PD group from the control group with high sensitivity and specificity, especially for early-stage PD. (orig.)
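
    A simplified sketch of the thresholding idea (not the authors' exact region-growing implementation): voxels whose signal exceeds threshold × mean(reference region) are kept, and the connected component containing a seed is measured. Array shapes, values and the voxel volume are assumptions for illustration.
```python
import numpy as np
from scipy import ndimage

def snc_volume(volume, reference_mask, seed, threshold=2.0, voxel_ml=0.001):
    ref_mean = volume[reference_mask].mean()          # e.g. superior cerebellar peduncle signal
    binary = volume > threshold * ref_mean            # relative-threshold mask
    labels, _ = ndimage.label(binary)                 # connected components
    seed_label = labels[seed]
    if seed_label == 0:
        return 0.0
    return float((labels == seed_label).sum()) * voxel_ml

# toy data: a bright blob in a dim background, reference region of moderate signal
vol = np.ones((40, 40, 40)) * 100.0
vol[18:23, 18:23, 18:23] = 260.0                      # "neuromelanin" blob
ref = np.zeros_like(vol, dtype=bool)
ref[5:10, 5:10, 5:10] = True
print(snc_volume(vol, ref, seed=(20, 20, 20)))        # volume in ml
```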

  15. De la manipulation des images 3D

    Directory of Open Access Journals (Sweden)

    Geneviève Pinçon

    2012-04-01

    While 3D technologies provide an accurate and relevant record of parietal (rock art) graphics, they also offer particularly interesting applications for their analysis. Through point-cloud processing and simulations, they allow a wide range of manipulations concerning both the observation and the study of parietal works. In particular, they permit a refined perception of their volumetry and become very useful shape-comparison tools for reconstructing parietal chronologies and for apprehending analogies between different sites. These analytical tools are illustrated here by the original work carried out on the parietal sculptures of the Roc-aux-Sorciers (Angles-sur-l'Anglin, Vienne) and Chaire-à-Calvin (Mouthiers-sur-Boëme, Charente) rock shelters.

  16. Automated curved planar reformation of 3D spine images

    Energy Technology Data Exchange (ETDEWEB)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo [University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, SI-1000 Ljubljana (Slovenia)

    2005-10-07

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient diagnostic information of adequate quality, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc.). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image-analysis-based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.
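
    An illustrative sketch of the resampling step behind curved planar reformation: a volume is sampled along a curve and a perpendicular in-plane axis using interpolation. The toy volume and the polynomial curve below are assumptions; they do not reproduce the paper's optimized spine-based coordinate system.
```python
import numpy as np
from scipy.ndimage import map_coordinates

vol = np.random.rand(64, 64, 64)                      # placeholder CT volume, indexed (z, y, x)

z = np.arange(64, dtype=float)                        # one sample position per slice
curve_y = 32 + 0.005 * (z - 32) ** 2                  # polynomial curve y(z) (e.g. spine course)
curve_x = np.full_like(z, 32.0)                       # curve x(z)

offsets = np.arange(-15, 16, dtype=float)             # in-plane axis across the curve
zz, oo = np.meshgrid(z, offsets, indexing="ij")
coords = np.stack([zz,                                # z coordinate of each sample
                   curve_y[:, None] + 0 * oo,         # follow the curve in y
                   curve_x[:, None] + oo])            # reformat across x
cpr_image = map_coordinates(vol, coords, order=1)     # 2D curved planar reformation
print(cpr_image.shape)                                # (64, 31)
```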

  17. 3D object-oriented image analysis in 3D geophysical modelling

    DEFF Research Database (Denmark)

    Fadel, I.; van der Meijde, M.; Kerle, N.

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D

  18. Imaging fault zones using 3D seismic image processing techniques

    Science.gov (United States)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural and significant uncertainty still exists as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors where concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve the signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we will show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only improve the geometrical interpretation of the faults but also begin to map both strain and damage through amplitude/phase properties of the seismic signal. This is done by quantifying and delineating the short-range anomalies on the intensity of reflector amplitudes
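
    The amplitude/phase attributes referred to above are commonly computed per trace from the analytic signal. The sketch below shows standard complex-trace attributes (instantaneous amplitude and phase) via the Hilbert transform; the synthetic trace is an illustrative assumption and the snippet is not tied to any particular interpretation package.
```python
import numpy as np
from scipy.signal import hilbert

def trace_attributes(trace):
    analytic = hilbert(trace)
    envelope = np.abs(analytic)              # instantaneous amplitude (reflection strength)
    phase = np.unwrap(np.angle(analytic))    # instantaneous phase
    return envelope, phase

t = np.linspace(0, 1, 500)
trace = np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 0.5) ** 2) / 0.01)   # toy wavelet
env, ph = trace_attributes(trace)
print(env.max(), ph[-1])
```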

  19. Handbook of 3D machine vision optical metrology and imaging

    CERN Document Server

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  20. Progress in 3D imaging and display by integral imaging

    Science.gov (United States)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic that is attracting important research efforts. As their main value, 3D monitors should provide the observers with different perspectives of a 3D scene by simply varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been the realization of a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have aimed to overcome some of the classical limitations of InI systems, like the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, or the limited range of viewing angles of InI monitors.

  1. Geometric Deformations Based on 3D Volume Morphing

    Institute of Scientific and Technical Information of China (English)

    JIN Xiaogang; WAN Huagen; PENG Qunsheng

    2001-01-01

    This paper presents a new geometric deformation method based on 3D volume morphing, using a new concept called directional polar coordinates. The user specifies the source control object and the destination control object, which act as the embedded spaces. The source and the destination control objects determine a 3D volume morphing which maps the space enclosed in the source control object to that of the destination control object. By embedding the object to be deformed into the source control object, the 3D volume morphing determines the deformed object automatically, without tedious movement of control points. Experiments show that this deformation model is efficient and intuitive, and it can achieve some deformation effects which are difficult to achieve with traditional methods.

  2. Shaping 3-D Volumes in Immersive Virtual Environments

    DEFF Research Database (Denmark)

    Stenholt, Rasmus

    Shaping 3-D volumes is an important part of many interactions in immersive virtual environments. The range of possible applications is wide. For instance, the ability to select objects in virtual environments is very often based on defining and controlling a selection volume. This is especially true if the intention is to select multiple objects. Another important application area is the manipulation of objects through the use of controllable handles, or widgets. Such widgets are often associated with a bounding volume around the object to be manipulated. Such techniques are both well...... of efficiently and precisely defining a 3-D box is a fundamental one to investigate. The first paper does this by analysing the practical task of defining a 3-D box as the equivalent task of defining its degrees-of-freedom. This analysis leads to the introduction of a new way of shaping a box from just three

  3. Dynamic 3D computed tomography scanner for vascular imaging

    Science.gov (United States)

    Lee, Mark K.; Holdsworth, David W.; Fenster, Aaron

    2000-04-01

    A 3D dynamic computed-tomography (CT) scanner was developed for imaging objects undergoing periodic motion. The scanner system has high spatial and sufficient temporal resolution to produce quantitative tomographic/volume images of objects such as excised arterial samples perfused under physiological pressure conditions, and enables measurements of the local dynamic elastic modulus (Edyn) of the arteries in the axial and longitudinal directions. The system comprised a high-resolution modified x-ray image intensifier (XRII) based computed tomographic system and a computer-controlled cardiac flow simulator. A standard NTSC CCD camera with a macro lens was coupled to the electro-optically zoomed XRII to acquire dynamic volumetric images. Through prospective cardiac gating and computer-synchronized control, a time-resolved sequence of 20 mm thick high-resolution volume images of porcine aortic specimens during one simulated cardiac cycle was obtained. Performance evaluation of the scanners illustrated that tomographic images can be obtained with resolution as high as 3.2 mm⁻¹ with only a 9% decrease in the resolution for objects moving at velocities of 1 cm/s in 2D mode, and static spatial resolution of 3.55 mm⁻¹ with only a 14% decrease in the resolution in 3D mode for objects moving at a velocity of 10 cm/s. Application of the system for imaging of intact excised arterial specimens under simulated physiological flow/pressure conditions enabled measurements of the Edyn of the arteries with a precision of +/- kPa for the 3D scanner. Evaluation of the Edyn in the axial and longitudinal direction produced values of 428 +/- 35 kPa and 728 +/- 71 kPa, demonstrating the isotropic and homogeneous viscoelastic nature of the vascular specimens. These values obtained from the Dynamic CT systems were not statistically different (p less than 0.05) from the values obtained by standard uniaxial tensile testing and volumetric measurements.

  4. 3D/2D Registration of medical images

    OpenAIRE

    Tomaževič, D.

    2008-01-01

    The topic of this doctoral dissertation is registration of 3D medical images to corresponding projective 2D images, referred to as 3D/2D registration. There are numerous possible applications of 3D/2D registration in image-aided diagnosis and treatment. In most of the applications, 3D/2D registration provides the location and orientation of the structures in a preoperative 3D CT or MR image with respect to intraoperative 2D X-ray images. The proposed doctoral dissertation tries to find origin...

  5. Analysis of information for cerebrovascular disorders obtained by 3D MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yoshikawa, Kohki [Tokyo Univ. (Japan). Inst. of Medical Science; Yoshioka, Naoki; Watanabe, Fumio; Shiono, Takahiro; Sugishita, Morihiro; Umino, Kazunori

    1995-12-01

    Recently, it has become easy to analyze information obtained by 3D MR imaging owing to the remarkable progress of fast MR imaging techniques and analysis tools. Six patients suffering from aphasia (4 cerebral infarctions and 2 hemorrhages) underwent 3D MR imaging (3D FLASH-TR/TE/flip angle; 20-50 msec/6-10 msec/20-30 degrees), and their volume information was analyzed by multiple projection reconstruction (MPR), surface rendering 3D reconstruction, and volume rendering 3D reconstruction using Volume Design PRO (Medical Design Co., Ltd.). Four of them were clinically diagnosed with Broca's aphasia, and their lesions could be detected around the cortices of the left inferior frontal gyrus. The other 2 patients were diagnosed with Wernicke's aphasia, and the lesions could be detected around the cortices of the left supramarginal gyrus. This technique for 3D volume analysis would provide highly precise localization information about cerebral cortical lesions. (author).

  6. Super deep 3D images from a 3D omnifocus video camera.

    Science.gov (United States)

    Iizuka, Keigo

    2012-02-20

    When using stereographic image pairs to create three-dimensional (3D) images, a deep depth of field in the original scene enhances the depth perception in the 3D image. The omnifocus video camera has no depth of field limitations and produces images that are in focus throughout. By installing an attachment on the omnifocus video camera, real-time super deep stereoscopic pairs of video images were obtained. The deeper depth of field creates a larger perspective image shift, which makes greater demands on the binocular fusion of human vision. A means of reducing the perspective shift without harming the depth of field was found.

  7. A system for finding a 3D target without a 3D image

    Science.gov (United States)

    West, Jay B.; Maurer, Calvin R., Jr.

    2008-03-01

    We present here a framework for a system that tracks one or more 3D anatomical targets without the need for a preoperative 3D image. Multiple 2D projection images are taken using a tracked, calibrated fluoroscope. The user manually locates each target on each of the fluoroscopic views. A least-squares minimization algorithm triangulates the best-fit position of each target in the 3D space of the tracking system: using the known projection matrices from 3D space into image space, we use matrix minimization to find the 3D position that projects closest to the located target positions in the 2D images. A tracked endoscope, whose projection geometry has been pre-calibrated, is then introduced to the operating field. Because the position of the targets in the tracking space is known, a rendering of the targets may be projected onto the endoscope view, thus allowing the endoscope to be easily brought into the target vicinity even when the endoscope field of view is blocked, e.g. by blood or tissue. An example application for such a device is trauma surgery, e.g., removal of a foreign object. Time, scheduling considerations and concern about excessive radiation exposure may prohibit the acquisition of a 3D image, such as a CT scan, which is required for traditional image guidance systems; it is however advantageous to have 3D information about the target locations available, which is not possible using fluoroscopic guidance alone.
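
    A hedged sketch of the triangulation step described above: given the 3×4 projection matrices of several calibrated views and the 2D target locations picked in each view, the 3D point whose projections best fit the picks can be found by linear (DLT) least squares via SVD. The projection matrices and picks below are synthetic placeholders, not the system's calibration data.
```python
import numpy as np

def triangulate(proj_mats, points_2d):
    """Best-fit 3D point from 2D picks in multiple calibrated views (linear DLT)."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])           # u * (p3 . X) - p1 . X = 0
        rows.append(v * P[2] - P[1])           # v * (p3 . X) - p2 . X = 0
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]                        # de-homogenize

# two synthetic views of the point (10, 20, 30)
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-5.0], [0.0], [0.0]])])
X_true = np.array([10.0, 20.0, 30.0, 1.0])
picks = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate([P1, P2], picks))            # ~ [10, 20, 30]
```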

  8. 3D Image Synthesis for B—Reps Objects

    Institute of Scientific and Technical Information of China (English)

    黄正东; 彭群生; 等

    1991-01-01

    This paper presents a new algorithm for generating 3D images of B-reps objects with trimmed surface boundaries. The 3D image is a discrete voxel-map representation within a Cubic Frame Buffer (CFB). The definitions of 3D images for curve, surface and solid objects are introduced, which imply the connectivity and fidelity requirements. An Adaptive Forward Differencing matrix (AFD-matrix) for 1D-3D manifolds in 3D space is developed. By setting rules to update the AFD-matrix, the forward difference direction and step size can be adjusted. Finally, an efficient algorithm is presented based on the AFD-matrix concept for converting the object in 3D space to a 3D image in 3D discrete space.

  9. IMAGE SELECTION FOR 3D MEASUREMENT BASED ON NETWORK DESIGN

    Directory of Open Access Journals (Sweden)

    T. Fuse

    2015-05-01

    3D models have come into wide use with the spread of freely available software. At the same time, enormous numbers of images can be acquired easily, and such images have recently been used to create 3D models. However, creating 3D models from a huge number of images takes a lot of time and effort, so efficient 3D measurement is required; the accuracy of the measurement must also be maintained. This paper develops an image selection method based on network design, in the sense of surveying network construction. The proposed method uses an image connectivity graph, so that the image selection problem can be regarded as a combinatorial optimization problem to which the graph-cuts technique can be applied. Additionally, in the process of 3D reconstruction, low-quality images and highly similar images are extracted and removed. The experiments confirm the significance of the proposed method and imply its potential for efficient and accurate 3D measurement.

  10. Glasses-free 3D viewing systems for medical imaging

    Science.gov (United States)

    Magalhães, Daniel S. F.; Serra, Rolando L.; Vannucci, André L.; Moreno, Alfredo B.; Li, Li M.

    2012-04-01

    In this work we show two different glasses-free 3D viewing systems for medical imaging: a stereoscopic system that employs a vertically dispersive holographic screen (VDHS) and a multi-autostereoscopic system, both used to produce 3D MRI/CT images. We describe how to obtain a VDHS in holographic plates optimized for this application, with field of view of 7 cm to each eye and focal length of 25 cm, showing images done with the system. We also describe a multi-autostereoscopic system, presenting how it can generate 3D medical imaging from viewpoints of a MRI or CT image, showing results of a 3D angioresonance image.

  11. 3D reconstruction of multiple stained histology images

    Directory of Open Access Journals (Sweden)

    Yi Song

    2013-01-01

    Context: Three-dimensional (3D) tissue reconstruction from histology images with different stains allows the spatial alignment of structural and functional elements highlighted by different stains for quantitative study of many physiological and pathological phenomena. This has significant potential to improve the understanding of the growth patterns and the spatial arrangement of diseased cells, and to enhance the study of the biomechanical behavior of tissue structures towards better treatments (e.g. tissue-engineering applications). Methods: This paper evaluates three strategies for 3D reconstruction from sets of two-dimensional (2D) histological sections with different stains, by combining methods of 2D multi-stain registration and 3D volumetric reconstruction from same-stain sections. Setting and Design: The different strategies have been evaluated on two liver specimens (80 sections in total) stained with Hematoxylin and Eosin (H&E), Sirius Red, and Cytokeratin (CK) 7. Results and Conclusion: A strategy of using multi-stain registration to align images of a second stain to a volume reconstructed by same-stain registration results in the lowest overall error, although an interlaced image registration approach may be more robust to poor section quality.

  12. Unsupervised fuzzy segmentation of 3D magnetic resonance brain images

    Science.gov (United States)

    Velthuizen, Robert P.; Hall, Lawrence O.; Clarke, Laurence P.; Bensaid, Amine M.; Arrington, J. A.; Silbiger, Martin L.

    1993-07-01

    Unsupervised fuzzy methods are proposed for segmentation of 3D Magnetic Resonance images of the brain. Fuzzy c-means (FCM) has shown promising results for segmentation of single slices. FCM has been investigated for volume segmentations, both by combining results of single slices and by segmenting the full volume. Different strategies and initializations have been tried. In particular, two approaches have been used: (1) a method by which, iteratively, the furthest sample is split off to form a new cluster center, and (2) the traditional FCM in which the membership grade matrix is initialized in some way. Results have been compared with volume segmentations by k-means and with two supervised methods, k-nearest neighbors and region growing. Results of individual segmentations are presented as well as comparisons on the application of the different methods to a number of tumor patient data sets.
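
    A minimal fuzzy c-means (FCM) sketch on voxel intensities, the core routine behind the unsupervised segmentation discussed above; the initialization strategies, spatial features and the furthest-sample splitting heuristic from the paper are omitted, and the synthetic intensities are an assumption.
```python
import numpy as np

def fcm(x, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Fuzzy c-means on 1-D intensities; returns cluster centers and memberships."""
    rng = np.random.default_rng(seed)
    x = x.reshape(-1, 1).astype(float)
    centers = x[rng.choice(len(x), n_clusters, replace=False)]        # (C, 1) initial centers
    for _ in range(n_iter):
        d = np.abs(x - centers.T) + 1e-12                             # (N, C) distances
        u = 1.0 / (d ** (2 / (m - 1)) *
                   np.sum(d ** (-2 / (m - 1)), axis=1, keepdims=True))  # membership update
        um = u ** m
        centers = (um.T @ x) / um.sum(axis=0)[:, None]                # weighted cluster means
    return centers.ravel(), u

intensities = np.concatenate([np.random.normal(mu, 5, 500) for mu in (40, 90, 150)])
centers, memberships = fcm(intensities)
print(np.sort(centers))          # roughly [40, 90, 150]
```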

  13. Automatic 2D-to-3D image conversion using 3D examples from the internet

    Science.gov (United States)

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. This, to a degree, emulates the results one would expect from the use of an extremely large 3D

  14. Effective classification of 3D image data using partitioning methods

    Science.gov (United States)

    Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran

    2002-03-01

    We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.

  15. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Morimoto, A.K.; Bow, W.J.; Strong, D.S. [and others

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  16. Flat-Panel Detector-Based Volume Computed Tomography: A Novel 3D Imaging Technique to Monitor Osteolytic Bone Lesions in a Mouse Tumor Metastasis Model

    Directory of Open Access Journals (Sweden)

    Jeannine Missbach-Guentner

    2007-09-01

    Full Text Available Skeletal metastasis is an important cause of mortality in patients with breast cancer. Hence, animal models, in combination with various imaging techniques, are in high demand for preclinical assessment of novel therapies. We evaluated the applicability of flat-panel volume computed tomography (fpVCT to noninvasive detection of osteolytic bone metastases that develop in severe immunodeficient mice after intracardial injection of MDA-MB-231 breast cancer cells. A single fpVCT scan at 200-wm isotropic resolution was employed to detect osteolysis within the entire skeleton. Osteolytic lesions identified by fpVCT correlated with Faxitron X-ray analysis and were subsequently confirmed by histopathological examination. Isotropic three-dimensional image data sets obtained by fpVCT were the basis for the precise visualization of the extent of the lesion within the cortical bone and for the measurement of bone loss. Furthermore, fpVCT imaging allows continuous monitoring of growth kinetics for each metastatic site and visualization of lesions in more complex regions of the skeleton, such as the skull. Our findings suggest that fpVCT is a powerful tool that can be used to monitor the occurrence and progression of osteolytic lesions in vivo and can be further developed to monitor responses to antimetastatic therapies over the course of the disease.

  17. FELIX 3D display: an interactive tool for volumetric imaging

    Science.gov (United States)

    Langhans, Knut; Bahr, Detlef; Bezecny, Daniel; Homann, Dennis; Oltmann, Klaas; Oltmann, Krischan; Guill, Christian; Rieper, Elisabeth; Ardey, Goetz

    2002-05-01

    The FELIX 3D display belongs to the class of volumetric displays using the swept volume technique. It is designed to display images created by standard CAD applications, which can be easily imported and interactively transformed in real-time by the FELIX control software. The images are drawn on a spinning screen by acousto-optic, galvanometric or polygon mirror deflection units with integrated lasers and a color mixer. The modular design of the display enables the user to operate with several equal or different projection units in parallel and to use appropriate screens for the specific purpose. The FELIX 3D display is a compact, light, extensible and easy to transport system. It mainly consists of inexpensive standard, off-the-shelf components for an easy implementation. This setup makes it a powerful and flexible tool to keep track with the rapid technological progress of today. Potential applications include imaging in the fields of entertainment, air traffic control, medical imaging, computer aided design as well as scientific data visualization.

  18. Dynamic contrast-enhanced 3D photoacoustic imaging

    Science.gov (United States)

    Wong, Philip; Kosik, Ivan; Carson, Jeffrey J. L.

    2013-03-01

    Photoacoustic imaging (PAI) is a hybrid imaging modality that integrates the strengths from both optical imaging and acoustic imaging while simultaneously overcoming many of their respective weaknesses. In previous work, we reported on a real-time 3D PAI system comprised of a 32-element hemispherical array of transducers. Using the system, we demonstrated the ability to capture photoacoustic data, reconstruct a 3D photoacoustic image, and display select slices of the 3D image every 1.4 s, where each 3D image resulted from a single laser pulse. The present study aimed to exploit the rapid imaging speed of an upgraded 3D PAI system by evaluating its ability to perform dynamic contrast-enhanced imaging. The contrast dynamics can provide rich datasets that contain insight into perfusion, pharmacokinetics and physiology. We captured a series of 3D PA images of a flow phantom before and during injection of piglet and rabbit blood. Principal component analysis was utilized to classify the data according to its spatiotemporal information. The results suggested that this technique can be used to separate a sequence of 3D PA images into a series of images representative of main features according to spatiotemporal flow dynamics.
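
    A sketch of the spatiotemporal analysis mentioned above: principal component analysis of a sequence of reconstructed 3D frames, with each frame flattened to a vector. The random frames stand in for real contrast dynamics, and the decomposition shown is a generic PCA, not the study's specific classification.
```python
import numpy as np

frames = np.random.rand(40, 32, 32, 32)                  # 40 time points of a 3D volume (placeholder)
X = frames.reshape(frames.shape[0], -1)                  # (time, voxels) data matrix
X = X - X.mean(axis=0)                                   # remove the temporal mean

U, S, Vt = np.linalg.svd(X, full_matrices=False)
explained = S**2 / np.sum(S**2)                          # fraction of variance per component
temporal_scores = U * S                                  # time course of each component
spatial_modes = Vt.reshape(-1, 32, 32, 32)               # corresponding spatial patterns
print(explained[:3], temporal_scores.shape, spatial_modes.shape)
```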

  19. Light field display and 3D image reconstruction

    Science.gov (United States)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular these days. With light field optics, or the light field thesis, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, we can say that 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on your favorite point after taking a picture), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper, I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operation is performed not in the Fourier domain but in the real domain. Our 3D display system is then characterized by a few features: the reconstructed image is of finer resolution than the density of the arrayed lenses, and it is not necessary to adjust the lens array plate to the flat display on which the light field data is displayed.

  20. Full Parallax Integral 3D Display and Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Byung-Gook Lee

    2015-02-01

    Purpose – Full parallax integral 3D display is one of the promising future displays that provide different perspectives according to viewing direction. In this paper, the authors review the recent integral 3D display and image processing techniques for improving the performance, such as viewing resolution, viewing angle, etc. Design/methodology/approach – Firstly, to improve the viewing resolution of 3D images in the integral imaging display with lenslet array, the authors present 3D integral imaging display with focused mode using the time-multiplexed display. Compared with the original integral imaging with focused mode, the authors use the electrical masks and the corresponding elemental image set. In this system, the authors can generate the resolution-improved 3D images with the n×n pixels from each lenslet by using n×n time-multiplexed display. Secondly, a new image processing technique related to the elemental image generation for 3D scenes is presented. With the information provided by the Kinect device, the array of elemental images for an integral imaging display is generated. Findings – From their first work, the authors improved the resolution of 3D images by using the time-multiplexing technique through the demonstration of the 24 inch integral imaging system. Authors’ method can be applied to a practical application. Next, the proposed method with the Kinect device can gain a competitive advantage over other methods for the capture of integral images of big 3D scenes. The main advantage of fusing the Kinect and the integral imaging concepts is the acquisition speed, and the small amount of handled data. Originality / Value – In this paper, the authors review their recent methods related to integral 3D display and image processing technique. Research type – general review.

  1. 3D Imaging with Structured Illumination for Advanced Security Applications

    Energy Technology Data Exchange (ETDEWEB)

    Birch, Gabriel Carisle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dagel, Amber Lynn [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kast, Brian A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Collin S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fail to provide the target distance and three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and software modifications possible to maximize information-gathering capability are discussed.

  2. 3D passive integral imaging using compressive sensing.

    Science.gov (United States)

    Cho, Myungjin; Mahalanobis, Abhijit; Javidi, Bahram

    2012-11-19

    Passive 3D sensing using integral imaging techniques has been well studied in the literature. It has been shown that a scene can be reconstructed at various depths using several 2D elemental images. This provides the ability to reconstruct objects in the presence of occlusions, and passively estimate their 3D profile. However, high resolution 2D elemental images are required for high quality 3D reconstruction. Compressive Sensing (CS) provides a way to dramatically reduce the amount of data that needs to be collected to form the elemental images, which in turn can reduce the storage and bandwidth requirements. In this paper, we explore the effects of CS in acquisition of the elemental images, and ultimately on passive 3D scene reconstruction and object recognition. Our experiments show that the performance of passive 3D sensing systems remains robust even when elemental images are recovered from very few compressive measurements.
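
    An illustrative computational reconstruction for integral imaging (without the compressive-sensing acquisition stage): the elemental images are back-projected with a depth-dependent shift and averaged, which brings objects at that depth into focus. The grid size, pitch and depth ratio are illustrative assumptions, not the authors' experimental parameters.
```python
import numpy as np

def reconstruct_plane(elemental, pitch_px, depth_ratio):
    """elemental: (rows, cols, H, W) array of elemental images.
    depth_ratio: distance-dependent shift factor (e.g. g/z in a lenslet setup)."""
    rows, cols, H, W = elemental.shape
    recon = np.zeros((H, W))
    for r in range(rows):
        for c in range(cols):
            dy = int(round((r - rows // 2) * pitch_px * depth_ratio))
            dx = int(round((c - cols // 2) * pitch_px * depth_ratio))
            shifted = np.roll(np.roll(elemental[r, c], dy, axis=0), dx, axis=1)
            recon += shifted
    return recon / (rows * cols)               # average of the shifted elemental images

elemental = np.random.rand(5, 5, 64, 64)       # 5x5 synthetic elemental images
plane = reconstruct_plane(elemental, pitch_px=10, depth_ratio=0.3)
print(plane.shape)
```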

  3. Intensity-based image registration for 3D spatial compounding using a freehand 3D ultrasound system

    Science.gov (United States)

    Pagoulatos, Niko; Haynor, David R.; Kim, Yongmin

    2002-04-01

    3D spatial compounding involves the combination of two or more 3D ultrasound (US) data sets, acquired under different insonation angles and windows, to form a higher quality 3D US data set. An important requirement for this method to succeed is the accurate registration between the US images used to form the final compounded image. We have developed a new automatic method for rigid and deformable registration of 3D US data sets, acquired using a freehand 3D US system. Deformation is provided by using a 3D thin-plate spline (TPS). Our method is fundamentally different from previous ones in that the acquired scattered 2D US slices are registered and compounded directly into the 3D US volume. Our approach has several benefits over the traditional registration and spatial compounding methods: (i) we only perform one 3D US reconstruction, for the first acquired data set, and therefore save the computation time required to reconstruct subsequent acquired scans, (ii) for our registration we use (except for the first scan) the acquired high-resolution 2D US images rather than the 3D US reconstruction data, which are of lower quality due to the interpolation and potential subsampling associated with 3D reconstruction, and (iii) the scans performed after the first one are not required to follow the typical 3D US scanning protocol, where a large number of dense slices have to be acquired; slices can be acquired in any fashion in areas where compounding is desired. We show that by taking advantage of the similar information contained in adjacent acquired 2D US slices, we can reduce the computation time of linear and nonlinear registrations by a factor of more than 7:1, without compromising registration accuracy. Furthermore, we implemented an adaptive approximation to the 3D TPS with local bilinear transformations, allowing additional reduction of the nonlinear registration computation time by a factor of approximately 3.5. Our results are based on a commercially available

  4. 3D Objects Reconstruction from Image Data

    OpenAIRE

    Cír, Filip

    2008-01-01

    This thesis deals with 3D reconstruction from image data. Possibilities of and approaches to optical scanning are described. The handheld optical 3D scanner consists of a camera and a line laser source, which is mounted at a certain angle relative to the camera. A suitable pad with markers is designed, and an algorithm for their real-time detection is described. Once the markers are detected, the position and orientation of the camera can be computed. Finally, the detection of the laser and the procedure for computing points on the object surface using triangulation are described.

  5. 3D augmented reality with integral imaging display

    Science.gov (United States)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  6. 3-D Image Analysis of Fluorescent Drug Binding

    Directory of Open Access Journals (Sweden)

    M. Raquel Miquel

    2005-01-01

    Full Text Available Fluorescent ligands provide the means of studying receptors in whole tissues using confocal laser scanning microscopy and have advantages over antibody- or non-fluorescence-based methods. Confocal microscopy provides large volumes of images to be measured. Histogram analysis of 3-D image volumes is proposed as a method of graphically displaying large amounts of volumetric image data so that they can be quickly analyzed and compared. The fluorescent ligand BODIPY FL-prazosin (QAPB) was used in mouse aorta. Histogram analysis reports the amount of ligand-receptor binding under different conditions, and the technique is sensitive enough to detect changes in receptor availability after antagonist incubation or genetic manipulations. QAPB binding was concentration dependent, causing concentration-related rightward shifts in the histogram. In the presence of 10 μM phenoxybenzamine (blocking agent), the QAPB (50 nM) histogram overlaps the autofluorescence curve. The histogram obtained for the α1D-knockout aorta lay to the left of those of the control and α1B-knockout aorta, indicating a reduction in α1D receptors. We have shown, for the first time, that it is possible to graphically display binding of a fluorescent drug to a biological tissue. Although our application is specific to adrenergic receptors, the general method could be applied to any volumetric, fluorescence-image-based assay.
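
    The histogram comparison described above is straightforward to reproduce on any volumetric fluorescence data set. The following sketch (illustrative only, not the authors' code; the bin count, intensity range and synthetic volumes are assumptions) builds normalized intensity histograms of two 3D image volumes with NumPy and summarizes the shift between them:

```python
import numpy as np

def volume_histogram(volume, bins=256, value_range=(0, 4095)):
    """Intensity histogram of a 3D image volume, normalized to voxel fraction."""
    counts, edges = np.histogram(volume.ravel(), bins=bins, range=value_range)
    return counts / volume.size, edges

# Two synthetic volumes standing in for control and antagonist-treated tissue.
rng = np.random.default_rng(0)
control = rng.normal(800, 150, size=(64, 256, 256)).clip(0, 4095)
blocked = rng.normal(400, 150, size=(64, 256, 256)).clip(0, 4095)

h_ctrl, edges = volume_histogram(control)
h_blk, _ = volume_histogram(blocked)

# A rightward shift of the histogram indicates more ligand-receptor binding;
# here the shift is summarized by the median intensity of each volume.
print("median control:", np.median(control), " median blocked:", np.median(blocked))
```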

  7. Advanced 3-D Ultrasound Imaging: 3-D Synthetic Aperture Imaging using Fully Addressed and Row-Column Addressed 2-D Transducer Arrays

    DEFF Research Database (Denmark)

    Bouzari, Hamed

    Compared with conventional 2-D ultrasound imaging, real-time 3-D (or 4-D) ultrasound imaging has several advantages, resulting in a significant progress in the ultrasound imaging instrumentation over the past decade. Viewing the patient’s anatomy as a volume helps physicians to comprehend the important diagnostic information in a noninvasive manner. Diagnostic and therapeutic decisions often require accurate estimates of e.g., organ, cyst, or tumor volumes. 3-D ultrasound imaging can provide these measurements without relying on the geometrical assumptions and operator-dependent skills involved... Companies have produced ultrasound scanners using 2-D transducer arrays with enough transducer elements to produce high quality 3-D images. Because of the large matrix transducers with integrated custom electronics, these systems are extremely expensive. The relatively low price of ultrasound scanners...

  8. Calibration of Images with 3D range scanner data

    OpenAIRE

    Adalid López, Víctor Javier

    2009-01-01

    Project carried out in collaboration with EPFL. 3D laser range scanners are used for the extraction of 3D data from a scene. The main application areas are architecture, archeology and city planning. Though the raw scanner data have grey-scale values, the 3D data can be merged with colour camera image values to obtain a textured 3D model of the scene. These devices are also able to produce a reliable 3D copy of objects with a high level of accuracy. Therefore, the scanned scenes can be use...

  9. 3D Ground Penetrating Imaging Radar

    OpenAIRE

    ECT Team, Purdue

    2007-01-01

    GPiR (ground-penetrating imaging radar) is a new technology for mapping the shallow subsurface, including society’s underground infrastructure. Applications for this technology include efficient and precise mapping of buried utilities on a large scale.

  10. Compression of 3D integral images using wavelet decomposition

    Science.gov (United States)

    Mazri, Meriem; Aggoun, Amar

    2003-06-01

    This paper presents a wavelet-based lossy compression technique for unidirectional 3D integral images (UII). The method requires the extraction of different viewpoint images from the integral image. A single viewpoint image is constructed by extracting one pixel from each microlens, then each viewpoint image is decomposed using a Two Dimensional Discrete Wavelet Transform (2D-DWT). The resulting array of coefficients contains several frequency bands. The lower frequency bands of the viewpoint images are assembled and compressed using a 3 Dimensional Discrete Cosine Transform (3D-DCT) followed by Huffman coding. This will achieve decorrelation within and between 2D low frequency bands from the different viewpoint images. The remaining higher frequency bands are Arithmetic coded. After decoding and decompression of the viewpoint images using an inverse 3D-DCT and an inverse 2D-DWT, each pixel from every reconstructed viewpoint image is put back into its original position within the microlens to reconstruct the whole 3D integral image. Simulations were performed on a set of four different grey level 3D UII using a uniform scalar quantizer with deadzone. The results for the average of the four UII intensity distributions are presented and compared with previous use of 3D-DCT scheme. It was found that the algorithm achieves better rate-distortion performance, with respect to compression ratio and image quality at very low bit rates.
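
    The first two stages of the pipeline, extraction of viewpoint images from the integral image and 2D wavelet decomposition of each viewpoint, can be sketched as follows. This is a simplified illustration that assumes a square lenslet pitch and omits the 3D-DCT, quantization and entropy-coding stages described above; the pitch value and wavelet choice are placeholders:

```python
import numpy as np
import pywt

def extract_viewpoints(integral_img, pitch):
    """Split an integral image into pitch x pitch viewpoint images by taking the
    pixel at the same offset under every microlens."""
    h, w = integral_img.shape
    h, w = h - h % pitch, w - w % pitch            # crop to a whole number of lenses
    lenses = integral_img[:h, :w].reshape(h // pitch, pitch, w // pitch, pitch)
    # Reorder to (offset_y, offset_x, lens_y, lens_x): one image per pixel offset.
    return lenses.transpose(1, 3, 0, 2)

integral_img = np.random.rand(512, 512)            # stand-in for a grey-level integral image
views = extract_viewpoints(integral_img, pitch=8)

# 2D DWT of each viewpoint image; the low-frequency bands would then go to the
# 3D-DCT/Huffman stage and the high-frequency bands to the arithmetic coder.
low_bands = []
for vy in range(views.shape[0]):
    for vx in range(views.shape[1]):
        cA, (cH, cV, cD) = pywt.dwt2(views[vy, vx], "bior4.4")
        low_bands.append(cA)
low_stack = np.stack(low_bands)                     # input to the 3D transform stage
print(low_stack.shape)
```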

  11. Highway 3D model from image and lidar data

    Science.gov (United States)

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3-D model construction developed based on feature extraction from highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant objects (such as signs and building fronts) in the roadside for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  12. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Energy Technology Data Exchange (ETDEWEB)

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.
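
    The matching stage, which records the strongest degree of match over all object orientations at each pixel, can be illustrated with a rotation-swept correlation search. The sketch below is a generic stand-in rather than the report's actual matcher, and it uses a simplified correlation normalization:

```python
import numpy as np
from scipy.ndimage import rotate
from scipy.signal import fftconvolve

def simple_match(image, template):
    """Simplified correlation score: zero-mean template correlated with the image,
    normalized by local image energy and template norm (values roughly in [-1, 1])."""
    t = template - template.mean()
    num = fftconvolve(image, t[::-1, ::-1], mode="same")
    local_energy = fftconvolve(image ** 2, np.ones_like(t), mode="same")
    denom = np.sqrt(np.maximum(local_energy, 0.0)) * np.linalg.norm(t)
    return num / np.maximum(denom, 1e-9)

def best_match_over_orientations(image, template, angles=range(0, 180, 15)):
    """Strongest degree of match over all tested object orientations, per pixel."""
    best = np.full(image.shape, -np.inf)
    for a in angles:
        rotated = rotate(template, a, reshape=False, order=1)
        best = np.maximum(best, simple_match(image, rotated))
    return best

image = np.random.rand(256, 256)
template = np.zeros((21, 21)); template[8:13, :] = 1.0   # toy elongated object footprint
score = best_match_over_orientations(image, template)
print("peak degree of match:", score.max())
```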

  13. 3D laser imaging for concealed object identification

    Science.gov (United States)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging exploits the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance and robotic vision because of its ability to detect and recognize objects. In this paper, we present 3D laser imaging for concealed object identification. The objective of this new 3D laser imaging is to provide the user with a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and of low representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, and from experimental results. We show the global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. In this paper, we present examples of reconstruction and completion of three-dimensional images and we analyse the different parameters of the identification process such as resolution, the camouflage scenario, noise impact and the degree of lacunarity.

  14. Automated 3D renal segmentation based on image partitioning

    Science.gov (United States)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context, and still comes at vast user time expense. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data with 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan, and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
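
    The volume-based similarity measures quoted above are simple to compute once an automated segmentation and a hand-segmented gold standard are available as binary masks; a minimal sketch (mask names and the toy volumes are illustrative):

```python
import numpy as np

def overlap_measures(auto_mask, gold_mask):
    """Dice, Jaccard, true positive volume fraction and relative volume difference
    between an automated and a gold-standard 3D binary segmentation."""
    a, g = auto_mask.astype(bool), gold_mask.astype(bool)
    inter = np.logical_and(a, g).sum()
    union = np.logical_or(a, g).sum()
    dice = 2.0 * inter / (a.sum() + g.sum())
    jaccard = inter / union
    tpvf = inter / g.sum()                      # fraction of gold-standard volume recovered
    rvd = (a.sum() - g.sum()) / g.sum()         # relative volume difference
    return dice, jaccard, tpvf, rvd

# Toy example: two overlapping "kidneys" in a small volume.
gold = np.zeros((50, 64, 64), dtype=bool); gold[10:40, 10:40, 10:40] = True
auto = np.zeros((50, 64, 64), dtype=bool); auto[12:42, 12:42, 12:42] = True
print(overlap_measures(auto, gold))
```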

  15. Acoustic 3D imaging of dental structures

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, D.K. [Lawrence Livermore National Lab., CA (United States); Hume, W.R. [California Univ., Los Angeles, CA (United States); Douglass, G.D. [California Univ., San Francisco, CA (United States)

    1997-02-01

    Our goals for the first year of this three-dimensional electrodynamic imaging project were to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling code. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  16. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    Energy Technology Data Exchange (ETDEWEB)

    Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van [Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Punt, Mark; Aben, Jean-Paul [Pie Medical Imaging, 6227 AJ Maastricht (Netherlands); Schultz, Carl [Department of Cardiology, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Niessen, Wiro [Quantitative Imaging Group, Faculty of Applied Sciences, Delft University of Technology, 2628 CJ Delft, The Netherlands and Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands)

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.

  17. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    Science.gov (United States)

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ, and can be transformed to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity and quantitative evaluation of 3D image's geometric accuracy have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation on the 3D image rendering performance with 2560×1600 elemental image resolution shows the rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after the calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of the image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system usability.

  18. Reconstruction of High Resolution 3D Objects from Incomplete Images and 3D Information

    Directory of Open Access Journals (Sweden)

    Alexander Pacheco

    2014-05-01

    Full Text Available To this day, digital object reconstruction is a quite complex area that requires many techniques and novel approaches, in which high-resolution 3D objects present one of the biggest challenges. There are mainly two kinds of methods that can be used to reconstruct high-resolution objects and images: passive methods and active methods. These methods depend on the type of information available as input for modeling 3D objects. Passive methods use information contained in the images, while active methods make use of controlled light sources, such as lasers. The reconstruction of 3D objects is quite complex and there is no unique solution. The use of specific methodologies for the reconstruction of certain objects, such as human faces, molecular structures, etc., is also very common. This paper proposes a novel hybrid methodology, composed of 10 phases, that combines active and passive methods, using images and a laser in order to supplement the missing information and obtain better results in 3D object reconstruction. Finally, the proposed methodology proved its efficiency on two topologically complex objects.

  19. 3D Motion Parameters Determination Based on Binocular Sequence Images

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Exactly capturing three-dimensional (3D) motion information of an object is an essential and important task in computer vision, and is also one of the most difficult problems. In this paper, a binocular vision system and a method for determining the 3D motion parameters of an object from binocular sequence images are introduced. The main steps include camera calibration, the matching of motion and stereo images, 3D feature point correspondences and resolving the motion parameters. Finally, experimental results of acquiring the motion parameters of objects moving with uniform velocity and uniform acceleration along a straight line, based on real binocular image sequences processed with the described method, are presented.

  20. 3D Shape Indexing and Retrieval Using Characteristics level images

    Directory of Open Access Journals (Sweden)

    Abdelghni Lakehal

    2012-05-01

    Full Text Available In this paper, we propose an improved version of the descriptor that we proposed previously. The descriptor is based on a set of binary images extracted from the 3D model, called level images and noted LI. The set LI is often bulky, which is why we introduce the X-means technique to reduce its size instead of the K-means used in the old version. A 2D binary image descriptor is introduced to extract the descriptor vectors of the 3D model. For a comparative study of the two versions of the descriptor, we used the National Taiwan University (NTU) database of 3D objects.

  1. Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization

    Science.gov (United States)

    2014-05-01

    Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization. David N. Ford, 2014. Manufacturing (3D printing). Research context problem: learning curve savings forecasted in the SHIPMAIN maintenance initiative have not materialized.

  2. Preliminary examples of 3D vector flow imaging

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes; Stuart, Matthias Bo; Tomov, Borislav Gueorguiev

    2013-01-01

    This paper presents 3D vector flow images obtained using the 3D Transverse Oscillation (TO) method. The method employs a 2D transducer and estimates the three velocity components simultaneously, which is important for visualizing complex flow patterns. Data are acquired using the experimental ult...

  3. 3D quantitative phase imaging of neural networks using WDT

    Science.gov (United States)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  4. New approach to the perception of 3D shape based on veridicality, complexity, symmetry and volume.

    Science.gov (United States)

    Pizlo, Zygmunt; Sawada, Tadamasa; Li, Yunfeng; Kropatsch, Walter G; Steinman, Robert M

    2010-01-01

    This paper reviews recent progress towards understanding 3D shape perception made possible by appreciating the significant role that veridicality and complexity play in the natural visual environment. The ability to see objects as they really are "out there" is derived from the complexity inherent in the 3D object's shape. The importance of both veridicality and complexity was ignored in most prior research. Appreciating their importance made it possible to devise a computational model that recovers the 3D shape of an object from only one of its 2D images. This model uses a simplicity principle consisting of only four a priori constraints representing properties of 3D shapes, primarily their symmetry and volume. The model recovers 3D shapes from a single 2D image as well as, and sometimes even better than, a human being. In the rare recoveries in which errors are observed, the errors made by the model and by human subjects are very similar. The model makes no use of depth, surfaces or learning. Recent elaborations of this model include: (i) the recovery of the shapes of natural objects, including human and animal bodies with limbs in varying positions; (ii) providing the model with two input images that allowed it to achieve virtually perfect shape constancy from almost all viewing directions. The review concludes with a comparison of some of the highlights of our novel, successful approach to the recovery of 3D shape from a 2D image with prior, less successful approaches.

  5. Myocardial strains from 3D displacement encoded magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Kindberg Katarina

    2012-04-01

    Full Text Available Abstract Background The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), makes detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and by resolving transmural strain variations. Conclusions Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strain values agree with previously reported myocardial strains in normal human hearts.
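
    The core idea, fitting a local polynomial model to the measured displacement field and differentiating it to obtain strain, can be sketched for the first-order (affine) case as follows. This is a simplified illustration, not the authors' implementation; the neighborhood, noise level and deformation are made up for the example:

```python
import numpy as np

def local_strain(points, displacements):
    """Green-Lagrange strain from a first-order least-squares fit of the displacement
    field u(X) ~ u0 + G X over a local neighborhood of material points."""
    X = np.hstack([points, np.ones((len(points), 1))])   # [x y z 1] design matrix
    coeffs, *_ = np.linalg.lstsq(X, displacements, rcond=None)
    G = coeffs[:3].T                                      # displacement gradient du_i/dX_j
    F = np.eye(3) + G                                     # deformation gradient
    return 0.5 * (F.T @ F - np.eye(3))                    # Green-Lagrange strain tensor

# Toy neighborhood: 10% stretch along x with transverse shortening, plus noise.
rng = np.random.default_rng(1)
pts = rng.uniform(-1, 1, size=(50, 3))
true_G = np.diag([0.10, -0.05, -0.05])
disp = pts @ true_G.T + 0.001 * rng.normal(size=pts.shape)  # noisy DENSE-like displacements
print(np.round(local_strain(pts, disp), 3))
```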

  6. Image based 3D city modeling : Comparative study

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to the urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation. In the first approach, researchers use sketch-based modeling, the second method is procedural-grammar-based modeling, the third approach is close-range-photogrammetry-based modeling, and the fourth approach is mainly based on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages have different approaches and methods suitable for image-based 3D city modeling. A literature study shows that, to date, no complete comparative study of this type is available for creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches. The comparative study is mainly based on data acquisition methods, data processing techniques and the output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India). This 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences. This research work also gives a brief introduction to, and the strengths and weaknesses of, these four image-based techniques. Some personal comments are also given on what can and cannot be done with each software package. Finally, the study concludes that each software package has advantages and limitations. The choice of software depends on the user requirements of the 3D project. For a normal visualization project, the SketchUp software is a good option. For 3D documentation records, Photomodeler gives good results. For Large city

  7. A colour image reproduction framework for 3D colour printing

    Science.gov (United States)

    Xiao, Kaida; Sohiab, Ali; Sun, Pei-li; Yates, Julian M.; Li, Changjun; Wuerger, Sophie

    2016-10-01

    In this paper, the current technologies in full colour 3D printing are introduced. A framework for the colour image reproduction process for 3D colour printing is proposed. A special focus is put on colour management for 3D printed objects. Two approaches, colorimetric colour reproduction and spectral-based colour reproduction, are proposed in order to faithfully reproduce colours in 3D objects. Two key studies, colour reproduction for soft tissue prostheses and colour uniformity correction across different orientations, are described subsequently. The results clearly show that, by applying the proposed colour image reproduction framework, the performance of colour reproduction can be significantly enhanced. With post colour corrections, a further improvement in the colour process is achieved for 3D printed objects.

  8. 3D Image Modelling and Specific Treatments in Orthodontics Domain

    Directory of Open Access Journals (Sweden)

    Dionysis Goularas

    2007-01-01

    Full Text Available In this article, we present a 3D treatment system specific to dental plaster casts for orthodontics. From computer tomography scanner images, we first propose a 3D image modelling and reconstruction method for the mandible and maxilla based on an adaptive triangulation that allows the management of contours with complex topologies. Secondly, we present two specific treatment methods performed directly on the obtained 3D model: the automatic correction of the occlusion setting of the mandible and the maxilla, and teeth segmentation allowing more specific dental examinations. Finally, these specific treatments are presented via a client/server application with the aim of enabling telediagnosis and treatment.

  9. MULTI-SPECTRAL AND HYPERSPECTRAL IMAGE FUSION USING 3-D WAVELET TRANSFORM

    Institute of Scientific and Technical Information of China (English)

    Zhang Yifan; He Mingyi

    2007-01-01

    Image fusion is performed between one band of a multi-spectral image and two bands of a hyperspectral image to produce a fused image with the same spatial resolution as the source multi-spectral image and the same spectral resolution as the source hyperspectral image. According to the characteristics and 3-Dimensional (3-D) feature analysis of the multi-spectral and hyperspectral image data volumes, a new fusion approach using a 3-D wavelet based method is proposed. This approach is composed of four major procedures: spatial and spectral resampling, 3-D wavelet transform, wavelet coefficient integration and 3-D inverse wavelet transform. In particular, a novel method, the Ratio Image Based Spectral Resampling (RIBSR) method, is proposed to accomplish data resampling in the spectral domain by utilizing the property of the ratio image. And a new fusion rule, the Average and Substitution (A&S) rule, is employed to accomplish wavelet coefficient integration. Experimental results illustrate that the fusion approach using the 3-D wavelet transform can utilize both spatial and spectral characteristics of the source images more adequately and produce a fused image with higher quality and fewer artifacts than the fusion approach using a 2-D wavelet transform. It is also revealed that the RIBSR method is capable of interpolating the missing data more effectively and correctly, and that the A&S rule can integrate coefficients of source images in the 3-D wavelet domain to preserve both spatial and spectral features of the source images more properly.
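
    The central fusion step, transforming both resampled data volumes with a 3-D wavelet transform, integrating the coefficients, and inverting, can be sketched as below. The coefficient rule shown (average the approximation bands, keep the larger-magnitude detail coefficients) is a generic choice for illustration and is not claimed to reproduce the paper's A&S rule or the RIBSR resampling:

```python
import numpy as np
import pywt

def fuse_volumes(vol_a, vol_b, wavelet="db2", level=2):
    """Fuse two co-registered data volumes in the 3-D wavelet domain: average the
    approximation bands, keep the larger-magnitude detail coefficients."""
    ca = pywt.wavedecn(vol_a, wavelet, level=level)
    cb = pywt.wavedecn(vol_b, wavelet, level=level)
    fused = [0.5 * (ca[0] + cb[0])]                      # approximation band: average
    for da, db in zip(ca[1:], cb[1:]):                   # detail bands per level
        fused.append({k: np.where(np.abs(da[k]) >= np.abs(db[k]), da[k], db[k])
                      for k in da})
    return pywt.waverecn(fused, wavelet)

# Stand-ins for a spatially resampled multi-spectral band and a hyperspectral band.
ms_band = np.random.rand(32, 64, 64)
hs_band = np.random.rand(32, 64, 64)
print(fuse_volumes(ms_band, hs_band).shape)
```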

  10. Progresses in 3D integral imaging with optical processing

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Corral, Manuel; Martinez-Cuenca, Raul; Saavedra, Genaro; Navarro, Hector; Pons, Amparo [Department of Optics. University of Valencia. Calle Doctor Moliner 50, E46 100, Burjassot (Spain); Javidi, Bahram [Electrical and Computer Engineering Department, University of Connecticut, Storrs, CT 06269-1157 (United States)], E-mail: manuel.martinez@uv.es

    2008-11-01

    Integral imaging is a promising technique for the acquisition and auto-stereoscopic display of 3D scenes with full parallax and without the need of any additional devices like special glasses. First suggested by Lippmann at the beginning of the 20th century, integral imaging is based on the intersection of ray cones emitted by a collection of 2D elemental images which store the 3D information of the scene. This paper is devoted to the study, from the ray optics point of view, of the optical effects and the interaction with the observer of integral imaging systems.

  11. DCT and DST Based Image Compression for 3D Reconstruction

    Science.gov (United States)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-03-01

    This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) A one-dimensional Discrete Cosine Transform (DCT) is applied to each row of the image. (2) The output from the previous step is transformed again by a one-dimensional Discrete Sine Transform (DST), which is applied to each column of data generating new sets of high-frequency components followed by quantization of the higher frequencies. The output is then divided into two parts where the low-frequency components are compressed by arithmetic coding and the high frequency ones by an efficient minimization encoding algorithm. At decompression stage, a binary search algorithm is used to recover the original high frequency components. The technique is demonstrated by compressing 2D images up to 99% compression ratio. The decompressed images, which include images with structured light patterns for 3D reconstruction and from multiple viewpoints, are of high perceptual quality yielding accurate 3D reconstruction. Perceptual assessment and objective quality of compression are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 concerning 3D reconstruction, and with equivalent perceptual quality to JPEG2000.
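
    The two forward transforms of the method, a 1-D DCT along each image row followed by a 1-D DST along each column of the result, can be written very compactly; quantization, coding and the binary-search recovery of the high-frequency components are omitted in this sketch:

```python
import numpy as np
from scipy.fft import dct, dst, idct, idst

def forward_transform(image):
    """1-D DCT applied to each row, then 1-D DST applied to each column of the result."""
    rows = dct(image, type=2, axis=1, norm="ortho")
    return dst(rows, type=2, axis=0, norm="ortho")

def inverse_transform(coeffs):
    """Invert the column-wise DST, then the row-wise DCT."""
    rows = idst(coeffs, type=2, axis=0, norm="ortho")
    return idct(rows, type=2, axis=1, norm="ortho")

image = np.random.rand(128, 128)
coeffs = forward_transform(image)
print(np.allclose(inverse_transform(coeffs), image))   # True: transforms are lossless before quantization
```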

  12. Determining 3D flow fields via multi-camera light field imaging.

    Science.gov (United States)

    Truscott, Tadd T; Belden, Jesse; Nielson, Joseph R; Daily, David J; Thomson, Scott L

    2013-03-06

    In the field of fluid mechanics, the resolution of computational schemes has outpaced experimental methods and widened the gap between predicted and observed phenomena in fluid flows. Thus, a need exists for an accessible method capable of resolving three-dimensional (3D) data sets for a range of problems. We present a novel technique for performing quantitative 3D imaging of many types of flow fields. The 3D technique enables investigation of complicated velocity fields and bubbly flows. Measurements of these types present a variety of challenges to the instrument. For instance, optically dense bubbly multiphase flows cannot be readily imaged by traditional, non-invasive flow measurement techniques due to the bubbles occluding optical access to the interior regions of the volume of interest. By using Light Field Imaging we are able to reparameterize images captured by an array of cameras to reconstruct a 3D volumetric map for every time instance, despite partial occlusions in the volume. The technique makes use of an algorithm known as synthetic aperture (SA) refocusing, whereby a 3D focal stack is generated by combining images from several cameras post-capture (1). Light Field Imaging allows for the capture of angular as well as spatial information about the light rays, and hence enables 3D scene reconstruction. Quantitative information can then be extracted from the 3D reconstructions using a variety of processing algorithms. In particular, we have developed measurement methods based on Light Field Imaging for performing 3D particle image velocimetry (PIV), extracting bubbles in a 3D field and tracking the boundary of a flickering flame. We present the fundamentals of the Light Field Imaging methodology in the context of our setup for performing 3DPIV of the airflow passing over a set of synthetic vocal folds, and show representative results from application of the technique to a bubble-entraining plunging jet.
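
    The synthetic aperture refocusing step, generating a focal-stack slice by shifting each camera image according to the chosen depth and averaging, can be sketched as follows. This uses a simplified planar-shift parallax model with hypothetical baselines; a real implementation would apply full calibrated mappings per camera:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refocus(images, baselines, depth):
    """Synthetic-aperture refocusing: shift each camera image by its parallax at the
    chosen depth and average. Objects at that depth align and appear sharp, while
    occluders and objects at other depths are blurred out."""
    shifted = []
    for img, (bx, by) in zip(images, baselines):
        dy, dx = by / depth, bx / depth            # simple pinhole parallax model
        shifted.append(nd_shift(img, shift=(dy, dx), order=1, mode="nearest"))
    return np.mean(shifted, axis=0)

# Toy 3x3 camera array; baselines are hypothetical (pixel times depth units).
rng = np.random.default_rng(2)
cams = [rng.random((120, 160)) for _ in range(9)]
base = [(x * 40.0, y * 40.0) for y in (-1, 0, 1) for x in (-1, 0, 1)]
focal_stack = [refocus(cams, base, d) for d in (200.0, 400.0, 800.0)]
print(len(focal_stack), focal_stack[0].shape)
```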

  13. Field lens multiplexing in holographic 3D displays by using Bragg diffraction based volume gratings

    Science.gov (United States)

    Fütterer, G.

    2016-11-01

    Applications which can profit from holographic 3D displays are the visualization of 3D data, computer-integrated manufacturing, 3D teleconferencing and mobile infotainment. However, one problem of holographic 3D displays, which are e.g. based on space-bandwidth-limited reconstruction of wave segments, is to realize a small form factor. Another problem is to provide a reasonably large volume for user placement, which means providing an acceptable freedom of movement. Both problems should be solved without decreasing the image quality of virtual and real object points, which are generated within the 3D display volume. A diffractive optical design using thick hologram gratings, which can be referred to as Bragg diffraction based volume gratings, can provide a small form factor and a high-definition natural viewing experience of 3D objects. A large collimated wave can be provided by an anamorphic backlight unit. The complex-valued spatial light modulator adds local curvatures to the wave field it is illuminated with. The modulated wave field is focused onto the user plane by using a volume grating based field lens. Active-type liquid crystal gratings provide 1D fine tracking of approximately +/- 8 degrees. Diffractive multiplexing has to be implemented for each color and for a set of focus functions providing coarse tracking. Boundary conditions of the diffractive multiplexing are explained, with regard to the display layout and by using coupled wave theory (CWT). Aspects of diffractive cross talk and its suppression are discussed, including longitudinally apodized volume gratings.
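
    For a single unslanted transmission grating replayed at the Bragg angle, coupled wave theory gives the well-known Kogelnik expression for diffraction efficiency, which is useful for first estimates when budgeting index modulation across multiplexed gratings. The numbers below are purely illustrative and are not taken from the paper:

```python
import numpy as np

def kogelnik_efficiency(delta_n, thickness, wavelength, bragg_angle_rad):
    """Diffraction efficiency of a lossless, unslanted transmission volume grating at
    Bragg incidence (Kogelnik): eta = sin^2(pi * dn * d / (lambda * cos(theta)))."""
    nu = np.pi * delta_n * thickness / (wavelength * np.cos(bragg_angle_rad))
    return np.sin(nu) ** 2

# Illustrative values only: a 16 um grating replayed at 532 nm, 20 deg in-medium Bragg angle.
for dn in (0.005, 0.010, 0.017):
    eta = kogelnik_efficiency(dn, 16e-6, 532e-9, np.deg2rad(20.0))
    print(f"index modulation {dn}: efficiency {eta:.2f}")
```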

  14. 3D Medical Image Segmentation Based on Rough Set Theory

    Institute of Scientific and Technical Information of China (English)

    CHEN Shi-hao; TIAN Yun; WANG Yi; HAO Chong-yang

    2007-01-01

    This paper presents a method which uses multiple types of expert knowledge together in 3D medical image segmentation based on rough set theory. The focus of this paper is how to approximate a ROI (region of interest) when there are multiple types of expert knowledge. Based on rough set theory, the image can be split into three regions: positive regions, negative regions, and boundary regions. With multiple types of knowledge, we refine the ROI as an intersection of all of the shapes expected from each single type of knowledge. Finally, we show the results of implementing a rough 3D image segmentation and visualization system.
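
    One concrete reading of the three rough-set regions, when each type of expert knowledge yields its own candidate binary segmentation, is to take the voxels on which all candidates agree as the positive region, the voxels outside every candidate as the negative region, and the rest as the boundary region. The sketch below illustrates this interpretation; it is not necessarily the paper's exact construction:

```python
import numpy as np

def rough_regions(masks):
    """Positive, negative and boundary regions of a ROI given several candidate
    segmentations (one per type of expert knowledge): the lower approximation is the
    agreement of all candidates, the upper approximation is the union of them."""
    stack = np.stack([m.astype(bool) for m in masks])
    lower = stack.all(axis=0)                 # positive region: certainly ROI
    upper = stack.any(axis=0)                 # upper approximation
    negative = ~upper                         # certainly background
    boundary = upper & ~lower                 # uncertain voxels
    return lower, negative, boundary

# Three toy expert segmentations of a small 3D volume.
shape = (20, 40, 40)
m1 = np.zeros(shape, bool); m1[5:15, 10:30, 10:30] = True
m2 = np.zeros(shape, bool); m2[6:16, 12:32, 10:30] = True
m3 = np.zeros(shape, bool); m3[5:15, 10:30, 12:32] = True
pos, neg, bnd = rough_regions([m1, m2, m3])
print(pos.sum(), neg.sum(), bnd.sum())
```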

  15. 3D Images of Materials Structures Processing and Analysis

    CERN Document Server

    Ohser, Joachim

    2009-01-01

    Taking and analyzing images of materials' microstructures is essential for quality control and for the choice and design of all kinds of products. Today, the standard method is still to analyze 2D microscopy images. However, insight into the 3D geometry of the microstructure of materials and measurement of its characteristics are increasingly prerequisites for choosing and designing advanced materials according to desired product properties. This first book on processing and analysis of 3D images of materials structures describes how to develop and apply efficient and versatile tools for geometric analysis

  16. A Texture Analysis of 3D Radar Images

    NARCIS (Netherlands)

    Deiana, D.; Yarovoy, A.

    2009-01-01

    In this paper a texture feature coding method to be applied to high-resolution 3D radar images in order to improve target detection is developed. An automatic method for image segmentation based on texture features is proposed. The method has been able to automatically detect weak targets which fail

  17. A high-level 3D visualization API for Java and ImageJ

    Directory of Open Access Journals (Sweden)

    Longair Mark

    2010-05-01

    Full Text Available Abstract Background Current imaging methods such as Magnetic Resonance Imaging (MRI, Confocal microscopy, Electron Microscopy (EM or Selective Plane Illumination Microscopy (SPIM yield three-dimensional (3D data sets in need of appropriate computational methods for their analysis. The reconstruction, segmentation and registration are best approached from the 3D representation of the data set. Results Here we present a platform-independent framework based on Java and Java 3D for accelerated rendering of biological images. Our framework is seamlessly integrated into ImageJ, a free image processing package with a vast collection of community-developed biological image analysis tools. Our framework enriches the ImageJ software libraries with methods that greatly reduce the complexity of developing image analysis tools in an interactive 3D visualization environment. In particular, we provide high-level access to volume rendering, volume editing, surface extraction, and image annotation. The ability to rely on a library that removes the low-level details enables concentrating software development efforts on the algorithm implementation parts. Conclusions Our framework enables biomedical image software development to be built with 3D visualization capabilities with very little effort. We offer the source code and convenient binary packages along with extensive documentation at http://3dviewer.neurofly.de.

  18. DATA PROCESSING TECHNOLOGY OF AIRBORNE 3D IMAGE

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Airborne 3D image, which integrates GPS, an attitude measurement unit (AMU), a scanning laser rangefinder (SLR) and a spectral scanner, has been developed successfully. The spectral scanner and the SLR use the same optical system, which ensures that laser points match pixels seamlessly. The distinctive advantage of 3D image is that it can produce geo-referenced images and DSM (digital surface model) images without any ground control points (GCPs). It is no longer necessary to survey GCPs, and with suitable software the data can be processed to produce digital surface models (DSM) and geo-referenced images in quasi-real-time; therefore, the efficiency of 3D image is 10~100 times higher than that of traditional approaches. The processing procedure involves decomposing and checking the raw data, processing GPS data, calculating the positions of laser sample points, producing geo-referenced images, producing the DSM and mosaicing strips. The principle of 3D image is first introduced in this paper, and then we focus on the fast processing technique and algorithm. The flight tests and processed results show that the processing technique is feasible and can meet the requirement of quasi-real-time applications.

  19. Interactive visualization of multiresolution image stacks in 3D.

    Science.gov (United States)

    Trotts, Issac; Mikula, Shawn; Jones, Edward G

    2007-04-15

    Conventional microscopy, electron microscopy, and imaging techniques such as MRI and PET commonly generate large stacks of images of the sectioned brain. In other domains, such as neurophysiology, variables such as space or time are also varied along a stack axis. Digital image sizes have been progressively increasing and in virtual microscopy, it is now common to work with individual image sizes that are several hundred megapixels and several gigabytes in size. The interactive visualization of these high-resolution, multiresolution images in 2D has been addressed previously [Sullivan, G., and Baker, R., 1994. Efficient quad-tree coding of images and video. IEEE Trans. Image Process. 3 (3), 327-331]. Here, we describe a method for interactive visualization of multiresolution image stacks in 3D. The method, characterized as quad-tree based multiresolution image stack interactive visualization using a texel projection based criterion, relies on accessing and projecting image tiles from multiresolution image stacks in such a way that, from the observer's perspective, image tiles all appear approximately the same size even though they are accessed from different tiers within the images comprising the stack. This method enables efficient navigation of high-resolution image stacks. We implement this method in a program called StackVis, which is a Windows-based, interactive 3D multiresolution image stack visualization system written in C++ and using OpenGL. It is freely available at http://brainmaps.org.
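
    The texel-projection criterion can be illustrated with a small level-of-detail selector that picks the quad-tree tier whose texels project closest to one screen pixel. This is an illustrative sketch, not the StackVis source; the pinhole projection model and all numeric values are assumptions:

```python
import math

def select_tier(full_res_texel_size, distance, focal_length_px, num_tiers):
    """Pick the quad-tree tier whose texels project closest to one screen pixel.
    Each coarser tier doubles the texel size, so it is appropriate when the
    full-resolution texels project to half as many pixels; tier 0 is full resolution."""
    projected = focal_length_px * full_res_texel_size / distance   # px per full-res texel
    if projected <= 0:
        return num_tiers - 1
    tier = round(math.log2(1.0 / max(projected, 1e-9)))            # doublings needed to reach ~1 px
    return min(max(tier, 0), num_tiers - 1)

# A section imaged at 0.5 um/pixel (0.0005 mm), viewed from varying virtual distances (mm);
# all numbers are illustrative.
for d in (1.0, 4.0, 16.0, 64.0):
    print(d, "->", select_tier(full_res_texel_size=0.0005, distance=d,
                               focal_length_px=2000.0, num_tiers=8))
```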

  20. Simultaneous whole-animal 3D-imaging of neuronal activity using light field microscopy

    CERN Document Server

    Prevedel, R; Hoffmann, M; Pak, N; Wetzstein, G; Kato, S; Schrödel, T; Raskar, R; Zimmer, M; Boyden, E S; Vaziri, A

    2014-01-01

    3D functional imaging of neuronal activity in entire organisms at single cell level and physiologically relevant time scales faces major obstacles due to trade-offs between the size of the imaged volumes, and spatial and temporal resolution. Here, using light-field microscopy in combination with 3D deconvolution, we demonstrate intrinsically simultaneous volumetric functional imaging of neuronal population activity at single neuron resolution for an entire organism, the nematode Caenorhabditis elegans. The simplicity of our technique and possibility of the integration into epi-fluoresence microscopes makes it an attractive tool for high-speed volumetric calcium imaging.

  1. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    Directory of Open Access Journals (Sweden)

    M. Rafiei

    2013-09-01

    Full Text Available Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close range photogrammetry is widely utilized in many fields such as structure measurement, topographic surveying, architectural and archeological surveying, etc. Non-contact photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both 3D geometry (structure) and camera pose (motion); it is commonly known as structure from motion (SfM). In this research a step by step approach to generate the 3D point cloud of a scene is considered. After taking images with a camera, we detect corresponding points in each pair of views. Here an efficient SIFT method is used for image matching over large baselines. After that, we retrieve the camera motion and the 3D positions of the matched feature points up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene causes parallel lines to appear non-parallel in the reconstruction. The results of the SfM computation are much more useful if a metric reconstruction is obtained, therefore multiple-view Euclidean reconstruction is applied and discussed. To refine the 3D points and achieve precise results we use a more general and useful approach, namely bundle adjustment. At the end, two real cases, an excavation and a tower, have been reconstructed.
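
    The early steps of such a pipeline, detecting and matching SIFT features between two views and recovering the relative camera motion, can be sketched with OpenCV as below. The intrinsic matrix and image file names are placeholders, and the projective-to-metric upgrade and bundle adjustment stages are not shown:

```python
import cv2
import numpy as np

def two_view_motion(img1, img2, K):
    """SIFT matching between two views, followed by essential-matrix estimation
    with RANSAC and recovery of the relative rotation and translation."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw = matcher.knnMatch(d1, d2, k=2)
    good = [m for m, n in raw if m.distance < 0.75 * n.distance]   # Lowe's ratio test

    p1 = np.float32([k1[m.queryIdx].pt for m in good])
    p2 = np.float32([k2[m.trainIdx].pt for m in good])

    E, inliers = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K, mask=inliers)
    return R, t, p1, p2

# Placeholder intrinsics and image paths (replace with calibrated values and real photos).
K = np.array([[1200.0, 0, 640.0], [0, 1200.0, 360.0], [0, 0, 1.0]])
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
if img1 is not None and img2 is not None:
    R, t, *_ = two_view_motion(img1, img2, K)
    print(R, t.ravel())
```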

  2. Multi-layer 3D imaging using a few viewpoint images and depth map

    Science.gov (United States)

    Suginohara, Hidetsugu; Sakamoto, Hirotaka; Yamanaka, Satoshi; Suyama, Shiro; Yamamoto, Hirotsugu

    2015-03-01

    In this paper, we propose a new method that makes multi-layer images from a few viewpoint images to display a 3D image on an autostereoscopic display that has multiple display screens in the depth direction. We iterate simple "Shift and Subtraction" processes to make each layer image alternately. An image made in accordance with the depth map, like a volume sliced into gradations, is used as the initial solution of the iteration process. Through experiments using a prototype with two stacked LCDs, we confirmed that three viewpoint images are enough to make multi-layer images for displaying a 3D image. Limiting the number of viewpoint images narrows the viewing area that allows stereoscopic viewing. To broaden the viewing area, we track the head motion of the viewer and update the screen images in real time so that the viewer can maintain a correct stereoscopic view within a +/- 20 degree area. In addition, we render pseudo multi-viewpoint images using the depth map, so that motion parallax is generated at the same time.

  3. 3D- VISUALIZATION BY RAYTRACING IMAGE SYNTHESIS ON GPU

    Directory of Open Access Journals (Sweden)

    Al-Oraiqat Anas M.

    2016-06-01

    Full Text Available This paper presents a realization of an approach to spatial 3D stereo visualization of 3D images using a parallel graphics processing unit (GPU). Experiments on the synthesis of images of a 3D scene by ray tracing on a GPU with the Compute Unified Device Architecture (CUDA) have shown that approximately 60% of the time is spent on solving the computational problem itself, while the remaining 40% is spent on transferring data between the central processing unit and the GPU and on organizing the visualization process. The study of the influence of increasing the GPU grid size on the speed of the calculations showed the importance of correctly defining the structure of the parallel computation grid and the general parallelization mechanism.

  4. Autonomous Planetary 3-D Reconstruction From Satellite Images

    DEFF Research Database (Denmark)

    Denver, Troelz

    1999-01-01

    is discussed. Based on such features, 3-D representations may be compiled from two or more 2-D satellite images. The main purposes of such a mapping system are extraction of landing sites, objects of scientific interest and general planetary surveying. All data processing is performed autonomously onboard...

  5. Integration of real-time 3D image acquisition and multiview 3D display

    Science.gov (United States)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

    2014-03-01

    Seamless integration of 3D acquisition and 3D display systems offers enhanced experience in 3D visualization of the real world objects or scenes. The vivid representation of captured 3D objects displayed on a glasses-free 3D display screen could bring the realistic viewing experience to viewers as if they are viewing real-world scene. Although the technologies in 3D acquisition and 3D display have advanced rapidly in recent years, effort is lacking in studying the seamless integration of these two different aspects of 3D technologies. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. This paper focuses on both the architecture design and the implementation of the hardware and the software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of our proposed system.

  6. Automated 3D ultrasound image segmentation to aid breast cancer image interpretation.

    Science.gov (United States)

    Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A; Yuan, Jie; Wang, Xueding; Carson, Paul L

    2016-02-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.

  7. Extracting 3D Layout From a Single Image Using Global Image Structures

    NARCIS (Netherlands)

    Lou, Z.; Gevers, T.; Hu, N.

    2015-01-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization, image, and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very b

  8. Projective 3D-reconstruction of Uncalibrated Endoscopic Images

    Directory of Open Access Journals (Sweden)

    P. Faltin

    2010-01-01

    Full Text Available The most common medical diagnostic method for urinary bladder cancer is cystoscopy. This inspection of the bladder is performed by a rigid endoscope, which is usually guided close to the bladder wall. This causes a very limited field of view; difficulty of navigation is aggravated by the usage of angled endoscopes. These factors cause difficulties in orientation and visual control. To overcome this problem, the paper presents a method for extracting 3D information from uncalibrated endoscopic image sequences and for reconstructing the scene content. The method uses the SURF-algorithm to extract features from the images and relates the images by advanced matching. To stabilize the matching, the epipolar geometry is extracted for each image pair using a modified RANSAC-algorithm. Afterwards these matched point pairs are used to generate point triplets over three images and to describe the trifocal geometry. The 3D scene points are determined by applying triangulation to the matched image points. Thus, these points are used to generate a projective 3D reconstruction of the scene, and provide the first step for further metric reconstructions.
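
    The pipeline described above (feature matching, robust estimation of the epipolar geometry, triangulation) can be sketched with OpenCV as below. This is only a hedged approximation of the paper's method: ORB stands in for SURF (which requires the non-free contrib build), a canonical camera pair derived from the fundamental matrix replaces the trifocal step, and the file names are placeholders.

```python
import cv2
import numpy as np

# Load two consecutive frames of an endoscopic sequence (paths are placeholders).
img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Detect and match features (ORB used here as a freely available stand-in for SURF).
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Robustly estimate the epipolar geometry with RANSAC and keep only inliers.
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
pts1, pts2 = pts1[inlier_mask.ravel() == 1], pts2[inlier_mask.ravel() == 1]

# Canonical projective cameras P1 = [I|0], P2 = [[e']_x F | e'], e' = left epipole of F.
_, _, Vt = np.linalg.svd(F.T)
e2 = Vt[-1]
e2_cross = np.array([[0, -e2[2], e2[1]],
                     [e2[2], 0, -e2[0]],
                     [-e2[1], e2[0], 0]])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([e2_cross @ F, e2.reshape(3, 1)])

# Triangulate matched points to obtain a projective 3D reconstruction.
X_h = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
X = (X_h[:3] / X_h[3]).T            # homogeneous -> projective 3D coordinates
print(X.shape, "projective scene points")
```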

  9. 3D Medical Volume Segmentation Using Hybrid Multiresolution Statistical Approaches

    Directory of Open Access Journals (Sweden)

    Shadi AlZu'bi

    2010-01-01

    that 3D methodologies can accurately detect the Region Of Interest (ROI). Automatic segmentation has been achieved using HMMs where the ROI is detected accurately but suffers from a long computation time.

  10. Deformable Surface 3D Reconstruction from Monocular Images

    CERN Document Server

    Salzmann, Matthieu

    2010-01-01

    Being able to recover the shape of 3D deformable surfaces from a single video stream would make it possible to field reconstruction systems that run on widely available hardware without requiring specialized devices. However, because many different 3D shapes can have virtually the same projection, such monocular shape recovery is inherently ambiguous. In this survey, we will review the two main classes of techniques that have proved most effective so far: The template-based methods that rely on establishing correspondences with a reference image in which the shape is already known, and non-rig

  11. Quantitative Morphological and Biochemical Studies on Human Downy Hairs using 3-D Quantitative Phase Imaging

    CERN Document Server

    Lee, SangYun; Lee, Yuhyun; Park, Sungjin; Shin, Heejae; Yang, Jongwon; Ko, Kwanhong; Park, HyunJoo; Park, YongKeun

    2015-01-01

    This study presents the morphological and biochemical findings on human downy arm hairs using 3-D quantitative phase imaging techniques. 3-D refractive index tomograms and high-resolution 2-D synthetic aperture images of individual downy arm hairs were measured using a Mach-Zehnder laser interferometric microscopy equipped with a two-axis galvanometer mirror. From the measured quantitative images, the biochemical and morphological parameters of downy hairs were non-invasively quantified including the mean refractive index, volume, cylinder, and effective radius of individual hairs. In addition, the effects of hydrogen peroxide on individual downy hairs were investigated.

  12. Investigation on the 3 D geometric accuracy and on the image quality (MTF, SNR and NPS) of volume tomography units (CT, CBCT and DVT); Untersuchung zur geometrischen 3-D-Genauigkeit und zur Bildqualitaet (MTF, SRV und W) von Volumentomografie-Einrichtungen (CT, CBCT und DVT)

    Energy Technology Data Exchange (ETDEWEB)

    Blendl, C.; Selbach, M.; Uphoff, C. [Fachhochschule Koeln (Germany). Inst. fuer Medien- und Phototechnik; Fiebich, M.; Voigt, J.M. [Fachhochschule Giessen (DE). Inst. fuer Medizinische Physik und Strahlenschutz (IMPS)

    2012-01-15

    Purpose: The study investigates how far image quality (MTF and NPS) differs between CT, CBCT and DVT units, and how far the geometric 3D accuracy and the HU calibration differ with respect to surgical or radiotherapeutic planning. Materials and Methods: X-ray image stacks were acquired using a newly designed test device containing structures for measuring MTF, NPS, the 3D accuracy and the Hounsfield calibration (jaw or skull program). The stacks of transversal images were analyzed with a dedicated computer program. Results: The MTF values correlate with the physical resolution (CT and DVT) and are influenced by the reconstruction kernel used (CT). The NPS values are limited to an intra-system comparison due to the insufficient HU accuracy. The 3D accuracy is comparable between the system types. Conclusions: The image-quality values (NPS) are not yet correlated with dose values; investigations into an appropriate dosimetry are ongoing to establish the ratio between dose and image quality (ALARA principle). With respect to radiotherapeutic planning, no fundamental difference between the systems can be stated apart from the insufficient HU calibration accuracy of CBCT and DVT units. The geometric 3D accuracy of high-performance DVT systems is greater than that of CT systems. (orig.)

  13. A novel modeling method for manufacturing hearing aid using 3D medical images

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyeong Gyun [Dept of Radiological Science, Far East University, Eumseong (Korea, Republic of)

    2016-06-15

    This study aimed to suggest a novel method of modeling a hearing aid ear shell based on Digital Imaging and Communications in Medicine (DICOM) data in the hearing aid ear shell manufacturing method using a 3D printer. In the experiment, a 3D external auditory meatus was extracted by using the critical values in the DICOM volume images, and the modeling surface structures were compared in standard STL (STereoLithography) files which can be recognized by a 3D printer. In this 3D modeling method, a conventional ear model was prepared, and the gaps between adjacent isograms produced by a 3D scanner were filled with 3D surface fragments to express the modeling structure. In this study, the same type of triangular surface structures were prepared by using the DICOM images. The result showed that the modeling surface structure based on the DICOM images provides the same environment that conventional 3D printers can recognize, ultimately enabling the hearing aid ear shell shape to be printed.

  14. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In the process of display, manipulation and analysis of biomedical image data, they usually need to be converted to data of isotropic discretization through the process of interpolation, while the cubic convolution interpolation is widely used due to its good tradeoff between computational cost and accuracy. In this paper, we present a whole concept for the 3D medical image interpolation based on cubic convolution, and the six methods, with the different sharp control parameter, which are formulated in details. Furthermore, we also give an objective comparison for these methods using data sets with the different slice spacing. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. According to the experimental results, we present a recommendation for 3D medical images under the different situations in the end.
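
    A minimal sketch of parametric cubic convolution along the slice direction is given below, assuming the Keys kernel with parameter a as the sharpness control parameter; the leave-one-out comparison mirrors the mean-squared and largest-difference measures mentioned above, but the data and parameter values are illustrative only.

```python
import numpy as np

def cubic_kernel(s, a=-0.5):
    """Parametric cubic convolution kernel (Keys form); `a` is the sharpness parameter."""
    s = np.abs(s)
    out = np.zeros_like(s, dtype=float)
    m1 = s <= 1
    m2 = (s > 1) & (s < 2)
    out[m1] = (a + 2) * s[m1] ** 3 - (a + 3) * s[m1] ** 2 + 1
    out[m2] = a * (s[m2] ** 3 - 5 * s[m2] ** 2 + 8 * s[m2] - 4)
    return out

def interpolate_slice(volume, z, a=-0.5):
    """Estimate the slice at fractional depth z from the four nearest slices."""
    k = int(np.floor(z))
    idx = np.clip(np.arange(k - 1, k + 3), 0, volume.shape[0] - 1)
    w = cubic_kernel(z - np.arange(k - 1, k + 3), a)
    return np.tensordot(w, volume[idx], axes=1)

# Leave-one-out test: drop slice 10, re-estimate it, and compare with the original.
vol = np.random.rand(20, 64, 64)                    # stand-in for an anisotropic CT/MR stack
reduced = np.delete(vol, 10, axis=0)
estimate = interpolate_slice(reduced, 9.5, a=-0.5)  # slice 10 lies halfway in the reduced stack
diff = estimate - vol[10]
print("mean-squared difference:", np.mean(diff ** 2), "largest difference:", np.abs(diff).max())
```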

  15. Optical-CT imaging of complex 3D dose distributions

    Science.gov (United States)

    Oldham, Mark; Kim, Leonard; Hugo, Geoffrey

    2005-04-01

    The limitations of conventional dosimeters restrict the comprehensiveness of verification that can be performed for advanced radiation treatments presenting an immediate and substantial problem for clinics attempting to implement these techniques. In essence, the rapid advances in the technology of radiation delivery have not been paralleled by corresponding advances in the ability to verify these treatments. Optical-CT gel-dosimetry is a relatively new technique with potential to address this imbalance by providing high resolution 3D dose maps in polymer and radiochromic gel dosimeters. We have constructed a 1st generation optical-CT scanner capable of high resolution 3D dosimetry and applied it to a number of simple and increasingly complex dose distributions including intensity-modulated-radiation-therapy (IMRT). Prior to application to IMRT, the robustness of optical-CT gel dosimetry was investigated on geometry and variable attenuation phantoms. Physical techniques and image processing methods were developed to minimize deleterious effects of refraction, reflection, and scattered laser light. Here we present results of investigations into achieving accurate high-resolution 3D dosimetry with optical-CT, and show clinical examples of 3D IMRT dosimetry verification. In conclusion, optical-CT gel dosimetry can provide high resolution 3D dose maps that greatly facilitate comprehensive verification of complex 3D radiation treatments. Good agreement was observed at high dose levels (>50%) between planned and measured dose distributions. Some systematic discrepancies were observed however (rms discrepancy 3% at high dose levels) indicating further work is required to eliminate confounding factors presently compromising the accuracy of optical-CT 3D gel-dosimetry.

  16. Fresnel Volume Migration of the ISO89-3D data set

    Science.gov (United States)

    Hloušek, F.; Buske, S.

    2016-11-01

    This paper demonstrates the capabilities of Fresnel Volume Migration (FVM) for 3-D single-component seismic data in a crystalline environment. We show its application to the ISO89-3D data set, which was acquired in 1989 at the German continental deep drilling site (KTB) near Windischeschenbach (Southeast Germany). A key point in FVM is the derivation of the emergent angle for the recorded wavefield. This angle is used as the initial condition of the ray-tracing-algorithm within FVM. In order to limit the migration operator to the physically relevant part of a reflector, it is restricted to the Fresnel-volume around the backpropagated ray. We discuss different possibilities for an adequate choice of the used aperture for a local slant-stack algorithm using the semblance as a measure of the coherency for different emergent angles. Furthermore, we reduce the number of used receivers for this procedure using the Voronoi diagram, thereby leading to a more equal distribution of the receivers within the selected aperture. We demonstrate the performance of these methods for a simple 3-D synthetic example and show the results for the ISO89-3D data set. For the latter, our approach yields images of significantly better quality compared to previous investigations and allows for a detailed characterization of the subsurface. Even in migrated single shot gathers, structures are clearly visible due to the focusing achieved by FVM.

  17. Vhrs Stereo Images for 3d Modelling of Buildings

    Science.gov (United States)

    Bujakiewicz, A.; Holc, M.

    2012-07-01

    The paper presents the project which was carried out in the Photogrammetric Laboratory of Warsaw University of Technology. The experiment is concerned with the extraction of 3D vector data for buildings creation from 3D photogrammetric model based on the Ikonos stereo images. The model was reconstructed with photogrammetric workstation - Summit Evolution combined with ArcGIS 3D platform. Accuracy of 3D model was significantly improved by use for orientation of pair of satellite images the stereo measured tie points distributed uniformly around the model area in addition to 5 control points. The RMS for model reconstructed on base of the RPC coefficients only were 16,6 m, 2,7 m and 47,4 m, for X, Y and Z coordinates, respectively. By addition of 5 control points the RMS were improved to 0,7 m, 0,7 m 1,0 m, where the best results were achieved when RMS were estimated from deviations in 17 check points (with 5 control points)and amounted to 0,4 m, 0,5 m and 0,6 m, for X, Y, and Z respectively. The extracted 3D vector data for buildings were integrated with 2D data of the ground footprints and afterwards they were used for 3D modelling of buildings in Google SketchUp software. The final results were compared with the reference data obtained from other sources. It was found that the shape of buildings (in concern to the number of details) had been reconstructed on level of LoD1, when the accuracy of these models corresponded to the level of LoD2.

  18. VHRS STEREO IMAGES FOR 3D MODELLING OF BUILDINGS

    Directory of Open Access Journals (Sweden)

    A. Bujakiewicz

    2012-07-01

    Full Text Available The paper presents the project which was carried out in the Photogrammetric Laboratory of Warsaw University of Technology. The experiment is concerned with the extraction of 3D vector data for buildings creation from 3D photogrammetric model based on the Ikonos stereo images. The model was reconstructed with photogrammetric workstation – Summit Evolution combined with ArcGIS 3D platform. Accuracy of 3D model was significantly improved by use for orientation of pair of satellite images the stereo measured tie points distributed uniformly around the model area in addition to 5 control points. The RMS for model reconstructed on base of the RPC coefficients only were 16,6 m, 2,7 m and 47,4 m, for X, Y and Z coordinates, respectively. By addition of 5 control points the RMS were improved to 0,7 m, 0,7 m 1,0 m, where the best results were achieved when RMS were estimated from deviations in 17 check points (with 5 control pointsand amounted to 0,4 m, 0,5 m and 0,6 m, for X, Y, and Z respectively. The extracted 3D vector data for buildings were integrated with 2D data of the ground footprints and afterwards they were used for 3D modelling of buildings in Google SketchUp software. The final results were compared with the reference data obtained from other sources. It was found that the shape of buildings (in concern to the number of details had been reconstructed on level of LoD1, when the accuracy of these models corresponded to the level of LoD2.

  19. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    Science.gov (United States)

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions we factorize 2D observations in camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a-priorly trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods our algorithm shows a significant improvement.

  20. 3D Reconstruction from X-ray Fluoroscopy for Clinical Veterinary Medicine using Differential Volume Rendering

    Science.gov (United States)

    Khongsomboon, Khamphong; Hamamoto, Kazuhiko; Kondo, Shozo

    3D reconstruction from ordinary X-ray equipment which is not CT or MRI is required in clinical veterinary medicine. Authors have already proposed a 3D reconstruction technique from X-ray photograph to present bone structure. Although the reconstruction is useful for veterinary medicine, the thechnique has two problems. One is about exposure of X-ray and the other is about data acquisition process. An x-ray equipment which is not special one but can solve the problems is X-ray fluoroscopy. Therefore, in this paper, we propose a method for 3D-reconstruction from X-ray fluoroscopy for clinical veterinary medicine. Fluoroscopy is usually used to observe a movement of organ or to identify a position of organ for surgery by weak X-ray intensity. Since fluoroscopy can output a observed result as movie, the previous two problems which are caused by use of X-ray photograph can be solved. However, a new problem arises due to weak X-ray intensity. Although fluoroscopy can present information of not only bone structure but soft tissues, the contrast is very low and it is very difficult to recognize some soft tissues. It is very useful to be able to observe not only bone structure but soft tissues clearly by ordinary X-ray equipment in the field of clinical veterinary medicine. To solve this problem, this paper proposes a new method to determine opacity in volume rendering process. The opacity is determined according to 3D differential coefficient of 3D reconstruction. This differential volume rendering can present a 3D structure image of multiple organs volumetrically and clearly for clinical veterinary medicine. This paper shows results of simulation and experimental investigation of small dog and evaluation by veterinarians.
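
    The opacity rule described above can be approximated as follows: a hedged NumPy sketch that derives per-voxel opacity from the 3D differential coefficient (gradient magnitude) of the reconstruction. The toy volume, the linear mapping and the alpha_max parameter are assumptions made for illustration; the actual transfer function used in the paper may differ.

```python
import numpy as np

def differential_opacity(volume, alpha_max=0.8):
    """Map the 3D differential coefficient (gradient magnitude) to per-voxel opacity."""
    gz, gy, gx = np.gradient(volume.astype(float))
    grad_mag = np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)
    grad_mag /= grad_mag.max() + 1e-12   # normalise to [0, 1]
    return alpha_max * grad_mag          # strong boundaries -> opaque, flat regions -> transparent

# Toy reconstruction: a bright "bone-like" sphere inside a dim "soft-tissue" background.
z, y, x = np.mgrid[:64, :64, :64]
vol = 0.2 * np.ones((64, 64, 64))
vol[(z - 32) ** 2 + (y - 32) ** 2 + (x - 32) ** 2 < 12 ** 2] = 1.0
opacity = differential_opacity(vol)
print("opacity range:", opacity.min(), opacity.max())
```

    The resulting opacity volume could then be fed into an ordinary front-to-back compositing loop, which is the part the sketch deliberately leaves out.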

  1. In vivo validation of a 3D ultrasound system for imaging the lateral ventricles of neonates

    Science.gov (United States)

    Kishimoto, J.; Fenster, A.; Chen, N.; Lee, D.; de Ribaupierre, S.

    2014-03-01

    Dilated lateral ventricles in neonates can be due to many different causes, such as brain loss, or congenital malformation; however, the main cause is hydrocephalus, which is the accumulation of fluid within the ventricular system. Hydrocephalus can raise intracranial pressure resulting in secondary brain damage, and up to 25% of patients with severely enlarged ventricles have epilepsy in later life. Ventricle enlargement is clinically monitored using 2D US through the fontanels. The sensitivity of 2D US to dilation is poor because it cannot provide accurate measurements of irregular volumes such as the ventricles, so most clinical evaluations are of a qualitative nature. We developed a 3D US system to image the cerebral ventricles of neonates within the confines of incubators that can be easily translated to more open environments. Ventricle volumes can be segmented from these images giving a quantitative volumetric measurement of ventricle enlargement without moving the patient into an imaging facility. In this paper, we report on in vivo validation studies: 1) comparing 3D US ventricle volumes before and after clinically necessary interventions removing CSF, and 2) comparing 3D US ventricle volumes to those from MRI. Post-intervention ventricle volumes were less than pre-intervention measurements for all patients and all interventions. We found high correlations (R = 0.97) between the difference in ventricle volume and the reported removed CSF with the slope not significantly different than 1 (p < 0.05). Comparisons between ventricle volumes from 3D US and MR images taken 4 (±3.8) days of each other did not show significant difference (p=0.44) between 3D US and MRI through paired t-test.
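
    The statistical comparison reported above (correlation of volume differences with removed CSF, paired t-test against MRI volumes) can be reproduced in outline with SciPy; the numbers below are hypothetical paired measurements for illustration only, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired ventricle-volume measurements (cm^3) from 3D US and MRI for the same neonates.
vol_3dus = np.array([12.1, 25.4, 8.7, 40.2, 18.9, 31.5])
vol_mri  = np.array([12.8, 24.6, 9.1, 41.0, 18.2, 32.3])

r, _ = stats.pearsonr(vol_3dus, vol_mri)            # agreement between modalities
t, p = stats.ttest_rel(vol_3dus, vol_mri)           # paired t-test for a systematic difference
slope, intercept = np.polyfit(vol_3dus, vol_mri, 1) # check whether the slope is close to 1
print(f"r = {r:.2f}, paired t-test p = {p:.2f}, regression slope = {slope:.2f}")
```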

  2. Interactive 2D to 3D stereoscopic image synthesis

    Science.gov (United States)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

    Advances in stereoscopic display technologies, graphic card devices, and digital imaging algorithms have opened up new possibilities in synthesizing stereoscopic images. The power of today's DirectX/OpenGL optimized graphics cards together with adapting new and creative imaging tools found in software products such as Adobe Photoshop, provide a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique, which uses advanced imaging features and custom Windows-based software that utilizes the Direct X 9 API to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with moveable and flexible depth map altered textured surfaces, perspective stereoscopic cameras with both visible frustums and zero parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows for a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.

  3. Large distance 3D imaging of hidden objects

    Science.gov (United States)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room temperature detector makes it difficult to give a suitable real time implement for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system of a single detector. The system presented here proposes to employ a chirp radar method with Glow Discharge Detector (GDD) Focal Plane Array (FPA of plasma based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image. The value of the I-F frequency yields the range information at each pixel. This will enable 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.

  4. Evaluation of Kinect 3D Sensor for Healthcare Imaging.

    Science.gov (United States)

    Pöhlmann, Stefanie T L; Harkness, Elaine F; Taylor, Christopher J; Astley, Susan M

    2016-01-01

    Microsoft Kinect is a three-dimensional (3D) sensor originally designed for gaming that has received growing interest as a cost-effective and safe device for healthcare imaging. Recent applications of Kinect in health monitoring, screening, rehabilitation, assistance systems, and intervention support are reviewed here. The suitability of available technologies for healthcare imaging applications is assessed. The performance of Kinect I, based on structured light technology, is compared with that of the more recent Kinect II, which uses time-of-flight measurement, under conditions relevant to healthcare applications. The accuracy, precision, and resolution of 3D images generated with Kinect I and Kinect II are evaluated using flat cardboard models representing different skin colors (pale, medium, and dark) at distances ranging from 0.5 to 1.2 m and measurement angles of up to 75°. Both sensors demonstrated high accuracy for the majority of measurements. Kinect I is capable of imaging at shorter measurement distances, but Kinect II enables structures angled at over 60° to be evaluated. Kinect II showed significantly higher precision and Kinect I significantly higher resolution. Although Kinect is not a medical imaging device, both sensor generations show performance adequate for a range of healthcare imaging applications. Kinect I is more appropriate for short-range imaging and Kinect II is more appropriate for imaging highly curved surfaces such as the face or breast.

  5. 3D Image Reconstruction from Compton camera data

    CERN Document Server

    Kuchment, Peter

    2016-01-01

    In this paper, we address analytically and numerically the inversion of the integral transform (the cone or Compton transform) that maps a function on R^3 to its integrals over conical surfaces. It arises in a variety of imaging techniques, e.g. in astronomy, optical imaging, and homeland security imaging, especially when so-called Compton cameras are involved. Several inversion formulas are developed and implemented numerically in 3D (the much simpler 2D case was considered in a previous publication).

  6. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    Science.gov (United States)

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

    The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to an unphysiological kinematic of the knee implant. To get an idea about the postoperative kinematic of the implant, it is essential to determine the position and orientation of the tibial and femoral prosthesis. Therefore we developed a registration method for fitting 3D CAD-models of knee joint prostheses into an 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. Firstly the surface data of the prostheses models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Secondly an initial preconfiguration of the implants by the user is still necessary for the following step: The user has to perform a rough preconfiguration of both remaining prostheses models, so that the fine matching process gets a reasonable starting point. After that an automated gradient-based fine matching process determines the best absolute position and orientation: This iterative process changes all 6 parameters (3 rotational- and 3 translational parameters) of a model by a minimal amount until a maximum value of the matching function is reached. To examine the spread of the final solutions of the registration, the interobserver variability was measured in a group of testers. This variability, calculated by the relative standard deviation, improved from about 50% (pure manual registration) to 0.5% (rough manual preconfiguration and subsequent fine registration with the automatic fine matching process).
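
    A hedged sketch of the fine-matching step is shown below: a derivative-free optimizer (Powell) stands in for the gradient-based search described above, the matching function is a simple overlap between the voxelized implant model and the MR volume, and all shapes and parameter values are toy assumptions. Note that SciPy's affine_transform uses a pull (output-to-input) mapping, so the recovered translation is expressed in that convention.

```python
import numpy as np
from scipy import ndimage, optimize

def rigid_matrix(rx, ry, rz):
    """Rotation matrix from three Euler angles (radians), composed as Rz @ Ry @ Rx."""
    cx, sx, cy, sy, cz, sz = np.cos(rx), np.sin(rx), np.cos(ry), np.sin(ry), np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def matching_value(params, model_vox, mr_volume):
    """Similarity between the transformed implant model and the MR volume (higher is better)."""
    rx, ry, rz, tx, ty, tz = params
    R = rigid_matrix(rx, ry, rz)
    centre = np.array(model_vox.shape) / 2.0
    offset = centre - R @ centre + np.array([tx, ty, tz])   # rotate about the volume centre
    moved = ndimage.affine_transform(model_vox.astype(float), R, offset=offset, order=1)
    return float(np.sum(moved * mr_volume))                  # overlap-style matching function

# Toy data: a small cuboid "implant" model and an MR volume containing a shifted copy of it.
model = np.zeros((48, 48, 48)); model[20:28, 18:30, 22:26] = 1.0
mr = ndimage.shift(model, (2.0, -1.5, 1.0), order=1)

start = np.zeros(6)  # corresponds to the rough manual preconfiguration
res = optimize.minimize(lambda p: -matching_value(p, model, mr), start, method="Powell")
print("estimated (rx, ry, rz, tx, ty, tz):", np.round(res.x, 2))
```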

  7. Combining Different Modalities for 3D Imaging of Biological Objects

    CERN Document Server

    Tsyganov, E; Kulkarni, P; Mason, R; Parkey, R; Seliuonine, S; Shay, J; Soesbe, T; Zhezher, V; Zinchenko, A I

    2005-01-01

    A resolution enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a 57Co source and 98mTc-MDP injected into mice. To further enhance the investigating power of tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown in this paper, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible depth-dependent corrections necessary for bioluminescence 3D reconstruction in biological objects. ...

  8. Application of Medical Imaging Software to 3D Visualization of Astronomical Data

    CERN Document Server

    Borkin, M; Halle, M; Alan, D; Borkin, Michelle; Goodman, Alyssa; Halle, Michael; Alan, Douglas

    2006-01-01

    The AstroMed project at Harvard University's Initiative in Innovative Computing (IIC) is working on improved visualization and data sharing solutions applicable to the fields of both astronomy and medicine. The current focus is on the application of medical imaging visualization and analysis techniques to three-dimensional astronomical data. The 3D Slicer and OsiriX medical imaging tools have been used to make isosurface and volumetric models in RA-DEC-velocity space of the Perseus star forming region from the COMPLETE Survey of Star Forming Region's spectral line maps. 3D Slicer, a brain imaging and visualization computer application developed at Brigham and Women's Hospital's Surgical Planning Lab, is capable of displaying volumes (i.e. data cubes), displaying slices in any direction through the volume, generating 3D isosurface models from the volume which can be viewed and rotated in 3D space, and making 3D models of label maps (for example CLUMPFIND output). OsiriX is able to generate volumetric models fr...

  9. Image Appraisal for 2D and 3D Electromagnetic Inversion

    Energy Technology Data Exchange (ETDEWEB)

    Alumbaugh, D.L.; Newman, G.A.

    1999-01-28

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two and three dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be directly calculated. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process such as data noise and incorrect a priori assumptions about the imaged model map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic cross well EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
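
    For the direct-inversion case described above, the appraisal quantities can be written down compactly; the dense-matrix sketch below assumes a Tikhonov-regularized least-squares inverse and a toy Jacobian, whereas the 3D case in the paper estimates these quantities statistically with conjugate-gradient techniques instead.

```python
import numpy as np

def appraisal(J, lam, data_var=1.0):
    """Model resolution matrix and posterior model covariance for a linearized,
    Tikhonov-regularized least-squares inversion with Jacobian J."""
    JtJ = J.T @ J
    G = np.linalg.inv(JtJ + lam * np.eye(J.shape[1])) @ J.T   # generalized inverse
    R = G @ J                                                  # model resolution matrix
    Cm = data_var * (G @ G.T)                                  # posterior model covariance
    return R, Cm

# Toy sensitivity matrix: fewer data than model cells, so resolution is imperfect.
rng = np.random.default_rng(0)
J = rng.normal(size=(30, 50))
R, Cm = appraisal(J, lam=0.1)

# diag(R) shows how well each cell is resolved; sqrt(diag(Cm)) maps data noise into parameter error.
print("resolution diag (first 5):", np.round(np.diag(R)[:5], 2))
print("parameter std (first 5):", np.round(np.sqrt(np.diag(Cm))[:5], 2))
# A single column of R (as in the column-wise analysis above) shows the spatial smearing of one cell.
print("column 10 total spread:", np.round(np.abs(R[:, 10]).sum(), 2))
```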

  10. Optimal Point Spread Function Design for 3D Imaging

    Science.gov (United States)

    Shechtman, Yoav; Sahl, Steffen J.; Backer, Adam S.; Moerner, W. E.

    2015-01-01

    To extract from an image of a single nanoscale object maximum physical information about its position, we propose and demonstrate a framework for pupil-plane modulation for 3D imaging applications requiring precise localization, including single-particle tracking and super-resolution microscopy. The method is based on maximizing the information content of the system, by formulating and solving the appropriate optimization problem – finding the pupil-plane phase pattern that would yield a PSF with optimal Fisher information properties. We use our method to generate and experimentally demonstrate two example PSFs: one optimized for 3D localization precision over a 3 μm depth of field, and another with an unprecedented 5 μm depth of field, both designed to perform under physically common conditions of high background signals. PMID:25302889

  11. 3D reconstruction of concave surfaces using polarisation imaging

    Science.gov (United States)

    Sohaib, A.; Farooq, A. R.; Ahmed, J.; Smith, L. N.; Smith, M. L.

    2015-06-01

    This paper presents a novel algorithm for improved shape recovery using polarisation-based photometric stereo. The majority of previous research using photometric stereo involves 3D reconstruction using both the diffuse and specular components of light; however, this paper suggests the use of the specular component only as it is the only form of light that comes directly off the surface without subsurface scattering or interreflections. Experiments were carried out on both real and synthetic surfaces. Real images were obtained using a polarisation-based photometric stereo device while synthetic images were generated using PovRay® software. The results clearly demonstrate that the proposed method can extract three-dimensional (3D) surface information effectively even for concave surfaces with complex texture and surface reflectance.

  12. Volume Sculpting: Intuitive, Interactive 3D Shape Modelling

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas

    A system for interactive modelling of 3D shapes on a computer is presented. The system is intuitive and has a flat learning curve. It is especially well suited to the creation of organic shapes and shapes of complex topology. The interaction is simple; the user can either add new shape features...

  13. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    Science.gov (United States)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    It is a big challenge to capture and model 3D information of the built environment. A number of techniques and technologies are now in use, including GPS, photogrammetric applications and remote sensing applications. The experiment uses multi-source data fusion for 3D scene reconstruction based on 3D laser scanning: the laser point cloud data serve as the basis, a digital orthophoto map is used as an auxiliary source, and 3ds Max software is the basic tool for building the three-dimensional scene. The article covers data acquisition, data preprocessing and 3D scene construction. The results show that the 3D scene is realistic and its accuracy meets the needs of 3D scene construction.

  14. Utilization of multiple frequencies in 3D nonlinear microwave imaging

    DEFF Research Database (Denmark)

    Jensen, Peter Damsgaard; Rubæk, Tonny; Mohr, Johan Jacob

    2012-01-01

    The use of multiple frequencies in a nonlinear microwave algorithm is considered. Using multiple frequencies allows for obtaining the improved resolution available at the higher frequencies while retaining the regularizing effects of the lower frequencies. However, a number of different challenges...... at lower frequencies are used as starting guesses for reconstructions at higher frequencies. The performance is illustrated using simulated 2-D data and data obtained with the 3-D DTU microwave imaging system....

  15. Joint calibration of 3D resist image and CDSEM

    Science.gov (United States)

    Chou, C. S.; He, Y. Y.; Tang, Y. P.; Chang, Y. T.; Huang, W. C.; Liu, R. G.; Gau, T. S.

    2013-04-01

    Traditionally, an optical proximity correction model is to evaluate the resist image at a specific depth within the photoresist and then extract the resist contours from the image. Calibration is generally implemented by comparing resist contours with the critical dimensions (CD). The wafer CD is usually collected by a scanning electron microscope (SEM), which evaluates the CD based on some criterion that is a function of gray level, differential signal, threshold or other parameters set by the SEM. However, the criterion does not reveal which depth the CD is obtained at. This depth inconsistency between modeling and SEM makes the model calibration difficult for low k1 images. In this paper, the vertical resist profile is obtained by modifying the model from planar (2D) to quasi-3D approach and comparing the CD from this new model with SEM CD. For this quasi-3D model, the photoresist diffusion along the depth of the resist is considered and the 3D photoresist contours are evaluated. The performance of this new model is studied and is better than the 2D model.

  16. Discrete Method of Images for 3D Radio Propagation Modeling

    Science.gov (United States)

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  17. 3D thermal medical image visualization tool: Integration between MRI and thermographic images.

    Science.gov (United States)

    Abreu de Souza, Mauren; Chagas Paz, André Augusto; Sanches, Ionildo Jóse; Nohama, Percy; Gamba, Humberto Remigio

    2014-01-01

    Three-dimensional medical image reconstruction using different imaging modalities requires registration techniques that are, in general, based on the stacking of 2D MRI/CT image slices. In this way, the integration of two different imaging modalities, anatomical (MRI/CT) and physiological (infrared imaging), to generate a 3D thermal model is a new methodology still under development. This paper presents a 3D THERMO interface that provides flexibility for the 3D visualization: it incorporates the DICOM parameters; different color scale palettes at the final 3D model; 3D visualization at different planes of section; and a filtering option that provides better image visualization. To summarize, the 3D thermographic medical image visualization provides a realistic and precise medical tool. The merging of two different imaging modalities allows better quality and more fidelity, especially for medical applications in which the temperature changes are clinically significant.

  18. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    Science.gov (United States)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combination of effects from the manual editing of images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m, respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and the geometric quality of the improved models versus the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (at 10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  19. Feature detection on 3D images of dental imprints

    Science.gov (United States)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The position of feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.
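
    The two-stage minima-tracking idea can be caricatured with SciPy as follows: Gaussian smoothing at four scales plus a local-minimum filter stands in for the watershed-style procedure described above, and the synthetic height map, the scales and the window size are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from scipy import ndimage

def multiscale_minima_map(height_map, sigmas=(1, 2, 4, 8), window=5):
    """Track local minima of a dental-imprint height map across four smoothing scales
    and accumulate them into a single candidate feature-point map."""
    accumulator = np.zeros(height_map.shape, dtype=int)
    for sigma in sigmas:
        smooth = ndimage.gaussian_filter(height_map.astype(float), sigma)
        local_min = smooth == ndimage.minimum_filter(smooth, size=window)
        accumulator += local_min.astype(int)
    return accumulator            # points found at all scales are the strongest candidates

# Toy 2.5D profile with two pits standing in for interstices between teeth.
yy, xx = np.mgrid[:128, :128]
profile = 0.01 * (yy - 64) ** 2 \
          - np.exp(-((xx - 40) ** 2 + (yy - 64) ** 2) / 50) \
          - np.exp(-((xx - 90) ** 2 + (yy - 60) ** 2) / 50)
candidates = multiscale_minima_map(profile)
print("points surviving all four scales:", int(np.sum(candidates == 4)))
```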

  20. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Energy Technology Data Exchange (ETDEWEB)

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
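
    A hedged sketch of the phase-matching idea follows: image gradient phases and model edge-normal phases are encoded as complex unit vectors, so summing cosines of their differences over all template positions becomes an FFT-based correlation. The square template, its phase assignments and the score normalization are illustrative assumptions, and the search over model orientations is omitted.

```python
import numpy as np
from scipy.signal import fftconvolve

def phase_match_surface(image, template_phase, template_mask):
    """Similarity of image gradient phase to model edge-normal phase, evaluated at
    every template position using FFT-based correlation."""
    gy, gx = np.gradient(image.astype(float))
    img_phase = np.exp(1j * np.arctan2(gy, gx))           # unit vectors of directional-derivative phase
    tmpl = template_mask * np.exp(-1j * template_phase)   # conjugate phases on projected model edges
    # Correlation == convolution with the flipped template; the real part sums cos(phase differences).
    score = fftconvolve(img_phase, tmpl[::-1, ::-1], mode="same").real
    return score / max(template_mask.sum(), 1)

# Toy scene: a bright square; the template holds edge-normal phases of a square model.
img = np.zeros((128, 128)); img[50:80, 60:90] = 1.0
t_phase = np.zeros((31, 31)); t_mask = np.zeros((31, 31))
t_mask[0, :], t_mask[-1, :], t_mask[:, 0], t_mask[:, -1] = 1, 1, 1, 1
t_phase[:, 0], t_phase[:, -1] = 0.0, np.pi                 # left/right edge normals
t_phase[0, :], t_phase[-1, :] = np.pi / 2, -np.pi / 2      # top/bottom edge normals
surface = phase_match_surface(img, t_phase, t_mask)
print("best match near", np.unravel_index(np.argmax(surface), surface.shape))
```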

  1. 3D Lunar Terrain Reconstruction from Apollo Images

    Science.gov (United States)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.

  2. 3D mapping of cerebrospinal fluid local volume changes in patients with hydrocephalus treated by surgery: preliminary study

    Energy Technology Data Exchange (ETDEWEB)

    Hodel, Jerome [Hopital Roger Salengro, Department of Neuroradiology, Lille (France); Hopital Roger Salengro, Service de Neuroradiologie, Lille (France); Besson, Pierre; Pruvo, Jean-Pierre; Leclerc, Xavier [Hopital Roger Salengro, Department of Neuroradiology, Lille (France); Rahmouni, Alain; Grandjacques, Benedicte; Luciani, Alain [Hopital Henri Mondor, Department of Radiology, Creteil (France); Petit, Eric; Lebret, Alain [Signals Images and Intelligent Systems Laboratory, Creteil (France); Outteryck, Olivier [Hopital Roger Salengro, Department of Neurology, Lille (France); Benadjaoud, Mohamed Amine [Radiation Epidemiology Team, CESP, Centre for Research in Epidemiology and Population Health U1018, Villejuif (France); Maraval, Anne [Hopital Henri Mondor, Department of Neuroradiology, Creteil (France); Decq, Philippe [Hopital Henri Mondor, Department of Neurosurgery, Creteil (France)

    2014-01-15

    To develop automated deformation modelling for the assessment of cerebrospinal fluid (CSF) local volume changes in patients with hydrocephalus treated by surgery. Ventricular and subarachnoid CSF volume changes were mapped by calculating the Jacobian determinant of the deformation fields obtained after non-linear registration of pre- and postoperative images. A total of 31 consecutive patients, 15 with communicating hydrocephalus (CH) and 16 with non-communicating hydrocephalus (NCH), were investigated before and after surgery using a 3D SPACE (sampling perfection with application optimised contrast using different flip-angle evolution) sequence. Two readers assessed CSF volume changes using 3D colour-encoded maps. The Evans index and postoperative volume changes of the lateral ventricles and sylvian fissures were quantified and statistically compared. Before surgery, sylvian fissure and brain ventricle volume differed significantly between CH and NCH (P = 0.001 and P = 0.025, respectively). After surgery, 3D colour-encoded maps allowed for the visual recognition of the CSF volume changes in all patients. The amounts of ventricle volume loss of CH and NCH patients were not significantly different (P = 0.30), whereas readjustment of the sylvian fissure volume was conflicting in CH and NCH patients (P < 0.001). The Evans index correlated with ventricle volume in NCH patients. 3D mapping of CSF volume changes is feasible providing a quantitative follow-up of patients with hydrocephalus. (orig.)
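
    The volume-change mapping described above reduces to computing the Jacobian determinant of the deformation field; a minimal NumPy sketch is given below, with a synthetic displacement field standing in for the output of the non-linear registration.

```python
import numpy as np

def jacobian_determinant(disp):
    """Local volume-change map from a dense 3D displacement field.

    disp has shape (3, Z, Y, X): displacement components along z, y, x (in voxels).
    Values > 1 indicate local expansion of CSF spaces, values < 1 indicate contraction."""
    grads = [np.gradient(disp[i]) for i in range(3)]   # d(u_i)/d(z, y, x)
    J = np.empty(disp.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = (1.0 if i == j else 0.0) + grads[i][j]  # identity + displacement gradient
    return np.linalg.det(J)

# Toy field: a uniform 1% expansion along every axis -> determinant close to 1.03 everywhere.
zz, yy, xx = np.mgrid[0:32, 0:32, 0:32].astype(float)
disp = np.stack([0.01 * zz, 0.01 * yy, 0.01 * xx])
jac = jacobian_determinant(disp)
print("mean Jacobian determinant:", round(float(jac.mean()), 3))
```

    In a colour-encoded map such a determinant volume would simply be thresholded around 1 to separate expanding from contracting CSF regions.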

  3. Evaluation of right ventricular volume and function by 2D and 3D echocardiography compared to MRI

    DEFF Research Database (Denmark)

    Kjaergaard, Jesper; Petersen, Claus Leth; Kjaer, Andreas;

    2005-01-01

    AIMS: Radionuclide techniques, and recently MRI, have been used for clinical evaluation of right ventricular (RV) function (RVEF) and volumes; but with the introduction of 3D echocardiography, new echocardiographic possibilities for RV evaluation independent of geometrical assumptions have emerged. This study compared classic and new echocardiographic and radionuclide estimates, including gated blood pool single-photon emission computed tomography (SPECT), of RV size and function to RV volumes and ejection fraction (RVEF) measured by magnetic resonance imaging (MRI). METHODS AND RESULTS...

  4. MUTUAL INFORMATION BASED 3D NON-RIGID REGISTRATION OF CT/MR ABDOMEN IMAGES

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A mutual information based 3D non-rigid registration approach was proposed for the registration of deformable CT/MR body abdomen images. The Parzen Windows Density Estimation (PWDE) method is adopted to calculate the mutual information between the two modalities of CT and MRI abdomen images. By maximizing MI between the CT and MR volume images, the overlap between them is maximized, which means that the two CT and MR body images match each other best. Visible Human Project (VHP) Male abdomen CT and MRI data are used as experimental data sets. The experimental results indicate that this approach to non-rigid 3D registration of CT/MR body abdominal images can be applied effectively and automatically, without any prior processing procedures such as segmentation and feature extraction, but it has the main drawback of a very long computation time. Key words: medical image registration; multi-modality; mutual information; non-rigid; Parzen window density estimation
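
    A hedged sketch of the similarity measure used above: the joint histogram below, smoothed with a Gaussian as a crude stand-in for Parzen-window density estimation, yields the mutual information between two overlapping volumes; the synthetic CT/MR pair and the bin settings are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mutual_information(fixed, moving, bins=64, sigma=1.0):
    """Mutual information of two already resampled, overlapping image volumes.
    Gaussian smoothing of the joint histogram is a simple stand-in for
    Parzen-window density estimation."""
    joint, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    joint = gaussian_filter(joint, sigma)       # Parzen-like smoothing of the joint density
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# MI is highest when the two volumes are aligned, even under a non-linear intensity relation.
rng = np.random.default_rng(1)
ct = gaussian_filter(rng.random((32, 64, 64)), 3)
mr = np.exp(-ct) + 0.01 * rng.random(ct.shape)   # same "anatomy", different intensity mapping
print("aligned MI:", round(mutual_information(ct, mr), 3))
print("shifted MI:", round(mutual_information(ct, np.roll(mr, 8, axis=2)), 3))
```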

  5. 3D quantification of microclimate volume in layered clothing for the prediction of clothing insulation.

    Science.gov (United States)

    Lee, Yejin; Hong, Kyunghi; Hong, Sung-Ae

    2007-05-01

    Garment fit and resultant air volume is a crucial factor in thermal insulation, and yet, it has been difficult to quantify the air volume of clothing microclimate and relate it to the thermal insulation value just using the information on the size of clothing pattern without actual 3D volume measurement in wear condition. As earlier methods for the computation of air volume in clothing microclimate, vacuum over suit and circumference model have been used. However, these methods have inevitable disadvantages in terms of cost or accuracy due to the limitations of measurement equipment. In this paper, the phase-shifting moiré topography was introduced as one of the 3D scanning tools to measure the air volume of clothing microclimate quantitatively. The purpose of this research is to adopt a non-contact image scanning technology, phase-shifting moiré topography, to ascertain relationship between air volume and insulation value of layered clothing systems in wear situations where the 2D fabric creates new conditions in 3D spaces. The insulation of vests over shirts as a layered clothing system was measured with a thermal manikin in the environmental condition of 20 degrees C, 65% RH and air velocity of 0.79 m/s. As the pattern size increased, the insulation of the clothing system was increased. But beyond a certain limit, the insulation started to decrease due to convection and ventilation, which is more apparent when only the vest was worn over the torso of manikin. The relationship between clothing air volume and insulation was difficult to predict with a single vest due to the extreme openings which induced active ventilation. But when the vest was worn over the shirt, the effects of thickness of the fabrics on insulation were less pronounced compared with that of air volume. In conclusion, phase-shifting moiré topography was one of the efficient and accurate ways of quantifying air volume and its distribution across the clothing microclimate. It is also noted

  6. UNDERWATER 3D MODELING: IMAGE ENHANCEMENT AND POINT CLOUD FILTERING

    Directory of Open Access Journals (Sweden)

    I. Sarakinou

    2016-06-01

    Full Text Available This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combination of effects from the manual editing of images' radiometry (captured at shallow depths) and the selection of parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5 m, 10 m and 14 m, respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results from the application of image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and the geometric quality of the improved models versus the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (at 10 m and 14 m) and different objects (part of a wreck and a small boat's wreck) in the context of ongoing research in the Laboratory of Photogrammetry and Remote Sensing.

  7. JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases

    Directory of Open Access Journals (Sweden)

    Scott Mark

    2005-03-01

    Full Text Available Abstract Background Many three-dimensional (3D) images are routinely collected in biomedical research, and a number of digital atlases with associated anatomical and other information have been published. A number of tools are available for viewing this data, ranging from commercial visualization packages to freely available, typically system-architecture-dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture-neutral Java programming language. Results We report the development of a freely available Java based viewer for 3D image data, describe the structure and functionality of the viewer, and explain how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing is available. The interface is developed in Java, with Java3D providing the 3D rendering. For efficiency, the image data are manipulated using the Woolz image-processing library provided as a dynamically linked module for each machine architecture. Conclusion We conclude that Java provides an appropriate environment for efficient development of these tools, and techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily.

  8. Digital holographic microscopy for imaging growth and treatment response in 3D tumor models

    Science.gov (United States)

    Li, Yuyu; Petrovic, Ljubica; Celli, Jonathan P.; Yelleswarapu, Chandra S.

    2014-03-01

    While three-dimensional tumor models have emerged as valuable tools in cancer research, the ability to longitudinally visualize the 3D tumor architecture restored by these systems is limited with microscopy techniques that provide only qualitative insight into sample depth, or which require terminal fixation for depth-resolved 3D imaging. Here we report the use of digital holographic microscopy (DHM) as a viable microscopy approach for quantitative, non-destructive longitudinal imaging of in vitro 3D tumor models. Following established methods we prepared 3D cultures of pancreatic cancer cells in overlay geometry on extracellular matrix beds and obtained digital holograms at multiple timepoints throughout the duration of growth. The holograms were digitally processed and the unwrapped phase images were obtained to quantify nodule thickness over time under normal growth, and in cultures subject to chemotherapy treatment. In this manner total nodule volumes are rapidly estimated and demonstrated here to show contrasting time dependent changes during growth and in response to treatment. This work suggests the utility of DHM to quantify changes in 3D structure over time and suggests the further development of this approach for time-lapse monitoring of 3D morphological changes during growth and in response to treatment that would otherwise be impractical to visualize.
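
    Converting an unwrapped phase image into thickness and volume follows from the optical path difference; the sketch below assumes a uniform refractive-index difference between nodule and medium, and all numeric parameters (wavelength, delta n, pixel size) are illustrative rather than the study's values.

```python
import numpy as np

def nodule_volume_from_phase(unwrapped_phase, wavelength_um, delta_n, pixel_um):
    """Estimate nodule thickness and total volume from an unwrapped DHM phase image.

    Assumes a uniform refractive-index difference delta_n between nodule and medium."""
    opd = unwrapped_phase * wavelength_um / (2 * np.pi)   # optical path difference (um)
    thickness = opd / delta_n                             # physical thickness per pixel (um)
    volume = thickness.sum() * pixel_um ** 2              # integrate over the field of view (um^3)
    return thickness, volume

# Toy phase map: a smooth dome-shaped nodule; the parameter values are illustrative only.
yy, xx = np.mgrid[:256, :256]
phase = 6.0 * np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / (2 * 40.0 ** 2))   # radians
thickness, vol = nodule_volume_from_phase(phase, wavelength_um=0.633, delta_n=0.04, pixel_um=0.5)
print(f"peak thickness ~ {thickness.max():.1f} um, nodule volume ~ {vol:.0f} um^3")
```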

  9. 3D texture analysis in renal cell carcinoma tissue image grading.

    Science.gov (United States)

    Kim, Tae-Yun; Cho, Nam-Hoon; Jeong, Goo-Bo; Bengtsson, Ewert; Choi, Heung-Kook

    2014-01-01

    One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system.
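
    The wavelet branch of the feature extraction described above can be sketched with PyWavelets: one level of a 3D Haar decomposition and the mean energy of each sub-band as texture features. The toy volumes and the choice of sub-band energy as the feature are illustrative assumptions; the study combines such features with principal component analysis and predefined statistical classifiers.

```python
import numpy as np
import pywt

def haar3d_texture_features(volume):
    """Energy of each 3D Haar wavelet sub-band after one level of decomposition."""
    coeffs = pywt.dwtn(volume.astype(float), "haar")   # keys: 'aaa', 'aad', ..., 'ddd'
    return {band: float(np.mean(c ** 2)) for band, c in sorted(coeffs.items())}

# Two toy "tissue" volumes with different texture coarseness give different sub-band energies.
rng = np.random.default_rng(2)
fine = rng.random((32, 32, 32))
coarse = np.repeat(np.repeat(np.repeat(rng.random((8, 8, 8)), 4, 0), 4, 1), 4, 2)
print("fine  :", haar3d_texture_features(fine))
print("coarse:", haar3d_texture_features(coarse))
```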

  10. 3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading

    Directory of Open Access Journals (Sweden)

    Tae-Yun Kim

    2014-01-01

    Full Text Available One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system.

  11. 3D IMAGING OF INDIVIDUAL PARTICLES: A REVIEW

    Directory of Open Access Journals (Sweden)

    Eric Pirard

    2012-06-01

    Full Text Available In recent years, impressive progress has been made in digital imaging and in particular in three dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three dimensional imaging with a special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear distinction between the surfometric and volumetric principles. The literature review is as broad as possible covering materials science as well as biology while keeping an eye on emerging technologies in optics and physics. The paper should be of interest to any scientist trying to picture particles in 3D with the best possible resolution for accurate size and shape estimation. Though techniques are adequate for nanoscopic and microscopic particles, no special size limit has been considered while compiling the review.

  12. Development of 3D microwave imaging reflectometry in LHD (invited).

    Science.gov (United States)

    Nagayama, Y; Kuwahara, D; Yoshinaga, T; Hamada, Y; Kogi, Y; Mase, A; Tsuchiya, H; Tsuji-Iio, S; Yamaguchi, S

    2012-10-01

    Three-dimensional (3D) microwave imaging reflectometry has been developed on the Large Helical Device to visualize the fluctuating reflection surface caused by density fluctuations. The plasma is illuminated by a probe wave with four frequencies, which correspond to four radial positions. The imaging optics forms an image of the cut-off surface onto 2D (7 × 7 channel) horn antenna mixer arrays. Multi-channel receivers have also been developed using micro-strip-line technology to handle many channels at reasonable cost. This system was first applied to observe the edge harmonic oscillation (EHO), an MHD mode with many harmonics that appears in the edge plasma. A narrow structure along field lines is observed during EHO.

  13. Low cost 3D scanning process using digital image processing

    Science.gov (United States)

    Aguilar, David; Romero, Carlos; Martínez, Fernando

    2017-02-01

    This paper shows the design and building of a low-cost 3D scanner, able to digitize solid objects through contactless data acquisition, using active object reflection. 3D scanners are used in different fields such as science, engineering and entertainment; they are classified into contact and contactless scanners, the latter being the most commonly used although more expensive. This low-cost prototype performs a vertical scan of the object using a fixed camera and a moving horizontal laser line, which is deformed according to the 3-dimensional surface of the solid. Digital image processing is used to analyze the deformation detected by the camera, allowing the 3D coordinates to be determined by triangulation. The obtained information is processed by a Matlab script, which gives the user a point cloud corresponding to each horizontal scan. The obtained results show acceptable quality and significant detail of the digitized objects, making this prototype (built on a LEGO Mindstorms NXT kit) a versatile and cheap tool that can be used for many applications, mainly by engineering students.
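
    The triangulation step can be sketched as a ray-plane intersection: each lit pixel on the laser stripe back-projects to a ray through the camera centre, and that ray is intersected with the calibrated laser plane. The short Python/NumPy example below illustrates this geometry; the intrinsic matrix K and the laser-plane parameters are assumed to come from a prior calibration and are not part of the original paper.

        import numpy as np

        def triangulate_stripe(pixels, K, plane_n, plane_d):
            """Intersect back-projected camera rays with the laser plane.

            pixels  : (N, 2) array of (u, v) image coordinates on the laser stripe
            K       : 3x3 camera intrinsic matrix (assumed known from calibration)
            plane_n : unit normal of the laser plane in camera coordinates
            plane_d : offset so that points X on the plane satisfy plane_n . X = plane_d
            Returns an (N, 3) array of points in camera coordinates.
            """
            uv1 = np.column_stack([pixels, np.ones(len(pixels))])
            rays = uv1 @ np.linalg.inv(K).T          # back-projected ray directions
            t = plane_d / (rays @ plane_n)           # ray parameter at the plane
            return rays * t[:, None]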

  14. Physically based analysis of deformations in 3D images

    Science.gov (United States)

    Nastar, Chahab; Ayache, Nicholas

    1993-06-01

    We present a physically based deformable model which can be used to track and to analyze the non-rigid motion of dynamic structures in time sequences of 2-D or 3-D medical images. The model considers an object undergoing an elastic deformation as a set of masses linked by springs, where the natural lengths of the springs are set equal to zero and are replaced by a set of constant equilibrium forces, which characterize the shape of the elastic structure in the absence of external forces. This model has the extremely nice property of yielding dynamic equations which are linear and decoupled for each coordinate, whatever the amplitude of the deformation. It provides a reduced algorithmic complexity, and a sound framework for modal analysis, which allows a compact representation of a general deformation by a reduced number of parameters. The power of the approach to segment, track, and analyze 2-D and 3-D images is demonstrated by a set of experimental results on various complex medical images.

  15. Image-Based 3D Face Modeling System

    Directory of Open Access Journals (Sweden)

    Vladimir Vezhnevets

    2005-08-01

    Full Text Available This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a set of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation, including all the optional manual corrections, takes only 2∼3 minutes.

  16. 3D CT modeling of hepatic vessel architecture and volume calculation in living donated liver transplantation

    Energy Technology Data Exchange (ETDEWEB)

    Frericks, Bernd B. [Medizinische Hochschule Hannover, Diagnostische Radiologie, Hannover (Germany); Klinik und Poliklinik fuer Radiologie und Nuklearmedizin, Universitaetsklinikum Benjamin Franklin, Freie Universitaet Berlin, Hindenburgdamm 30, 12200, Berlin (Germany); Caldarone, Franco C.; Savellano, Dagmar Hoegemann; Stamm, Georg; Kirchhoff, Timm D.; Shin, Hoen-Oh; Galanski, Michael [Medizinische Hochschule Hannover, Diagnostische Radiologie, Hannover (Germany); Nashan, Bjoern; Klempnauer, Juergen [Medizinische Hochschule Hannover, Viszeral und Transplantationschirurgie, Hannover (Germany); Schenk, Andrea; Selle, Dirk; Spindler, Wolf; Peitgen, Heinz-Otto [Centrum fuer Medizinische Diagnosesysteme und Visualisierung, Bremen (Germany)

    2004-02-01

    The aim of this study was to evaluate a software tool for non-invasive preoperative volumetric assessment of potential donors in living donated liver transplantation (LDLT). Biphasic helical CT was performed in 56 potential donors. Data sets were post-processed using a non-commercial software tool for segmentation, volumetric analysis and visualisation of liver segments. Semi-automatic definition of liver margins allowed the segmentation of parenchyma. Hepatic vessels were delineated using a region-growing algorithm with automatically determined thresholds. Volumes and shapes of liver segments were calculated automatically based on individual portal-venous branches. Results were visualised three-dimensionally and statistically compared with conventional volumetry and the intraoperative findings in 27 transplanted cases. Image processing was easy to perform within 23 min. Of the 56 potential donors, 27 were excluded from LDLT because of inappropriate liver parenchyma or vascular architecture. Two recipients were not transplanted due to poor clinical conditions. In the 27 transplanted cases, preoperatively visualised vessels were confirmed, and only one undetected accessory hepatic vein was revealed. Calculated graft volumes were 1110±180 ml for right lobes, 820 ml for the left lobe and 270±30 ml for segments II+III. The calculated volumes and intraoperatively measured graft volumes correlated significantly. No significant differences between the presented automatic volumetry and the conventional volumetry were observed. A novel image processing technique was evaluated which allows a semi-automatic volume calculation and 3D visualisation of the different liver segments. (orig.)
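
    Once the segments have been labeled, the volumetric part of such an analysis reduces to counting voxels and scaling by the voxel size. The sketch below (Python/NumPy) shows that step only; the label image, label values and voxel dimensions are placeholders, not values from the study.

        import numpy as np

        def segment_volume_ml(label_volume, label, voxel_size_mm=(0.7, 0.7, 2.0)):
            """Volume of one labeled liver segment in millilitres.

            `label_volume` is an integer segmentation volume; the voxel size is an
            illustrative value, not the acquisition parameter of the study.
            """
            voxel_ml = float(np.prod(voxel_size_mm)) / 1000.0   # mm^3 per voxel -> ml
            return np.count_nonzero(label_volume == label) * voxel_ml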

  17. 3D imaging of neutron tracks using confocal microscopy

    Science.gov (United States)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (or SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses through the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. Post etch the plastics had been treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm2). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. The range of track diameter observed was between 4

  18. Extracting 3D layout from a single image using global image structures.

    Science.gov (United States)

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization and image and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting the pixel-level 3D layout since it implies how pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure, and then uses the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image level. Using latent variables, we implicitly model the sub-level semantics of the image, which enriches the expressiveness of our model. After the image-level structure is obtained, it is used as prior knowledge to infer the pixel-wise 3D layout. Experiments show that the results of our model outperform the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior information yields accurate 3D scene layout segmentation.

  19. 3D imaging by serial block face scanning electron microscopy for materials science using ultramicrotomy.

    Science.gov (United States)

    Hashimoto, Teruo; Thompson, George E; Zhou, Xiaorong; Withers, Philip J

    2016-04-01

    Mechanical serial block face scanning electron microscopy (SBFSEM) has emerged as a means of obtaining three dimensional (3D) electron images over volumes much larger than possible by focused ion beam (FIB) serial sectioning and at higher spatial resolution than achievable with conventional X-ray computed tomography (CT). Such high resolution 3D electron images can be employed for precisely determining the shape, volume fraction, distribution and connectivity of important microstructural features. While soft (fixed or frozen) biological samples are particularly well suited for nanoscale sectioning using an ultramicrotome, the technique can also produce excellent 3D images at electron microscope resolution in a time and resource-efficient manner for engineering materials. Currently, a lack of appreciation of the capabilities of ultramicrotomy and the operational challenges associated with minimising artefacts for different materials is limiting its wider application to engineering materials. Consequently, this paper outlines the current state of the art for SBFSEM examining in detail how damage is introduced during slicing and highlighting strategies for minimising such damage. A particular focus of the study is the acquisition of 3D images for a variety of metallic and coated systems.

  20. A 3D image filter for parameter-free segmentation of macromolecular structures from electron tomograms.

    Directory of Open Access Journals (Sweden)

    Rubbiya A Ali

    Full Text Available 3D image reconstruction of large cellular volumes by electron tomography (ET) at high (≤ 5 nm) resolution can now routinely resolve organellar and compartmental membrane structures, protein coats, cytoskeletal filaments, and macromolecules. However, current image analysis methods for identifying in situ macromolecular structures within the crowded 3D ultrastructural landscape of a cell remain labor-intensive, time-consuming, and prone to user-bias and/or error. This paper demonstrates the development and application of a parameter-free, 3D implementation of the bilateral edge-detection (BLE) algorithm for the rapid and accurate segmentation of cellular tomograms. The performance of the 3D BLE filter has been tested on a range of synthetic and real biological data sets and validated against current leading filters-the pseudo 3D recursive and Canny filters. The performance of the 3D BLE filter was found to be comparable to or better than that of both the 3D recursive and Canny filters while offering the significant advantage that it requires no parameter input or optimisation. Edge widths as little as 2 pixels are reproducibly detected with signal intensity and grey scale values as low as 0.72% above the mean of the background noise. The 3D BLE thus provides an efficient method for the automated segmentation of complex cellular structures across multiple scales for further downstream processing, such as cellular annotation and sub-tomogram averaging, and provides a valuable tool for the accurate and high-throughput identification and annotation of 3D structural complexity at the subcellular level, as well as for mapping the spatial and temporal rearrangement of macromolecular assemblies in situ within cellular tomograms.

  1. A 3D image filter for parameter-free segmentation of macromolecular structures from electron tomograms.

    Science.gov (United States)

    Ali, Rubbiya A; Landsberg, Michael J; Knauth, Emily; Morgan, Garry P; Marsh, Brad J; Hankamer, Ben

    2012-01-01

    3D image reconstruction of large cellular volumes by electron tomography (ET) at high (≤ 5 nm) resolution can now routinely resolve organellar and compartmental membrane structures, protein coats, cytoskeletal filaments, and macromolecules. However, current image analysis methods for identifying in situ macromolecular structures within the crowded 3D ultrastructural landscape of a cell remain labor-intensive, time-consuming, and prone to user-bias and/or error. This paper demonstrates the development and application of a parameter-free, 3D implementation of the bilateral edge-detection (BLE) algorithm for the rapid and accurate segmentation of cellular tomograms. The performance of the 3D BLE filter has been tested on a range of synthetic and real biological data sets and validated against current leading filters-the pseudo 3D recursive and Canny filters. The performance of the 3D BLE filter was found to be comparable to or better than that of both the 3D recursive and Canny filters while offering the significant advantage that it requires no parameter input or optimisation. Edge widths as little as 2 pixels are reproducibly detected with signal intensity and grey scale values as low as 0.72% above the mean of the background noise. The 3D BLE thus provides an efficient method for the automated segmentation of complex cellular structures across multiple scales for further downstream processing, such as cellular annotation and sub-tomogram averaging, and provides a valuable tool for the accurate and high-throughput identification and annotation of 3D structural complexity at the subcellular level, as well as for mapping the spatial and temporal rearrangement of macromolecular assemblies in situ within cellular tomograms.

  2. Method for 3D Image Representation with Reducing the Number of Frames based on Characteristics of Human Eyes

    Directory of Open Access Journals (Sweden)

    Kohei Arai

    2016-10-01

    Full Text Available A method for 3D image representation that reduces the number of frames based on characteristics of the human eye is proposed, together with representation of 3D depth by changing pixel transparency. Through experiments, it is found that the proposed method allows the number of frames to be reduced to 1/6 of the original. Also, it can represent 3D depth through visual perception. Thus, real-time volume rendering can be done with the proposed method.

  3. 3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques

    Science.gov (United States)

    Abate, D.; Menna, F.; Remondino, F.; Gattari, M. G.

    2014-06-01

    The monitoring of paintings, both on canvas and on wooden supports, is a crucial issue for the preservation and conservation of this kind of artwork. Many environmental factors (e.g. humidity, temperature, illumination, etc.), as well as bad conservation practices (e.g. wrong restorations, inappropriate locations, etc.), can compromise the material conditions over time and deteriorate an artwork. The article presents an on-going project realized by a multidisciplinary team composed of the ENEA UTICT 3D GraphLab, the 3D Optical Metrology Unit of the Bruno Kessler Foundation and the Soprintendenza per i Beni Storico Artistici ed Etnoantropologici of Bologna (Italy). The goal of the project is the multi-temporal 3D documentation and monitoring of paintings - currently in a poor state of conservation - and the provision of metrics to quantify the deformations and damage.

  4. Large deformation diffeomorphic metric mapping registration of reconstructed 3D histological section images and in vivo MR images

    Directory of Open Access Journals (Sweden)

    Can Ceritoglu

    2010-05-01

    Full Text Available Our current understanding of neuroanatomical abnormalities in neuropsychiatric diseases is based largely on magnetic resonance imaging (MRI) and post mortem histological analyses of the brain. Further advances in elucidating altered brain structure in these human conditions might emerge from combining MRI and histological methods. We propose a multistage method for registering 3D volumes reconstructed from histological sections to corresponding in vivo MRI volumes from the same subjects: (1) manual segmentation of white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) compartments in histological sections, (2) alignment of consecutive histological sections using 2D rigid transformation to construct a 3D histological image volume from the aligned sections, (3) registration of reconstructed 3D histological volumes to the corresponding 3D MRI volumes using 3D affine transformation, (4) intensity normalization of images via histogram matching and (5) registration of the volumes via the intensity based Large Deformation Diffeomorphic Metric (LDDMM) image matching algorithm. Here we demonstrate the utility of our method in the transfer of cytoarchitectonic information from histological sections to identify regions of interest in MRI scans of nine adult macaque brains for morphometric analyses. LDDMM improved the accuracy of the registration via decreased distances between GM/CSF surfaces after LDDMM (0.39±0.13 mm) compared to distances after affine registration (0.76±0.41 mm). Similarly, WM/GM distances decreased to 0.28±0.16 mm after LDDMM compared to 0.54±0.39 mm after affine registration. The multistage registration method may find broad application for mapping histologically based information, e.g., receptor distributions, gene expression, onto MRI volumes.
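
    For orientation, stages (3) and (4) of such a pipeline, affine registration of the reconstructed histology volume to the MRI followed by intensity normalization via histogram matching, can be sketched with SimpleITK as below. This is a generic illustration under assumed settings (mutual-information metric, gradient-descent optimizer), not the authors' implementation, and it omits the diffeomorphic LDDMM stage (5) entirely.

        import SimpleITK as sitk

        def affine_then_match(histo_vol, mri_vol):
            """Affine-align a reconstructed histology volume to an MRI volume,
            then histogram-match its intensities (sketch of stages 3 and 4 only)."""
            fixed = sitk.Cast(mri_vol, sitk.sitkFloat32)
            moving = sitk.Cast(histo_vol, sitk.sitkFloat32)
            init = sitk.CenteredTransformInitializer(
                fixed, moving, sitk.AffineTransform(3),
                sitk.CenteredTransformInitializerFilter.GEOMETRY)
            reg = sitk.ImageRegistrationMethod()
            reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
            reg.SetOptimizerAsRegularStepGradientDescent(
                learningRate=1.0, minStep=1e-4, numberOfIterations=200)
            reg.SetInterpolator(sitk.sitkLinear)
            reg.SetInitialTransform(init, inPlace=False)
            affine = reg.Execute(fixed, moving)
            aligned = sitk.Resample(moving, fixed, affine, sitk.sitkLinear, 0.0)
            # Histogram matching so the two modalities have comparable intensity ranges.
            matched = sitk.HistogramMatching(aligned, fixed,
                                             numberOfHistogramLevels=256,
                                             numberOfMatchPoints=7)
            return matched, affine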

  5. Experiments on terahertz 3D scanning microscopic imaging

    Science.gov (United States)

    Zhou, Yi; Li, Qi

    2016-10-01

    Compared with visible light and infrared, terahertz (THz) radiation can penetrate nonpolar and nonmetallic materials. There are currently many studies on THz coaxial transmission confocal microscopy, but little research on THz dual-axis reflective confocal microscopy has been reported. In this paper, we utilized a dual-axis reflective confocal scanning microscope working at 2.52 THz. In contrast with the THz coaxial transmission confocal microscope, the microscope adopted in this paper attains higher axial resolution at the expense of reduced lateral resolution, giving a more satisfactory 3D imaging capability. Objects such as the Chinese characters "Zhong-Hua" written on paper with a pencil and a combined sheet metal with three layers were scanned. The experimental results indicate that the system can resolve the two Chinese characters "Zhong" and "Hua" and the three layers of the combined sheet metal. It can be expected that the microscope will be applicable to biology, medicine and other fields in the future due to its favorable 3D imaging capability.

  6. High Resolution 3D Radar Imaging of Comet Interiors

    Science.gov (United States)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D

  7. 3-D reconstruction of neurons from multichannel confocal laser scanning image series.

    Science.gov (United States)

    Wouterlood, Floris G

    2014-04-10

    A confocal laser scanning microscope (CLSM) collects information from a thin, focal plane and ignores out-of-focus information. Scanning of a specimen, with stepwise axial (Z-) movement of the stage in between each scan, produces Z-series of confocal images of a tissue volume, which then can be used to 3-D reconstruct structures of interest. The operator first configures separate channels (e.g., laser, filters, and detector settings) for each applied fluorochrome and then acquires Z-series of confocal images: one series per channel. Channel signal separation is extremely important. Measures to avoid bleaching are vital. Post-acquisition deconvolution of the image series is often performed to increase resolution before 3-D reconstruction takes place. In the 3-D reconstruction programs described in this unit, reconstructions can be inspected in real time from any viewing angle. By altering viewing angles and by switching channels off and on, the spatial relationships of 3-D-reconstructed structures with respect to structures visualized in other channels can be studied. Since each brand of CLSM, computer program, and 3-D reconstruction package has its own proprietary set of procedures, a general approach is provided in this protocol wherever possible.

  8. Extended volume and surface scatterometer for optical characterization of 3D-printed elements

    Science.gov (United States)

    Dannenberg, Florian; Uebeler, Denise; Weiß, Jürgen; Pescoller, Lukas; Weyer, Cornelia; Hahlweg, Cornelius

    2015-09-01

    The use of 3D printing technology seems to be a promising way for low-cost prototyping, not only of mechanical but also of optical components or systems. It is especially useful in applications where customized equipment is repeatedly subject to immediate destruction, as in experimental detonics and the like. Due to the nature of the 3D-printing process, there is a certain inner texture and therefore inhomogeneous optical behaviour to be taken into account, which also indicates mechanical anisotropy. Recent investigations are dedicated to the quantification of the optical properties of such printed bodies and the derivation of corresponding optimization strategies for the printing process. Besides mounting, alignment and illumination means, refractive and reflective elements are also subject to investigation. The proposed measurement methods are based on an imaging nearfield scatterometer for combined volume and surface scatter measurements as proposed in previous papers. In continuation of last year's paper on the use of near-field imaging, which is basically a reflective shadowgraph method, for the characterization of glossy surfaces like printed matter or laminated material, further developments are discussed. The device has been extended for the observation of photoelasticity effects and therefore the homogeneity of polarization behaviour. A refined experimental set-up is introduced. Variation of the plane of focus and the incident angle is used to separate the images of the various layers of the surface under test, and cross- and parallel-polarization techniques are applied. Practical examples from current research studies are included.

  9. Characterization of neonatal patients with intraventricular hemorrhage using 3D ultrasound cerebral ventricle volumes

    Science.gov (United States)

    Kishimoto, Jessica; Fenster, Aaron; Lee, David S. C.; de Ribaupierre, Sandrine

    2015-03-01

    One of the major non-congenital causes of neurological impairment among neonates born very preterm is intraventricular hemorrhage (IVH) - bleeding within the lateral ventricles. Most IVH patients will have a transient period of ventricle dilation that resolves spontaneously. However, the patients most at risk of long-term impairment are those with progressive ventricle dilation, as this causes macrocephaly (an abnormally enlarged head) and later increases intracranial pressure (ICP). 2D ultrasound (US) images through the fontanelles of the patients are serially acquired to monitor the progression of the ventricle dilation. These images are used to determine when interventional therapies such as needle aspiration of the built-up CSF might be indicated for a patient. Initial therapies usually begin during the third week of life. Such interventions have been shown to decrease morbidity and mortality in IVH patients; however, this comes with risks of further hemorrhage or infection; therefore only patients requiring it should be treated. Previously we have developed and validated a 3D US system to monitor the progression of ventricle volumes (VV) in IVH patients. This system has been validated using phantoms and a small set of patient images. The aim of this work is to determine the ability of 3D US-generated VV to categorize patients into those who will require interventional therapies and those who will have spontaneous resolution. Higher-risk patients could therefore be monitored more closely by re-allocating some of the resources, as low-risk infants would need less monitoring.

  10. GPU-Based Block-Wise Nonlocal Means Denoising for 3D Ultrasound Images

    Directory of Open Access Journals (Sweden)

    Liu Li

    2013-01-01

    Full Text Available Speckle suppression plays an important role in improving ultrasound (US) image quality. While many algorithms have been proposed for 2D US image denoising with remarkable filtering quality, there is relatively less work on 3D ultrasound speckle suppression, where the whole volume data rather than just one frame needs to be considered. The most crucial problem with 3D US denoising is that the computational complexity increases tremendously. The nonlocal means (NLM) approach provides an effective method for speckle suppression in US images. In this paper, a programmable graphics-processing-unit (GPU)-based fast NLM filter is proposed for 3D ultrasound speckle reduction. A Gamma distribution noise model, which is able to reliably capture image statistics for log-compressed ultrasound images, was used for the 3D block-wise NLM filter on the basis of a Bayesian framework. The most significant aspect of our method is the adoption of the powerful data-parallel computing capability of the GPU to improve the overall efficiency. Experimental results demonstrate that the proposed method can enormously accelerate the algorithm.
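
    As a CPU-side point of reference for the filter described above, the sketch below applies scikit-image's non-local means to a 3D ultrasound volume. Note that it uses the library's standard Gaussian-weighted NLM rather than the Gamma/Bayesian weighting of the paper and is not GPU-accelerated; the patch sizes and smoothing parameter are illustrative choices.

        import numpy as np
        from skimage.restoration import denoise_nl_means, estimate_sigma

        def denoise_us_volume(volume):
            """Block-wise NLM denoising of a 3D ultrasound volume (CPU sketch)."""
            vol = volume.astype(np.float32)
            sigma = float(np.mean(estimate_sigma(vol)))      # rough noise-level estimate
            return denoise_nl_means(vol, patch_size=5, patch_distance=6,
                                    h=0.8 * sigma, sigma=sigma, fast_mode=True)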

  11. 2D-3D image registration in diagnostic and interventional X-Ray imaging

    NARCIS (Netherlands)

    Bom, I.M.J. van der

    2010-01-01

    Clinical procedures that are conventionally guided by 2D x-ray imaging may benefit from the additional spatial information provided by 3D image data. For instance, guidance of minimally invasive procedures with CT or MRI data provides 3D spatial information and visualization of structures that are

  12. Research of Fast 3D Imaging Based on Multiple Mode

    Science.gov (United States)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is now widely used. Much effort has been devoted to three-dimensional imaging methods and systems in order to meet requirements for speed and accuracy. In this article, we realize a fast and high-quality stereo matching algorithm on a field programmable gate array (FPGA) using the combination of a time-of-flight (TOF) camera and a binocular camera. Images captured from the two cameras share the same spatial resolution, letting us use the depth maps taken by the TOF camera to compute an initial disparity. With the depth map constraining the stereo matching, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using FPGA (Altera Cyclone IV series) concurrent computing, we can configure a multi-core image matching system and thus perform stereo matching on an embedded system. The simulation results demonstrate that this approach speeds up the process of stereo matching, increases matching reliability and stability, realizes embedded computation, and expands the application range.
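
    The way a registered TOF depth map can bound the stereo search is easy to state: for a rectified pair, disparity is the focal length times the baseline divided by depth, so the matcher only needs to search a small window around that value. The NumPy sketch below illustrates this constraint; the focal length, baseline and margin are hypothetical parameters, not those of the paper's hardware.

        import numpy as np

        def disparity_search_range(tof_depth_m, focal_px, baseline_m, margin_px=4):
            """Per-pixel disparity search window derived from a registered TOF depth map."""
            with np.errstate(divide="ignore"):
                d0 = focal_px * baseline_m / tof_depth_m     # expected disparity from depth
            d0 = np.where(np.isfinite(d0), d0, 0.0)          # invalid depth -> no useful prior
            return d0 - margin_px, d0 + margin_px            # lower/upper disparity bounds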

  13. 3D imaging and wavefront sensing with a plenoptic objective

    Science.gov (United States)

    Rodríguez-Ramos, J. M.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Montilla, I.; Trujillo-Sevilla, J.; Femenía, B.; Puga, M.; López, M.; Fernández-Valdivia, J. J.; Rosa, F.; Dominguez-Conde, C.; Sanluis, J. C.; Rodríguez-Ramos, L. F.

    2011-06-01

    Plenoptic cameras have been developed over the last years as a passive method for 3D scanning. Several superresolution algorithms have been proposed in order to compensate for the resolution decrease associated with lightfield acquisition through a microlens array. A number of multiview stereo algorithms have also been applied in order to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphical Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we will present our own implementations related to the aforementioned aspects, but also two new developments: a portable plenoptic objective to transform any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades the telescope images due to the refractive index changes associated with turbulence. These changes require high-speed processing that justifies the use of GPUs and FPGAs. Sodium artificial Laser Guide Stars (Na-LGS, 90 km high) must be used to obtain the reference wavefront phase and the Optical Transfer Function of the system, but they are affected by defocus because of the finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new contribution relating the wave optics and computer vision fields, as many authors claim.

  14. Passive markers for tracking surgical instruments in real-time 3-D ultrasound imaging.

    Science.gov (United States)

    Stoll, Jeffrey; Ren, Hongliang; Dupont, Pierre E

    2012-03-01

    A family of passive echogenic markers is presented by which the position and orientation of a surgical instrument can be determined in a 3-D ultrasound volume, using simple image processing. Markers are attached near the distal end of the instrument so that they appear in the ultrasound volume along with the instrument tip. They are detected and measured within the ultrasound image, thus requiring no external tracking device. This approach facilitates imaging instruments and tissue simultaneously in ultrasound-guided interventions. Marker-based estimates of instrument pose can be used in augmented reality displays or for image-based servoing. Design principles for marker shapes are presented that ensure imaging system and measurement uniqueness constraints are met. An error analysis is included that can be used to guide marker design and which also establishes a lower bound on measurement uncertainty. Finally, examples of marker measurement and tracking algorithms are presented along with experimental validation of the concepts.

  15. Online reconstruction of 3D magnetic particle imaging data

    Science.gov (United States)

    Knopp, T.; Hofmann, M.

    2016-06-01

    Magnetic particle imaging is a quantitative functional imaging technique that allows imaging of the spatial distribution of super-paramagnetic iron oxide particles at high temporal resolution. The raw data acquisition can be performed at frame rates of more than 40 volumes per second. However, to date image reconstruction is performed in an offline step and thus no direct feedback is available during the experiment. Considering potential interventional applications, such direct feedback would be mandatory. In this work, an online reconstruction framework is implemented that allows direct visualization of the particle distribution on the screen of the acquisition computer with a latency of about 2 s. The reconstruction process is adaptive and performs block-averaging in order to optimize the signal quality for a given amount of reconstruction time.

  16. 3-D ultrasonic strain imaging based on a linear scanning system.

    Science.gov (United States)

    Huang, Qinghua; Xie, Bo; Ye, Pengfei; Chen, Zhaohong

    2015-02-01

    This paper introduces a 3-D strain imaging method based on a freehand linear scanning mode. We designed a linear sliding track with a position sensor and a height-adjustable holder to constrain the movement of an ultrasound probe in a freehand manner. When moving the probe along the sliding track, the corresponding positional measures for the probe are transmitted via a wireless communication module based on Bluetooth in real time. In a single examination, the probe is scanned in two sweeps in which the height of the probe is adjusted by the holder to collect the pre- and postcompression radio-frequency echoes, respectively. To generate a 3-D strain image, a volume cubic in which the voxels denote relative strains for tissues is defined according to the range of the two sweeps. With respect to the post-compression frames, several slices in the volume are determined and the pre-compression frames are re-sampled to precisely correspond to the post-compression frames. Thereby, a strain estimation method based on minimizing a cost function using dynamic programming is used to obtain the 2-D strain image for each pair of frames from the re-sampled pre-compression sweep and the post-compression sweep, respectively. A software system is developed for volume reconstruction, visualization, and measurement of the 3-D strain images. The experimental results show that high-quality 3-D strain images of phantom and human tissues can be generated by the proposed method, indicating that the proposed system can be applied for real clinical applications (e.g., musculoskeletal assessments).
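
    The underlying strain computation can be illustrated very simply: estimate the axial displacement between pre- and post-compression RF lines and take its gradient with depth. The sketch below uses a plain windowed correlation search in place of the dynamic-programming estimator of the paper, so it stands in for the idea rather than the actual method; the window, step and lag sizes are illustrative.

        import numpy as np

        def axial_strain(pre_rf, post_rf, win=64, step=32, max_lag=16):
            """Axial strain from one pre-/post-compression RF A-line pair (1D arrays)."""
            disp = []
            for s in range(max_lag, len(pre_rf) - win - max_lag, step):
                ref = pre_rf[s:s + win]
                # Normalized correlation over candidate lags; best lag = local displacement.
                scores = []
                for l in range(-max_lag, max_lag + 1):
                    seg = post_rf[s + l:s + l + win]
                    scores.append(float(np.dot(ref, seg)) /
                                  (np.linalg.norm(ref) * np.linalg.norm(seg) + 1e-12))
                disp.append(int(np.argmax(scores)) - max_lag)
            disp = np.asarray(disp, dtype=float)
            return np.gradient(disp, step)                   # strain = d(displacement)/d(depth)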

  17. 3D imaging of semiconductor components by discrete laminography

    Energy Technology Data Exchange (ETDEWEB)

    Batenburg, K. J. [Centrum Wiskunde and Informatica, P.O. Box 94079, NL-1090 GB Amsterdam, The Netherlands and iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, W. J.; Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  18. Piecewise-diffeomorphic image registration: application to the motion estimation between 3D CT lung images with sliding conditions.

    Science.gov (United States)

    Risser, Laurent; Vialard, François-Xavier; Baluwala, Habib Y; Schnabel, Julia A

    2013-02-01

    In this paper, we propose a new strategy for modelling sliding conditions when registering 3D images in a piecewise-diffeomorphic framework. More specifically, our main contribution is the development of a mathematical formalism to perform Large Deformation Diffeomorphic Metric Mapping registration with sliding conditions. We also show how to adapt this formalism to the LogDemons diffeomorphic registration framework. We finally show how to apply this strategy to estimate the respiratory motion between 3D CT pulmonary images. Quantitative tests are performed on 2D and 3D synthetic images, as well as on real 3D lung images from the MICCAI EMPIRE10 challenge. Results show that our strategy estimates accurate mappings of entire 3D thoracic image volumes that exhibit a sliding motion, as opposed to conventional registration methods which are not capable of capturing discontinuous deformations at the thoracic cage boundary. They also show that although the deformations are not smooth across the location of sliding conditions, they are almost always invertible in the whole image domain. This would be helpful for radiotherapy planning and delivery.

  19. Needle placement for piriformis injection using 3-D imaging.

    Science.gov (United States)

    Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

    2013-01-01

    Piriformis syndrome is a pain syndrome originating in the buttock and accounts for 6% - 8% of patients referred for the treatment of back and leg pain. The treatment of piriformis syndrome using fluoroscopy, computed tomography (CT), electromyography (EMG), and ultrasound (US) has become standard practice, and has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine. Subsequently, 3 points were captured to register the ultrasound image with the CT or magnetic resonance image scan. After the registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. By comparison, a recent study revealed that fluoroscopically guided injections had 30% accuracy, whereas ultrasound-guided injections tripled that accuracy. This novel technique exhibited an accurate needle guidance injection precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. This technique allows for electromagnetic instrument tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative, minimizing radiation exposure.

  20. Spectral ladar: towards active 3D multispectral imaging

    Science.gov (United States)

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions and up to hundreds of meters in distance. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects when not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.

  1. GPU-accelerated denoising of 3D magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
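
    The two quality metrics used in that comparison are both available off the shelf; the short sketch below computes them for a denoised 3D MR volume against a reference volume using scikit-image, which is simply one convenient implementation, not the one used in the study.

        from skimage.metrics import mean_squared_error, structural_similarity

        def denoising_scores(denoised, reference):
            """MSE and mean SSIM of a denoised 3D volume against a reference volume."""
            mse = mean_squared_error(reference, denoised)
            mssim = structural_similarity(
                reference, denoised,
                data_range=float(reference.max() - reference.min()))
            return mse, mssim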

  2. 3D/2D Registration of Mapping Catheter Images for Arrhythmia Interventional Assistance

    CERN Document Server

    Fallavollita, Pascal

    2009-01-01

    Radiofrequency (RF) catheter ablation has transformed treatment for tachyarrhythmias and has become first-line therapy for some tachycardias. The precise localization of the arrhythmogenic site and the positioning of the RF catheter over that site are problematic: they can impair the efficiency of the procedure and are time consuming (several hours). Electroanatomic mapping technologies are available that enable the display of the cardiac chambers and the relative position of ablation lesions. However, these are expensive and use custom-made catheters. The proposed methodology makes use of standard catheters and inexpensive technology in order to create a 3D volume of the heart chamber affected by the arrhythmia. Further, we propose a novel method that uses a priori 3D information of the mapping catheter in order to estimate the 3D locations of multiple electrodes across single view C-arm images. The monoplane algorithm is tested for feasibility on computer simulations and initial canine data.

  3. 3D/2D Registration of Mapping Catheter Images for Arrhythmia Interventional Assistance

    Directory of Open Access Journals (Sweden)

    Pascal Fallavollita

    2009-09-01

    Full Text Available Radiofrequency (RF) catheter ablation has transformed treatment for tachyarrhythmias and has become first-line therapy for some tachycardias. The precise localization of the arrhythmogenic site and the positioning of the RF catheter over that site are problematic: they can impair the efficiency of the procedure and are time consuming (several hours). Electroanatomic mapping technologies are available that enable the display of the cardiac chambers and the relative position of ablation lesions. However, these are expensive and use custom-made catheters. The proposed methodology makes use of standard catheters and inexpensive technology in order to create a 3D volume of the heart chamber affected by the arrhythmia. Further, we propose a novel method that uses a priori 3D information of the mapping catheter in order to estimate the 3D locations of multiple electrodes across single view C-arm images. The monoplane algorithm is tested for feasibility on computer simulations and initial canine data.

  4. 3D Imaging of Nanoparticle Distribution in Biological Tissue by Laser-Induced Breakdown Spectroscopy

    Science.gov (United States)

    Gimenez, Y.; Busser, B.; Trichard, F.; Kulesza, A.; Laurent, J. M.; Zaun, V.; Lux, F.; Benoit, J. M.; Panczer, G.; Dugourd, P.; Tillement, O.; Pelascini, F.; Sancey, L.; Motto-Ros, V.

    2016-07-01

    Nanomaterials represent a rapidly expanding area of research with huge potential for future medical applications. Nanotechnology indeed promises to revolutionize diagnostics, drug delivery, gene therapy, and many other areas of research. For any biological investigation involving nanomaterials, it is crucial to study the behavior of such nano-objects within tissues to evaluate both their efficacy and their toxicity. Here, we provide the first account of 3D label-free nanoparticle imaging at the entire-organ scale. The technology used is known as laser-induced breakdown spectroscopy (LIBS) and possesses several advantages such as speed of operation, ease of use and full compatibility with optical microscopy. We then used two different but complementary approaches to achieve 3D elemental imaging with LIBS: a volume reconstruction of a sliced organ and in-depth analysis. This proof-of-concept study demonstrates the quantitative imaging of both endogenous and exogenous elements within entire organs and paves the way for innumerable applications.

  5. Total 3D imaging of phase objects using defocusing microscopy: application to red blood cells

    CERN Document Server

    Roma, P M S; Amaral, F T; Agero, U; Mesquita, O N

    2014-01-01

    We present Defocusing Microscopy (DM), a bright-field optical microscopy technique able to perform total 3D imaging of transparent objects. By total 3D imaging we mean the determination of the actual shapes of the upper and lower surfaces of a phase object. We propose a new methodology using DM and apply it to red blood cells subject to different osmolality conditions: hypotonic, isotonic and hypertonic solutions. For each situation the shape of the upper and lower cell surface-membranes (lipid bilayer/cytoskeleton) are completely recovered, displaying the deformation of RBC surfaces due to adhesion to the glass substrate. The axial resolution of our technique allowed us to image surface-membranes separated by distances as small as 300 nm. Finally, we determine the volume, surface area, sphericity index and RBC refractive index for each osmotic condition.

  6. High resolution 3D imaging of synchrotron generated microbeams

    Energy Technology Data Exchange (ETDEWEB)

    Gagliardi, Frank M., E-mail: frank.gagliardi@wbrc.org.au [Alfred Health Radiation Oncology, The Alfred, Melbourne, Victoria 3004, Australia and School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia); Cornelius, Iwan [Imaging and Medical Beamline, Australian Synchrotron, Clayton, Victoria 3168, Australia and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales 2500 (Australia); Blencowe, Anton [Division of Health Sciences, School of Pharmacy and Medical Sciences, The University of South Australia, Adelaide, South Australia 5000, Australia and Division of Information Technology, Engineering and the Environment, Mawson Institute, University of South Australia, Mawson Lakes, South Australia 5095 (Australia); Franich, Rick D. [School of Applied Sciences and Health Innovations Research Institute, RMIT University, Melbourne, Victoria 3000 (Australia); Geso, Moshi [School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia)

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprising microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved, with the full width at half maximum of microbeams measured on images with resolutions as low as 0.09 μm/pixel. The profiles obtained demonstrated the change in the peak-to-valley dose ratio for interspersed MRT microbeam arrays, and subtle variations in sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.
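
    The FWHM measurement mentioned above reduces, for a single clean peak, to finding where the profile crosses half of its height above the baseline. The small sketch below shows that calculation on a 1D intensity profile; it assumes one well-separated peak, and the pixel size is an illustrative value.

        import numpy as np

        def fwhm_um(profile, pixel_um=0.09):
            """FWHM (in micrometres) of a single-peak 1D intensity profile."""
            profile = np.asarray(profile, dtype=float)
            half = profile.min() + 0.5 * (profile.max() - profile.min())
            above = np.where(profile >= half)[0]            # indices above half maximum
            return (above[-1] - above[0]) * pixel_um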

  7. Hybrid Method for 3D Segmentation of Magnetic Resonance Images

    Institute of Scientific and Technical Information of China (English)

    ZHANG Xiang; ZHANG Dazhi; TIAN Jinwen; LIU Jian

    2003-01-01

    Segmentation of complex images, especially magnetic resonance brain images, often cannot achieve satisfactory results using only a single image segmentation approach. An approach based on the integration of several techniques seems to be the best solution. In this paper a new hybrid method for 3-dimensional segmentation of the whole brain is introduced, based on fuzzy region growing, edge detection and mathematical morphology. The gray-level threshold, controlling the process of region growing, is determined by a fuzzy technique. The image gradient feature is obtained by the 3-dimensional Sobel operator considering a 3×3×3 data block with the voxel to be evaluated at the center, while the gradient magnitude threshold is defined by the gradient magnitude histogram of the brain magnetic resonance volume. By the combined methods of edge detection and region growing, the white matter volume of the human brain is segmented accurately. By post-processing using mathematical morphological techniques, the whole brain region is obtained. In order to investigate the validity of the hybrid method, two comparative experiments, the region growing method using only the gray-level feature and the thresholding method combining gray-level and gradient features, are carried out. Experimental results indicate that the proposed method provides much better results than the traditional methods using a single technique for the 3-dimensional segmentation of human brain magnetic resonance data sets.
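
    The 3D gradient feature described above can be reproduced with standard tools: apply a Sobel derivative along each axis of the volume and combine them into a gradient magnitude, from which a threshold can then be read off the histogram. The sketch below uses SciPy for the Sobel filtering; it is an illustration of the feature, not the authors' implementation.

        import numpy as np
        from scipy import ndimage

        def gradient_magnitude_3d(volume):
            """3D Sobel gradient magnitude of an MR volume (3x3x3 derivative per axis)."""
            vol = volume.astype(np.float32)
            gz = ndimage.sobel(vol, axis=0)
            gy = ndimage.sobel(vol, axis=1)
            gx = ndimage.sobel(vol, axis=2)
            return np.sqrt(gx ** 2 + gy ** 2 + gz ** 2)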

  8. Development of a Wireless and Near Real-Time 3D Ultrasound Strain Imaging System.

    Science.gov (United States)

    Chen, Zhaohong; Chen, Yongdong; Huang, Qinghua

    2016-04-01

    Ultrasound elastography is an important medical imaging tool for characterization of lesions. In this paper, we present a wireless and near real-time 3D ultrasound strain imaging system. It uses a 3D translating device to control a commercial linear ultrasound transducer to collect pre-compression and post-compression radio-frequency (RF) echo signal frames. The RF frames are wirelessly transferred to a high-performance server via a local area network (LAN). A dynamic programming strain estimation algorithm is implemented with the compute unified device architecture (CUDA) on the graphic processing unit (GPU) in the server to calculate the strain image after receiving a pre-compression RF frame and a post-compression RF frame at the same position. Each strain image is inserted into a strain volume which can be rendered in near real-time. We take full advantage of the translating device to precisely control the probe movement and compression. The GPU-based parallel computing techniques are designed to reduce the computation time. Phantom and in vivo experimental results demonstrate that our system can generate strain volumes with good quality and display an incrementally reconstructed volume image in near real-time.

  9. [Rapid 2D-3D medical image registration based on CUDA].

    Science.gov (United States)

    Li, Lingzhi; Zou, Beiji

    2014-08-01

    The medical image registration between preoperative three-dimensional (3D) scan data and intraoperative two-dimensional (2D) images is a key technology in surgical navigation. Most previous methods need to generate 2D digitally reconstructed radiographs (DRR) from the 3D scan volume and then compare them using conventional image similarity functions; this procedure involves a large amount of computation and makes it difficult to achieve real-time processing. In this paper, using mixed geometric-feature and image-intensity characteristics, we propose a new similarity measure for fast 2D-3D registration of preoperative CT and intraoperative X-ray images. The algorithm is easy to implement and computationally inexpensive, while the resulting registration accuracy can meet clinical requirements. In addition, the entire calculation is well suited to highly parallel numerical computation, and a CUDA hardware-accelerated implementation satisfies the requirement for real-time application in surgery.

  10. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    Energy Technology Data Exchange (ETDEWEB)

    Wong, S.T.C. [Univ. of California, San Francisco, CA (United States)

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques applied to tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electron and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and to visualize complex anatomic structures, to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment; for example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now making their way into surgical practice. In this presentation, we discuss only computer-display-based approaches to volumetric medical visualization. That is, we assume that the available display device is two-dimensional (2D) in nature and that all analysis of multidimensional image data is carried out via the 2D screen of the device. Technologies such as holography and virtual reality do provide a "true 3D screen"; to confine the scope, this presentation will not discuss such approaches.

  11. Respiratory influence on left atrial volume calculation with 3D-echocardiography

    DEFF Research Database (Denmark)

    Sørgaard, Mathias; Linde, Jesper J; Ismail, Hafsa;

    2016-01-01

    BACKGROUND: Left atrial volume (LAV) estimation with 3D echocardiography has been shown to be more accurate than 2D volume calculation. However, little is known about the possible effect of respiratory movements on the accuracy of the measurement. METHODS: 100 consecutive patients admitted...... with chest pain were examined with 3D echocardiography and LAV was quantified during inspiratory breath hold, expiratory breath hold and during free breathing. RESULTS: Of the 100 patients, only 65 had an echocardiographic window that allowed for 3D echocardiography in the entire respiratory cycle. Mean...

  12. 3D Seismic Imaging over a Potential Collapse Structure

    Science.gov (United States)

    Gritto, Roland; O'Connell, Daniel; Elobaid Elnaiem, Ali; Mohamed, Fathelrahman; Sadooni, Fadhil

    2016-04-01

    The Middle-East has seen a recent boom in construction including the planning and development of complete new sub-sections of metropolitan areas. Before planning and construction can commence, however, the development areas need to be investigated to determine their suitability for the planned project. Subsurface parameters such as the type of material (soil/rock), thickness of top soil or rock layers, depth and elastic parameters of basement, for example, comprise important information needed before a decision concerning the suitability of the site for construction can be made. A similar problem arises in environmental impact studies, when subsurface parameters are needed to assess the geological heterogeneity of the subsurface. Environmental impact studies are typically required for each construction project, particularly for the scale of the aforementioned building boom in the Middle East. The current study was conducted in Qatar at the location of a future highway interchange to evaluate a suite of 3D seismic techniques in their effectiveness to interrogate the subsurface for the presence of karst-like collapse structures. The survey comprised an area of approximately 10,000 m2 and consisted of 550 source- and 192 receiver locations. The seismic source was an accelerated weight drop while the geophones consisted of 3-component 10 Hz velocity sensors. At present, we analyzed over 100,000 P-wave phase arrivals and performed high-resolution 3-D tomographic imaging of the shallow subsurface. Furthermore, dispersion analysis of recorded surface waves will be performed to obtain S-wave velocity profiles of the subsurface. Both results, in conjunction with density estimates, will be utilized to determine the elastic moduli of the subsurface rock layers.

  13. Enhanced 3D fluorescence live cell imaging on nanoplasmonic substrate

    Energy Technology Data Exchange (ETDEWEB)

    Gartia, Manas Ranjan [Department of Nuclear, Plasma and Radiological Engineering, University of Illinois, Urbana, IL 61801 (United States); Hsiao, Austin; Logan Liu, G [Department of Bioengineering, University of Illinois, Urbana, IL 61801 (United States); Sivaguru, Mayandi [Institute for Genomic Biology, University of Illinois, Urbana, IL 61801 (United States); Chen Yi, E-mail: loganliu@illinois.edu [Department of Electrical and Computer Engineering, University of Illinois, Urbana, IL 61801 (United States)

    2011-09-07

    We have created a randomly distributed nanocone substrate on silicon coated with silver for surface-plasmon-enhanced fluorescence detection and 3D cell imaging. Optical characterization of the nanocone substrate showed it can support several plasmonic modes (in the 300-800 nm wavelength range) that can be coupled to a fluorophore on the surface of the substrate, which gives rise to the enhanced fluorescence. Spectral analysis suggests that a nanocone substrate can create more excitons and a shorter lifetime in the model fluorophore Rhodamine 6G (R6G) due to plasmon resonance energy transfer from the nanocone substrate to the nearby fluorophore. We observed three-dimensional fluorescence enhancement on our substrate, as shown by confocal fluorescence imaging of Chinese hamster ovary (CHO) cells grown on the substrate. The fluorescence intensity from the fluorophores bound on the cell membrane was amplified more than 100-fold as compared to that on a glass substrate. We believe that strong scattering within the nanostructured area coupled with random scattering inside the cell resulted in the observed three-dimensional enhancement in fluorescence with higher photostability on the substrate surface.

  14. Autostereoscopic 3D visualization and image processing system for neurosurgery.

    Science.gov (United States)

    Meyer, Tobias; Kuß, Julia; Uhlemann, Falk; Wagner, Stefan; Kirsch, Matthias; Sobottka, Stephan B; Steinmeier, Ralf; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    A demonstrator system for planning neurosurgical procedures was developed based on commercial hardware and software. The system combines an easy-to-use environment for surgical planning with high-end visualization and the opportunity to analyze data sets for research purposes. The demonstrator system is based on the software AMIRA. Specific algorithms for segmentation, elastic registration, and visualization have been implemented and adapted to the clinical workflow. Modules from AMIRA and the image processing library Insight Segmentation and Registration Toolkit (ITK) can be combined to solve various image processing tasks. Customized modules tailored to specific clinical problems can easily be implemented using the AMIRA application programming interface and a self-developed framework for ITK filters. Visualization is done via autostereoscopic displays, which provide a 3D impression without viewing aids. A Spaceball device allows a comfortable, intuitive way of navigation in the data sets. Via an interface to a neurosurgical navigation system, the demonstrator system can be used intraoperatively. The precision, applicability, and benefit of the demonstrator system for planning of neurosurgical interventions and for neurosurgical research were successfully evaluated by neurosurgeons using phantom and patient data sets.

  15. Multiframe image point matching and 3-d surface reconstruction.

    Science.gov (United States)

    Tsai, R Y

    1983-02-01

    This paper presents two new methods, the Joint Moment Method (JMM) and the Window Variance Method (WVM), for image matching and 3-D object surface reconstruction using multiple perspective views. The viewing positions and orientations for these perspective views are known a priori, as is usually the case for such applications as robotics and industrial vision as well as close range photogrammetry. Like the conventional two-frame correlation method, the JMM and WVM require finding the extrema of 1-D curves, which are proved to theoretically approach a delta function exponentially as the number of frames increases for the JMM and are much sharper than the two-frame correlation function for both the JMM and the WVM, even when the image point to be matched cannot be easily distinguished from some of the other points. The theoretical findings have been supported by simulations. It is also proved that JMM and WVM are not sensitive to certain radiometric effects. If the same window size is used, the computational complexity for the proposed methods is about n - 1 times that for the two-frame method where n is the number of frames. Simulation results show that the JMM and WVM require smaller windows than the two-frame correlation method with better accuracy, and therefore may even be more computationally feasible than the latter since the computational complexity increases quadratically as a function of the window size.

  16. Automatic Dent-landmark detection in 3-D CBCT dental volumes.

    Science.gov (United States)

    Cheng, Erkang; Chen, Jinwu; Yang, Jie; Deng, Huiyang; Wu, Yi; Megalooikonomou, Vasileios; Gable, Bryce; Ling, Haibin

    2011-01-01

    Orthodontic craniometric landmarks provide critical information in oral and maxillofacial imaging diagnosis and treatment planning. The Dent-landmark, defined as the odontoid process of the epistropheus, is one of the key landmarks used to construct the midsagittal reference plane. In this paper, we propose a learning-based approach to automatically detect the Dent-landmark in 3D cone-beam computed tomography (CBCT) dental data. Specifically, a detector is learned using a random forest with sampled context features. Furthermore, we use a spatial prior to build a constrained search space rather than searching the full three-dimensional space. The proposed method has been evaluated on a dataset containing 73 CBCT dental volumes and yields promising results.

  17. TU-CD-BRA-01: A Novel 3D Registration Method for Multiparametric Radiological Images

    Energy Technology Data Exchange (ETDEWEB)

    Akhbardeh, A [The Russell H. Morgan Department of Radiology and Radiological Sciences, The Johns Hopkins University School of Medicine, Baltimore, MD (United States); Parekth, VS [Department of Computer Science, The Johns Hopkins University, Baltimore, MD (United States); Jacobs, MA [The Russell H. Morgan Department of Radiology and Radiological Sciences and Sidney Kimmel Comprehensive Cancer Center, The Johns Hopkins University School of Medicine, Sparks, MD (United States)

    2015-06-15

    Purpose: Multiparametric and multimodality radiological imaging methods, such as magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET), provide multiple types of tissue contrast and anatomical information for clinical diagnosis. However, these radiological modalities are acquired using very different technical parameters, e.g., field of view (FOV), matrix size, and scan planes, which can lead to challenges in registering the different data sets. Therefore, we developed a hybrid registration method based on 3D wavelet transformation and 3D interpolation that performs 3D resampling and rotation of the target radiological images without loss of information. Methods: T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), dynamic contrast-enhanced (DCE) MRI and PET/CT were used in the registration algorithm, drawn from breast and prostate data at 3T MRI and from multimodality (PET/CT) cases. The hybrid registration scheme consists of several steps to reslice and match each modality using a combination of 3D wavelets, interpolations, and affine registration. First, orthogonal reslicing is performed to equalize FOV, matrix sizes and the number of slices using wavelet transformation. Second, angular resampling of the target data is performed to match the reference data. Finally, using the optimized angles from resampling, 3D registration using a similarity transformation (scaling and translation) between the reference and the resliced target volume is performed. After registration, the mean square error (MSE) and Dice similarity (DS) between the reference and registered target volumes were calculated. Results: The 3D registration method registered synthetic and clinical data with significant improvement (p<0.05) of the overlap between anatomical structures. After transforming and deforming the synthetic data, the MSE and Dice similarity were 0.12 and 0.99. The average improvement of the MSE in breast was 62% (0.27 to 0.10) and prostate was
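
    The evaluation metrics named in this record (mean square error and Dice similarity) can be sketched as follows; the array names and the assumption that the masks are already binarised are illustrative rather than taken from the paper:

```python
import numpy as np

def mean_square_error(reference, registered):
    """MSE between a reference volume and a registered target volume."""
    diff = reference.astype(np.float64) - registered.astype(np.float64)
    return float(np.mean(diff ** 2))

def dice_similarity(reference_mask, registered_mask):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    a = reference_mask.astype(bool)
    b = registered_mask.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```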

  18. Automated 3D ultrasound image segmentation for assistant diagnosis of breast cancer

    Science.gov (United States)

    Wang, Yuxin; Gu, Peng; Lee, Won-Mean; Roubidoux, Marilyn A.; Du, Sidan; Yuan, Jie; Wang, Xueding; Carson, Paul L.

    2016-04-01

    Segmentation of an ultrasound image into functional tissues is of great importance to clinical diagnosis of breast cancer. However, many studies are found to segment only the mass of interest and not all major tissues. Differences and inconsistencies in ultrasound interpretation call for an automated segmentation method to make results operator-independent. Furthermore, manual segmentation of entire three-dimensional (3D) ultrasound volumes is time-consuming, resource-intensive, and clinically impractical. Here, we propose an automated algorithm to segment 3D ultrasound volumes into three major tissue types: cyst/mass, fatty tissue, and fibro-glandular tissue. To test its efficacy and consistency, the proposed automated method was employed on a database of 21 cases of whole breast ultrasound. Experimental results show that our proposed method not only distinguishes fat and non-fat tissues correctly, but performs well in classifying cyst/mass. Comparison of density assessment between the automated method and manual segmentation demonstrates good consistency with an accuracy of 85.7%. Quantitative comparison of corresponding tissue volumes, which uses overlap ratio, gives an average similarity of 74.54%, consistent with values seen in MRI brain segmentations. Thus, our proposed method exhibits great potential as an automated approach to segment 3D whole breast ultrasound volumes into functionally distinct tissues that may help to correct ultrasound speed of sound aberrations and assist in density based prognosis of breast cancer.

  19. 3D volumetry comparison using 3T magnetic resonance imaging between normal and adenoma-containing pituitary glands

    Directory of Open Access Journals (Sweden)

    Ernesto Roldan-Valadez

    2011-01-01

    Full Text Available Background: Computer-assisted three-dimensional (3D) data allow for a more accurate evaluation of volumes compared with traditional measurements. Aims: An in vitro method comparison between geometric volume and 3D volumetry, to obtain reference data for pituitary volumes in normal pituitary glands (PGs) and PGs containing adenomas. Design: Prospective, transverse, analytical study. Materials and Methods: Forty-eight subjects underwent brain magnetic resonance imaging (MRI) with 3D sequencing for computer-aided volumetry. PG phantom volumes measured by both methods were compared. Using the better volumetric method, volumes of normal PGs and PGs with adenoma were compared. Statistical analysis used the Bland-Altman method, t-statistics, effect size and linear regression analysis. Results: Method comparison between 3D volumetry and geometric volume revealed a lower bias and better precision for 3D volumetry. A total of 27 patients exhibited normal PGs (mean age, 42.07 ± 16.17 years), although length, height, width, geometric volume and 3D volumetry were greater in women than in men. A total of 21 patients exhibited adenomas (mean age 39.62 ± 10.79 years), and length, height, width, geometric volume and 3D volumetry were greater in men than in women, with significant volumetric differences. Age did not influence pituitary volumes on linear regression analysis. Conclusions: Results from the present study showed that 3D volumetry was more accurate than the geometric method. In addition, the upper normal limits of PGs overlapped with the lower volume limits of early-stage microadenomas.
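
    A small sketch of the Bland-Altman comparison mentioned in the statistical analysis, computing the bias and 95% limits of agreement between paired geometric-volume and 3D-volumetry measurements; the 1.96-standard-deviation limits are the conventional choice and the input arrays are hypothetical:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two paired measurement methods."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diffs = a - b
    bias = diffs.mean()
    sd = diffs.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```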

  20. Intersection-based registration of slice stacks to form 3D images of the human fetal brain

    DEFF Research Database (Denmark)

    Kim, Kio; Hansen, Mads Fogtmann; Habas, Piotr;

    2008-01-01

    Clinical fetal MR imaging of the brain commonly makes use of fast 2D acquisitions of multiple sets of approximately orthogonal 2D slices. We and others have previously proposed an iterative slice-to-volume registration process to recover a geometrically consistent 3D image. However, these approac...

  1. Fast imaging of laboratory core floods using 3D compressed sensing RARE MRI

    Science.gov (United States)

    Ramskill, N. P.; Bush, I.; Sederman, A. J.; Mantle, M. D.; Benning, M.; Anger, B. C.; Appel, M.; Gladden, L. F.

    2016-09-01

    Three-dimensional (3D) imaging of the fluid distributions within the rock is essential to enable the unambiguous interpretation of core flooding data. Magnetic resonance imaging (MRI) has been widely used to image fluid saturation in rock cores; however, conventional acquisition strategies are typically too slow to capture the dynamic nature of the displacement processes that are of interest. Using Compressed Sensing (CS), it is possible to reconstruct a near-perfect image from significantly fewer measurements than was previously thought necessary, and this can result in a significant reduction in the image acquisition times. In the present study, a method using the Rapid Acquisition with Relaxation Enhancement (RARE) pulse sequence with CS to provide 3D images of the fluid saturation in rock core samples during laboratory core floods is demonstrated. An objective method using image quality metrics for the determination of the most suitable regularisation functional to be used in the CS reconstructions is reported. It is shown that for the present application, Total Variation outperforms the Haar and Daubechies3 wavelet families in terms of the agreement of their respective CS reconstructions with a fully-sampled reference image. Using the CS-RARE approach, 3D images of the fluid saturation in the rock core have been acquired in 16 min. The CS-RARE technique has been applied to image the residual water saturation in the rock during a water-water displacement core flood. With a flow rate corresponding to an interstitial velocity of vi = 1.89 ± 0.03 ft day-1, 0.1 pore volumes were injected over the course of each image acquisition, a four-fold reduction when compared to a fully-sampled RARE acquisition. Finally, the 3D CS-RARE technique has been used to image the drainage of dodecane into the water-saturated rock in which the dynamics of the coalescence of discrete clusters of the non-wetting phase are clearly observed. The enhancement in the temporal resolution
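
    The record does not spell out the reconstruction problem; a standard compressed-sensing MRI formulation consistent with the regularisers it compares would be the following (an assumed form, not quoted from the paper):

```latex
% Assumed CS-MRI objective: F_u is the undersampled k-space sampling operator,
% y the acquired data, R(x) the regularisation functional (Total Variation,
% Haar or Daubechies-3 wavelet l1-norm), and lambda the regularisation weight.
\hat{x} = \arg\min_{x} \; \tfrac{1}{2}\,\lVert F_{u}x - y \rVert_{2}^{2} + \lambda\, R(x)
```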

  2. Fast imaging of laboratory core floods using 3D compressed sensing RARE MRI.

    Science.gov (United States)

    Ramskill, N P; Bush, I; Sederman, A J; Mantle, M D; Benning, M; Anger, B C; Appel, M; Gladden, L F

    2016-09-01

    Three-dimensional (3D) imaging of the fluid distributions within the rock is essential to enable the unambiguous interpretation of core flooding data. Magnetic resonance imaging (MRI) has been widely used to image fluid saturation in rock cores; however, conventional acquisition strategies are typically too slow to capture the dynamic nature of the displacement processes that are of interest. Using Compressed Sensing (CS), it is possible to reconstruct a near-perfect image from significantly fewer measurements than was previously thought necessary, and this can result in a significant reduction in the image acquisition times. In the present study, a method using the Rapid Acquisition with Relaxation Enhancement (RARE) pulse sequence with CS to provide 3D images of the fluid saturation in rock core samples during laboratory core floods is demonstrated. An objective method using image quality metrics for the determination of the most suitable regularisation functional to be used in the CS reconstructions is reported. It is shown that for the present application, Total Variation outperforms the Haar and Daubechies3 wavelet families in terms of the agreement of their respective CS reconstructions with a fully-sampled reference image. Using the CS-RARE approach, 3D images of the fluid saturation in the rock core have been acquired in 16 min. The CS-RARE technique has been applied to image the residual water saturation in the rock during a water-water displacement core flood. With a flow rate corresponding to an interstitial velocity of vi = 1.89 ± 0.03 ft day⁻¹, 0.1 pore volumes were injected over the course of each image acquisition, a four-fold reduction when compared to a fully-sampled RARE acquisition. Finally, the 3D CS-RARE technique has been used to image the drainage of dodecane into the water-saturated rock in which the dynamics of the coalescence of discrete clusters of the non-wetting phase are clearly observed. The enhancement in the temporal resolution that has

  3. 3D mapping from high resolution satellite images

    Science.gov (United States)

    Goulas, D.; Georgopoulos, A.; Sarakenos, A.; Paraschou, Ch.

    2013-08-01

    In recent years 3D information has become more easily available and users' needs are constantly increasing, so 3D maps are in greater demand. 3D models of the terrain in CAD or other environments have long been common practice; however, one is bound to the computer screen. This is why contemporary digital methods have been developed to produce portable and, hence, handier 3D maps of various forms. This paper deals with the implementation of the procedures necessary to produce holographic 3D maps and three-dimensionally printed maps. The main objective is the production of three-dimensional maps from high resolution aerial and/or satellite imagery with the use of holography and 3D printing methods. The island of Antiparos was chosen as the study area, as suitable data were readily available. These data were two stereo pairs of Geoeye-1 imagery and a high resolution DTM of the island. First, the theoretical bases of holography and 3D printing are described, the two methods are analyzed and their implementation is explained. In practice, an x-axis-parallax holographic map of Antiparos is created, and a full-parallax (x-axis and y-axis) holographic map is created and printed using the holographic method. Moreover, a three-dimensionally printed map of the study area has been created using the 3D printing (3dp) method. The results are evaluated for their usefulness and efficiency.

  4. Simultaneous visualization of anatomical and functional 3D data by combining volume rendering and flow visualization

    Science.gov (United States)

    Schafhitzel, Tobias; Rößler, Friedemann; Weiskopf, Daniel; Ertl, Thomas

    2007-03-01

    Modern medical imaging provides a variety of techniques for the acquisition of multi-modality data. A typical example is the combination of functional and anatomical data from functional Magnetic Resonance Imaging (fMRI) and anatomical MRI measurements. Usually, the data resulting from each of these two methods is transformed to 3D scalar-field representations to facilitate visualization. A common method for the visualization of anatomical/functional multi-modalities combines semi-transparent isosurfaces (SSD, surface shaded display) with other scalar visualization techniques like direct volume rendering (DVR). However, partial occlusion and visual clutter that typically result from the overlay of these traditional 3D scalar-field visualization techniques make it difficult for the user to perceive and recognize visual structures. This paper addresses these perceptual issues by a new visualization approach for anatomical/functional multi-modalities. The idea is to reduce the occlusion effects of an isosurface by replacing its surface representation by a sparser line representation. Those lines are chosen along the principal curvature directions of the isosurface and rendered by a flow visualization method called line integral convolution (LIC). Applying the LIC algorithm results in fine line structures that improve the perception of the isosurface's shape in a way that it is possible to render it with small opacity values. An interactive visualization is achieved by executing the algorithm completely on the graphics processing unit (GPU) of modern graphics hardware. Furthermore, several illumination techniques and image compositing strategies are discussed for emphasizing the isosurface structure. We demonstrate our method for the example of fMRI/MRI measurements, visualizing the spatial relationship between brain activation and brain tissue.

  5. Comparison study of clinical target volumes of whole breast after breast-conserving surgery based on three-dimensional CT and four-dimensional CT images

    Institute of Scientific and Technical Information of China (English)

    王素贞; 李建彬; 张英杰; 王玮; 李奉祥; 徐敏; 邵倩; 范廷勇; 刘同海

    2012-01-01

    Objective: To study the differences in the clinical target volume (CTV) of the whole breast after breast-conserving surgery delineated on three-dimensional CT (3D-CT) and four-dimensional CT (4D-CT) images. Methods: Thirteen patients who had undergone breast-conserving surgery received 3D-CT simulation scans followed by 4D-CT simulation scans of the thorax during free breathing. During 4D-CT scanning, the real-time position management (RPM) system simultaneously recorded the respiratory signals. The CT images with respiratory signal data were reconstructed and sorted into 10 phase groups over a respiratory cycle. The data sets for the 3D-CT and 4D-CT scans were then transferred to the Eclipse treatment planning software. The 4D-CT image of the end-inhalation phase (T0) served as the reference, and the other nine phases (T10, T20, T30, ..., T90), the maximum intensity projection (MIP) image and the 3D-CT image were registered to it. The CTVs were manually delineated on the registered 3D-CT, T0, mid-exhalation (T20), end-exhalation (T50) and MIP images by a radiation oncologist at two different times. The CTV3D, CTV0, CTV10, ..., CTVMIP were then delineated and defined on the 3D-CT, T0, T10, ..., MIP images by the same radiation oncologist. All the CTVs (CTV0, CTV10, ..., CTV90) delineated on the 10 phases of the 4D-CT images were fused into an internal clinical target volume (ICTV). The T0, T20, T50 and MIP images were selected from the 4D-CT CTVs for comparison with the 3D-CT image, and the differences between targets delineated on the same images by the same radiation oncologist at different times were compared. The volumes of the CTVs, the matching index (MI) and the degree of inclusion (DI) were compared respectively. Results: There was no difference in the CTVs delineated by the same oncologist, whether based on 3D-CT or 4D-CT (P>0.050). The CTV volumes of the ten phases in 4D-CT were not affected by respiratory movement (P>0.05
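
    The record compares targets with a matching index (MI) and a degree of inclusion (DI) without defining them; the sketch below assumes the common volume-overlap definitions (intersection over union, and intersection over the first volume), which may differ from the paper's exact formulas:

```python
import numpy as np

def matching_index(ctv_a, ctv_b):
    """Assumed MI: overlap volume divided by the union of the two CTVs."""
    a, b = ctv_a.astype(bool), ctv_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

def degree_of_inclusion(ctv_a, ctv_b):
    """Assumed DI: fraction of CTV A contained within CTV B."""
    a, b = ctv_a.astype(bool), ctv_b.astype(bool)
    return np.logical_and(a, b).sum() / a.sum() if a.sum() else 0.0
```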

  6. Volume Attenuation and High Frequency Loss as Auditory Depth Cues in Stereoscopic 3D Cinema

    Science.gov (United States)

    Manolas, Christos; Pauletto, Sandra

    2014-09-01

    Assisted by the technological advances of the past decades, stereoscopic 3D (S3D) cinema is currently in the process of being established as a mainstream form of entertainment. The main focus of this collaborative effort is placed on the creation of immersive S3D visuals. However, with few exceptions, little attention has been given so far to the potential effect of the soundtrack on such environments. The potential of sound both as a means to enhance the impact of the S3D visual information and to expand the S3D cinematic world beyond the boundaries of the visuals is large. This article reports on our research into the possibilities of using auditory depth cues within the soundtrack as a means of affecting the perception of depth within cinematic S3D scenes. We study two main distance-related auditory cues: high-end frequency loss and overall volume attenuation. A series of experiments explored the effectiveness of these auditory cues. Results, although not conclusive, indicate that the studied auditory cues can influence the audience judgement of depth in cinematic 3D scenes, sometimes in unexpected ways. We conclude that 3D filmmaking can benefit from further studies on the effectiveness of specific sound design techniques to enhance S3D cinema.
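
    The two cues under study can be illustrated with a rough sketch (not the authors' stimulus-generation code); the inverse-distance gain and the distance-dependent low-pass cutoff below are assumptions chosen only to demonstrate the idea:

```python
import numpy as np
from scipy.signal import butter, lfilter

def apply_depth_cues(signal, sample_rate, distance_m, reference_m=1.0):
    """Apply overall volume attenuation and high-frequency loss for a simulated distance."""
    d = max(distance_m, reference_m)
    gain = reference_m / d                        # volume attenuation cue (assumed 1/distance law)
    nyquist = sample_rate / 2.0
    cutoff_hz = max(min(16000.0 / d, 0.95 * nyquist), 500.0)  # high-frequency loss cue (assumed mapping)
    b, a = butter(2, cutoff_hz / nyquist, btype="low")
    return gain * lfilter(b, a, signal)
```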

  7. 3D Imaging of Dead Sea Area Using Weighted Multipath Summation: A Case Study

    Directory of Open Access Journals (Sweden)

    Shemer Keydar

    2013-01-01

    Full Text Available The formation of sinkholes along the Dead Sea is caused by the rapid decline of the Dead Sea level, possibly as a result of extensive human activity. According to one of the geological models, the sinkholes in several sites are clustered along a narrow coastal strip developing along lineaments representing faults in the NNW direction. In order to understand the relationship between a developing sinkhole and its tectonic environment, a high-resolution (HR) three-dimensional (3D) seismic reflection survey was carried out at the western shoreline of the Dead Sea. A recently developed 3D imaging approach was applied to this 3D dataset. Imaging of the subsurface is performed by a spatial summation of seismic waves along time surfaces using the recently proposed multipath summation with proper weights. The multipath summation is performed by stacking the target waves along all possible time surfaces having a common apex at the given point. This approach does not require any explicit information on parameters, since the multipath summation is performed for all possible parameter values within a wide specified range. The results from the processed 3D time volume show subhorizontal coherent reflectors at an approximate depth of 50–80 m which incline closer to the exposed sinkhole, suggesting a possible linkage between the revealed fault and the sinkholes.

  8. Tangible 3D printouts of scientific data volumes with FOSS - an emerging field for research

    Science.gov (United States)

    Löwe, Peter; Klump, Jens; Wickert, Jens; Ludwig, Marcel; Frigeri, Alessandro

    2013-04-01

    Humans are very good at using both hands and eyes for tactile pattern recognition: the German verb for understanding, "begreifen", literally means "getting a (tactile) grip on a matter". This proven and time-honoured concept has been in use since prehistoric times. While the amount of scientific data continues to grow, researchers still need all possible support to help them visualize the data content before their inner eye. Immersive data visualisations are helpful, yet fail to provide the tactile feedback offered by tangible objects. The need for tangible representations of geospatial information to solve real-world problems eventually led to the advent of 3D globes by M. Behaim in the 15th century and has continued since. The production of a tangible representation of a scientific data set with some fidelity is just the final step of an arc leading from the physical world into scientific reasoning and back: the process starts with a physical observation, or a model, captured by a sensor producing a data stream which is turned into a geo-referenced data set. This data set is turned into a volume representation, which is converted into command sequences for the printing device, leading to the creation of a 3D printout. Finally, the new specimen has to be linked to its metadata to ensure its scientific meaning and context. On the technical side, the production of a tangible data print has been realized as a pilot workflow based on the Free and Open Source geoinformatics tools GRASS GIS and Paraview to convert scientific data volumes into stereolithography (STL) datasets for printing on a RepRap printer. The initial motivation for using tangible representations of complex data was the task of quality assessment of tsunami simulation data sets in the FP7 TRIDEC project (www.tridec-online.eu). For this, 3D prints of space-time cubes of tsunami wave spreading patterns were produced. This was followed by printouts of volume data derived from radar sounders (MARSIS, SHARAD) imaging
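
    The volume-to-printable-model step of such a workflow can be sketched with marching cubes and an STL writer; this uses scikit-image and the numpy-stl package as stand-ins for the GRASS GIS / Paraview chain described above, and the iso-level and output file name are assumptions:

```python
import numpy as np
from skimage import measure
from stl import mesh

def volume_to_stl(volume, iso_level, out_path="data_volume.stl"):
    """Extract an isosurface from a 3D data volume and save it as a printable STL mesh."""
    verts, faces, _normals, _values = measure.marching_cubes(volume, level=iso_level)
    solid = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    for i, face in enumerate(faces):
        solid.vectors[i] = verts[face]  # three vertices per triangle
    solid.save(out_path)
```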

  9. High performance 3D adaptive filtering for DSP based portable medical imaging systems

    Science.gov (United States)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable medical imaging devices have proven valuable for emergency medical services both in the field and in hospital environments, and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but it is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSPs) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of 512x256x128 voxels sampled at a rate of 10 MVoxels/s with a 3D ultrasound probe. Relative performance and power are compared between a reference PC (quad-core CPU) and a TMS320C6678 DSP from Texas Instruments.
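
    The record does not specify the adaptive filter itself, so the sketch below uses a generic local-statistics (Lee-type) adaptive smoother as a stand-in: flat regions are smoothed strongly while high-variance (edge) regions are preserved. The window size and the crude noise estimate are assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def adaptive_filter_3d(volume, size=3, noise_variance=None):
    """Local-statistics adaptive smoothing of a 3D volume (Lee-type filter)."""
    vol = volume.astype(np.float32)
    local_mean = uniform_filter(vol, size)
    local_sq_mean = uniform_filter(vol * vol, size)
    local_var = np.maximum(local_sq_mean - local_mean ** 2, 0.0)
    if noise_variance is None:
        noise_variance = float(np.mean(local_var))    # crude global noise estimate (assumption)
    gain = local_var / (local_var + noise_variance)   # ~0 in flat regions, ~1 at edges
    return local_mean + gain * (vol - local_mean)
```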

  10. Terahertz Quantum Cascade Laser Based 3D Imaging Project

    Data.gov (United States)

    National Aeronautics and Space Administration — LongWave Photonics proposes a terahertz quantum-cascade laser based swept-source optical coherence tomography (THz SS-OCT) system for single-sided, 3D,...

  11. Holographic Image Plane Projection Integral 3D Display

    Data.gov (United States)

    National Aeronautics and Space Administration — In response to NASA's need for a 3D virtual reality environment providing scientific data visualization without special user devices, Physical Optics Corporation...

  12. Assessing 3D tunnel position in ACL reconstruction using a novel single image 3D-2D registration

    Science.gov (United States)

    Kang, X.; Yau, W. P.; Otake, Y.; Cheung, P. Y. S.; Hu, Y.; Taylor, R. H.

    2012-02-01

    The routinely used procedure for evaluating tunnel positions following anterior cruciate ligament (ACL) reconstructions based on standard X-ray images is known to pose difficulties in terms of obtaining accurate measures, especially in providing three-dimensional tunnel positions. This is largely due to the variability in individual knee joint pose relative to X-ray plates. Accurate results were reported using postoperative CT. However, its extensive usage in clinical routine is hampered by its major requirement of having CT scans of individual patients, which is not available for most ACL reconstructions. These difficulties are addressed through the proposed method, which aligns a knee model to X-ray images using our novel single-image 3D-2D registration method and then estimates the 3D tunnel position. In the proposed method, the alignment is achieved by using a novel contour-based 3D-2D registration method wherein image contours are treated as a set of oriented points. However, instead of using some form of orientation weighting function and multiplying it with a distance function, we formulate the 3D-2D registration as a probability density estimation using a mixture of von Mises-Fisher-Gaussian (vMFG) distributions and solve it through an expectation maximization (EM) algorithm. Compared with the ground-truth established from postoperative CT, our registration method in an experiment using a plastic phantom showed accurate results with errors of (-0.43° ± 1.19°, 0.45° ± 2.17°, 0.23° ± 1.05°) and (0.03 ± 0.55, -0.03 ± 0.54, -2.73 ± 1.64) mm. As for the entry point of the ACL tunnel, one of the key measurements, it was obtained with high accuracy of 0.53 ± 0.30 mm distance errors.

  13. PSF Rotation with Changing Defocus and Applications to 3D Imaging for Space Situational Awareness

    Science.gov (United States)

    Prasad, S.; Kumar, R.

    2013-09-01

    For a clear, well-corrected imaging aperture in space, the point-spread function (PSF) in its Gaussian image plane has the conventional, diffraction-limited, tightly focused Airy form. Away from that plane, however, the PSF broadens rapidly, resulting in a loss of sensitivity and transverse resolution that makes such a traditional best-optics approach untenable for rapid 3D image acquisition: one must scan in focus to maintain high sensitivity and resolution as one acquires image data, slice by slice, from a 3D volume with reduced efficiency. In this paper we describe a computational-imaging approach to overcome this limitation, one that uses pupil-phase engineering to fashion a PSF that, although not as tight as the Airy spot, maintains its shape and size while rotating uniformly with changing defocus over many waves of defocus phase at the pupil edge. As one of us has shown recently [1], the subdivision of a circular pupil aperture into M Fresnel zones, with the mth zone having an outer radius proportional to √m and impressing a spiral phase profile of form mφ on the light wave, where φ is the azimuthal angle coordinate measured from a fixed x axis (the dislocation line), yields a PSF that rotates with defocus while keeping its shape and size. Physically speaking, a nonzero defocus of a point source means a quadratic optical phase in the pupil that, because of the square-root dependence of the zone radius on the zone number, increases on average by the same amount from one zone to the next. This uniformly incrementing phase yields, in effect, a rotation of the dislocation line, and thus a rotated PSF. Since the zone-to-zone phase increment depends linearly on defocus to first order, the PSF rotates uniformly with changing defocus. For an M-zone pupil, a complete rotation of the PSF occurs when the defocus-induced phase at the pupil edge changes by M waves. Our recent simulations of reconstructions from image data for 3D image scenes comprised of point sources at
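
    The pupil design described above can be sketched directly: M zones whose outer radii scale as the square root of the zone index, each impressing a spiral phase of winding number m. The grid size, aperture radius and number of zones below are illustrative assumptions:

```python
import numpy as np

def rotating_psf_pupil_phase(grid_size=512, aperture_radius=1.0, num_zones=7):
    """Pupil phase mask with M Fresnel-like zones carrying spiral phases m*phi."""
    coords = np.linspace(-aperture_radius, aperture_radius, grid_size)
    x, y = np.meshgrid(coords, coords)
    r, phi = np.hypot(x, y), np.arctan2(y, x)
    phase = np.zeros_like(r)
    # Zone boundaries: the m-th zone's outer radius scales as sqrt(m/M).
    zone_edges = aperture_radius * np.sqrt(np.arange(num_zones + 1) / num_zones)
    for m in range(1, num_zones + 1):
        in_zone = (r >= zone_edges[m - 1]) & (r < zone_edges[m])
        phase[in_zone] = m * phi[in_zone]   # spiral phase of winding number m within zone m
    return phase                            # zero outside the aperture
```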

  14. Multi-shot turbo spin-echo for 3D vascular space occupancy imaging.

    Science.gov (United States)

    Cretti, Fabiola R; Summers, Paul E; Porro, Carlo A

    2013-07-01

    Vascular space occupancy (VASO) is a magnetic resonance imaging technique sensitive to cerebral blood volume, and is a potential alternative to the blood oxygenation level dependent (BOLD) sensitive technique as a basis for functional mapping of the neurovascular response to a task. Many implementations of VASO have made use of echo-planar imaging strategies that allow rapid acquisition, but risk introducing potentially confounding BOLD effects. Recently, multi-slice and 3D VASO techniques have been implemented to increase the imaging volume beyond the single slice of early reports. These techniques usually rely, however, on advanced scanner software or hardware not yet available in many centers. In the present study, we have implemented a short-echo time, multi-shot 3D Turbo Spin-Echo (TSE) VASO sequence that provided 8-slice coverage on a routine clinical scanner. The proposed VASO sequence was tested in assessing the response of the human motor cortex during a block design finger tapping task in 10 healthy subjects. Significant VASO responses, inversely correlated with the task, were found at both individual and group level. The location and extent of VASO responses were in close correspondence to those observed using a conventional BOLD acquisition in the same subjects. Although the spatial coverage and temporal resolution achieved were limited, robust and consistent VASO responses were observed. The use of a susceptibility insensitive volumetric TSE VASO sequence may have advantages in locations where conventional BOLD and echo-planar based VASO imaging is compromised.

  15. 3-D Imaging Systems for Agricultural Applications—A Review

    Directory of Open Access Journals (Sweden)

    Manuel Vázquez-Arellano

    2016-04-01

    Full Text Available Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  16. 3-D Imaging Systems for Agricultural Applications—A Review

    Science.gov (United States)

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  17. 3-D Imaging Systems for Agricultural Applications-A Review.

    Science.gov (United States)

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-04-29

    Efficiency increase of resources through automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  18. Variation in the measurement of cranial volume and surface area using 3D laser scanning technology.

    Science.gov (United States)

    Sholts, Sabrina B; Wärmländer, Sebastian K T S; Flores, Louise M; Miller, Kevin W P; Walker, Phillip L

    2010-07-01

    Three-dimensional (3D) laser scanner models of human crania can be used for forensic facial reconstruction, and for obtaining craniometric data useful for estimating age, sex, and population affinity of unidentified human remains. However, the use of computer-generated measurements in a casework setting requires the measurement precision to be known. Here, we assess the repeatability and precision of cranial volume and surface area measurements using 3D laser scanner models created by different operators using different protocols for collecting and processing data. We report intraobserver measurement errors of 0.2% and interobserver errors of 2% of the total area and volume values, suggesting that observer-related errors do not pose major obstacles for sharing, combining, or comparing such measurements. Nevertheless, as no standardized procedure exists for area or volume measurements from 3D models, it is imperative to report the scanning and postscanning protocols employed when such measurements are conducted in a forensic setting.

  19. Dense 3D Point Cloud Generation from UAV Images from Image Matching and Global Optimization

    Science.gov (United States)

    Rhee, S.; Kim, T.

    2016-06-01

    3D spatial information from unmanned aerial vehicle (UAV) images is usually provided in the form of 3D point clouds. For various UAV applications, it is important to generate dense 3D point clouds automatically over the entire extent of the UAV images. In this paper, we apply image matching to generate local point clouds over a pair or group of images, and global optimization to combine the local point clouds over the whole region of interest. We applied two types of image matching, an object-space-based matching technique and an image-space-based matching technique, and compared the performance of the two. The object-space-based matching used here sets a list of candidate height values for a fixed horizontal position in object space; for each height, its corresponding image point is calculated and similarity is measured by grey-level correlation. The image-space-based matching used here is a modified relaxation matching. We devised a global optimization scheme for finding optimal pairs (or groups) to which to apply image matching, defining local match regions in image or object space, and merging local point clouds into a global one. For optimal pair selection, tiepoints among images were extracted and a stereo coverage network was defined by forming a maximum spanning tree using the tiepoints. The experiments confirmed that, through image matching and global optimization, 3D point clouds were generated successfully. However, the results also revealed some limitations. For the image-based matching results, we observed some blanks in the 3D point clouds. For the object-space-based matching results, we observed more blunders than for the image-based matching, as well as noisy local height variations. We suspect these might be due to inaccurate orientation parameters. The work in this paper is still ongoing; we will further test our approach with more precise orientation parameters.
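
    The object-space-based matching idea can be sketched as follows; `project_to_image` is a hypothetical helper standing in for the sensor model and orientation parameters, and the correlation window half-size is an assumption:

```python
import numpy as np

def ncc(window_a, window_b):
    """Normalized grey-level correlation between two image windows."""
    a = window_a - window_a.mean()
    b = window_b - window_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def best_height(x, y, candidate_heights, image_a, image_b, project_to_image, half=5):
    """Pick the candidate height whose projections into both images correlate best."""
    scores = []
    for h in candidate_heights:
        (ra, ca) = project_to_image(x, y, h, "A")   # hypothetical projection for image A
        (rb, cb) = project_to_image(x, y, h, "B")   # hypothetical projection for image B
        win_a = image_a[ra - half:ra + half + 1, ca - half:ca + half + 1]
        win_b = image_b[rb - half:rb + half + 1, cb - half:cb + half + 1]
        scores.append(ncc(win_a, win_b))
    return candidate_heights[int(np.argmax(scores))]
```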

  20. Superimposing of virtual graphics and real image based on 3D CAD information

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper proposes methods for transforming 3D CAD models into 2D graphics, recognizing 3D objects by their features, and superimposing a virtual environment (VE) built in the computer onto a real image taken by a CCD camera, and presents computer simulation results.

  1. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    Science.gov (United States)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are very important and indispensable in diagnosis. The mainstream approach reconstructs a 3D image from a set of slice images, as in X-ray CT and MRI. However, such systems require a large installation space and are costly. On the other hand, a low-cost, compact 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique that uses 2-D X-ray radiographic images. This system can be realized at a lower cost than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a technique for 3D visualization from 2-D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  2. Scalable, high-performance 3D imaging software platform: system architecture and application to virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable a fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that needs to be processed. In this work, we have developed a software platform designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionality. We evaluated the performance of our platform by applying it to an electronic cleansing system in virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system.
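
    The distributable block-volume idea can be illustrated with a rough sketch (not the platform's actual API): the volume is split into blocks, the blocks are processed in parallel, and the results are reassembled. The block shape and worker count are assumptions, the processing function is assumed to preserve block shape, and the halo/overlap handling required by many filters is omitted for brevity:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def process_in_blocks(volume, block_shape, func, workers=4):
    """Apply a shape-preserving function to a 3D volume block by block, in parallel."""
    out = np.empty_like(volume)
    slices = []
    for z in range(0, volume.shape[0], block_shape[0]):
        for y in range(0, volume.shape[1], block_shape[1]):
            for x in range(0, volume.shape[2], block_shape[2]):
                slices.append(np.s_[z:z + block_shape[0],
                                    y:y + block_shape[1],
                                    x:x + block_shape[2]])
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = pool.map(func, [volume[sl] for sl in slices])
        for sl, result in zip(slices, results):
            out[sl] = result
    return out
```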

  3. 3D DWT-DCT and Logistic MAP Based Robust Watermarking for Medical Volume Data.

    Science.gov (United States)

    Li, Jingbing; Liu, Yaoli; Zhong, Jiling

    2014-01-01

    Applying digital watermarking techniques to the security protection of medical information systems has become a research hotspot in recent years. In this paper, we present a robust watermarking algorithm for medical volume data using the 3D DWT-DCT and the Logistic Map. After applying the Logistic Map to enhance the security of the watermark, the visual feature vector of the medical volume data is obtained using the 3D DWT-DCT. Combining the feature vector, the third-party concept and a hash function, a zero-watermarking scheme is achieved. The proposed algorithm mitigates the conflict between robustness and invisibility. The experimental results show that the proposed algorithm is robust to common and geometrical attacks.
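
    A rough sketch of the zero-watermark construction outlined above, assuming a Logistic-Map key stream, a feature vector taken from the 3D DWT approximation band followed by a 3D DCT, and XOR combination; the parameter values, wavelet choice and feature length are assumptions rather than the paper's settings:

```python
import numpy as np
import pywt
from scipy.fft import dctn

def logistic_sequence(length, x0=0.3, mu=3.99):
    """Binarised chaotic key stream from the Logistic Map x_{n+1} = mu*x_n*(1-x_n)."""
    seq, x = [], x0
    for _ in range(length):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return (np.array(seq) > 0.5).astype(np.uint8)

def feature_vector(volume, length=32):
    """Sign-based feature from low-order 3D DCT coefficients of the DWT approximation band."""
    low = pywt.dwtn(volume.astype(np.float32), "haar")["aaa"]   # 3D DWT approximation sub-band
    coeffs = dctn(low, norm="ortho").ravel()[:length]           # assumes the volume is large enough
    return (coeffs >= 0).astype(np.uint8)

def zero_watermark(volume, watermark_bits, x0=0.3):
    """Combine the chaotic-scrambled watermark with the volume's feature vector by XOR."""
    key = logistic_sequence(len(watermark_bits), x0)
    scrambled = np.bitwise_xor(np.asarray(watermark_bits, dtype=np.uint8), key)
    return np.bitwise_xor(feature_vector(volume, len(scrambled)), scrambled)
```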

  4. Multimodal Registration and Fusion for 3D Thermal Imaging

    Directory of Open Access Journals (Sweden)

    Moulay A. Akhloufi

    2015-01-01

    Full Text Available 3D vision is an area of computer vision that has attracted a great deal of research interest and has been widely studied. In recent years we have witnessed increasing interest from the industrial community, driven by recent advances in 3D technologies, which enable high-precision measurements at an affordable cost. With 3D vision techniques we can conduct advanced inspections of manufactured parts and metrology analysis; however, we are not able to detect subsurface defects. This kind of detection is achieved by other techniques, such as infrared thermography. In this work, we present a new registration framework for 3D and thermal infrared multimodal fusion. The resulting fused data can be used for advanced 3D inspection in Nondestructive Testing and Evaluation (NDT&E) applications. The fusion permits visible-surface and subsurface inspections to be conducted in the same process. Experimental tests were conducted with different materials. The obtained results are promising and show how these new techniques can be used efficiently in a combined NDT&E-metrology analysis of manufactured parts, in areas such as aerospace and automotive.

  5. Using Rotation for Steerable Needle Detection in 3D Color-Doppler Ultrasound Images

    OpenAIRE

    Mignon, Paul; Poignet, Philippe; Troccaz, Jocelyne

    2015-01-01

    This paper demonstrates a new way to detect needles in 3D color-Doppler volumes of biological tissues. It uses rotation to generate vibrations of a needle using an existing robotic brachytherapy system. The results of our detection for color-Doppler and B-Mode ultrasound are compared to a needle location reference given by robot odometry and robot ultrasound calibration. Average errors between detection and reference are 5.8 mm on needle tip for B-Mode images and 2.17 ...

  6. Comparison between 3D-CTA with volume reconstruction and 3D-DSA in diagnosis of acute rupture of minute cerebral aneurysms

    Institute of Scientific and Technical Information of China (English)

    曾少建; 舒航; 陈光忠; 李昭杰; 詹升全; 林晓风; 周东

    2010-01-01

    Objective: To evaluate the diagnostic value of three-dimensional computed tomographic angiography (3D-CTA) with volume reconstruction (VR) and 3D digital subtraction angiography (3D-DSA) in the diagnosis of minimal cerebral aneurysms. Methods: A total of 174 patients with subarachnoid hemorrhage treated in Guangdong General Hospital from May 2007 to November 2008 were first examined with original images obtained by a GE Light Speed Plus 64 volume spiral CT scanner and reconstructed in three dimensions with the volume rendering (VR) technique, assisted by multiplanar reconstruction (MPR); whole-cerebral angiography with 3D-DSA imaging was then performed. The volume-rendered images were assessed. Results: Eleven very small cerebral aneurysms were diagnosed by CTA in the 174 patients with subarachnoid hemorrhage, and 10 of them by 3D-DSA; all 11 were confirmed by intracranial operations. 3D-CTA (VR) clearly showed the shape and size of the intracranial aneurysms as well as their relationship to adjacent structures. There was no significant difference between 3D-DSA and 3D-CTA (VR) in the diagnosis of very small cerebral aneurysms. Conclusions: 3D-CTA (VR) is a reliable and rapid non-invasive diagnostic tool for very small intracranial aneurysms. For emergency operations, 3D-CTA (VR) can provide detailed imaging information to help develop the treatment strategy.

  7. High-Performance 3D Image Processing Architectures for Image-Guided Interventions

    Science.gov (United States)

    2008-01-01

    Circuits and Systems, vol. 1 (2), 2007, pp. 116-127.
    • O. Dandekar, C. Castro-Pareja, and R. Shekhar, “FPGA-based real-time 3D image...How low can we go?,” presented at IEEE International Symposium on Biomedical Imaging, 2006, pp. 502-505.
    • C. R. Castro-Pareja, O. Dandekar, and R...Venugopal, C. R. Castro-Pareja, and O. Dandekar, “An FPGA-based 3D image processor with median and convolution filters for real-time applications,” in

  8. Robust Reconstruction and Generalized Dual Hahn Moments Invariants Extraction for 3D Images

    Science.gov (United States)

    Mesbah, Abderrahim; Zouhri, Amal; El Mallahi, Mostafa; Zenkouar, Khalid; Qjidaa, Hassan

    2017-03-01

    In this paper, we introduce a new set of 3D weighted dual Hahn moments that are orthogonal on a non-uniform lattice and whose polynomials are scaled to be numerically stable, consequently producing a set of weighted orthonormal polynomials. The dual Hahn polynomials are the general case of the Tchebichef and Krawtchouk polynomials, and the orthogonality of the dual Hahn moments eliminates numerical approximations. The computational aspects and the symmetry property of 3D weighted dual Hahn moments are discussed in detail. To overcome their lack of invariance for large 3D images, which causes overflow issues, a generalized version of these moments, denoted 3D generalized weighted dual Hahn moment invariants, is presented and expressed as linear combinations of regular geometric moments. For 3D pattern recognition, a generalized expression of 3D weighted dual Hahn moment invariants under translation, scaling and rotation transformations has been proposed, providing a new set of 3D-GWDHMIs. In the experimental studies, the local and global capability of the 3D-WDHMs for reconstructing noise-free and noisy 3D images has been compared with that of other orthogonal moments, such as 3D Tchebichef and 3D Krawtchouk moments, using the Princeton Shape Benchmark database. For pattern recognition using the 3D-GWDHMIs as 3D object descriptors, the experimental results confirm that the proposed algorithm is more robust than other orthogonal moments for the classification of 3D images with and without noise.
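
    As background for the regular geometric moments mentioned above, the sketch below computes 3D raw moments and the intensity centroid of an image volume with plain NumPy; the order limit is an illustrative choice, and this is not the dual Hahn moment computation itself.

        import numpy as np

        def geometric_moments_3d(volume, max_order=2):
            """Raw moments m_pqr = sum over voxels of x**p * y**q * z**r * f(x, y, z)."""
            z, y, x = np.indices(volume.shape)
            moments = {}
            for p in range(max_order + 1):
                for q in range(max_order + 1):
                    for r in range(max_order + 1):
                        if p + q + r <= max_order:
                            moments[(p, q, r)] = float(np.sum((x**p) * (y**q) * (z**r) * volume))
            return moments

        def centroid(moments):
            m000 = moments[(0, 0, 0)]
            return (moments[(1, 0, 0)] / m000,
                    moments[(0, 1, 0)] / m000,
                    moments[(0, 0, 1)] / m000)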

  9. Adaptive clutter rejection for 3D color Doppler imaging: preliminary clinical study.

    Science.gov (United States)

    Yoo, Yang Mo; Sikdar, Siddhartha; Karadayi, Kerem; Kolokythas, Orpheus; Kim, Yongmin

    2008-08-01

    In three-dimensional (3D) ultrasound color Doppler imaging (CDI), effective rejection of flash artifacts caused by tissue motion (clutter) is important for improving sensitivity in visualizing blood flow in vessels. Since clutter characteristics can vary significantly during volume acquisition, a clutter rejection technique that can adapt to the underlying clutter conditions is desirable for 3D CDI. We have previously developed an adaptive clutter rejection (ACR) method, in which an optimum filter is dynamically selected from a set of predesigned clutter filters based on the measured clutter characteristics. In this article, we evaluated the ACR method with 3D in vivo data acquired from 37 kidney transplant patients clinically indicated for a duplex ultrasound examination. We compared ACR against a conventional clutter rejection method, down-mixing (DM), using a commonly-used flow signal-to-clutter ratio (SCR) and a new metric called fractional residual clutter area (FRCA). The ACR method was more effective in removing the flash artifacts while providing higher sensitivity in detecting blood flow in the arcuate arteries and veins in the parenchyma of transplanted kidneys. ACR provided 3.4 dB improvement in SCR over the DM method (11.4 +/- 1.6 dB versus 8.0 +/- 2.0 dB, p < 0.001) and had lower average FRCA values compared with the DM method (0.006 +/- 0.003 versus 0.036 +/- 0.022, p < 0.001) for all study subjects. These results indicate that the new ACR method is useful for removing nonstationary tissue motion while improving the image quality for visualizing 3D vascular structure in 3D CDI.
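
    The two metrics used above can be computed from simple region statistics, as sketched below; defining SCR as the mean-power ratio, in dB, between a flow region and a clutter region, and FRCA as the fraction of a region of interest occupied by detected residual clutter, are assumptions about the exact definitions, and the region masks are taken as given.

        import numpy as np

        def scr_db(doppler_power, flow_mask, clutter_mask):
            """Flow signal-to-clutter ratio (dB) from a power-Doppler volume and two masks."""
            return 10.0 * np.log10(np.mean(doppler_power[flow_mask]) /
                                   np.mean(doppler_power[clutter_mask]))

        def fractional_residual_clutter_area(residual_clutter_mask, roi_mask):
            """FRCA: fraction of the region of interest occupied by residual clutter."""
            return np.count_nonzero(residual_clutter_mask & roi_mask) / np.count_nonzero(roi_mask)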

  10. 2D-3D Registration of CT Vertebra Volume to Fluoroscopy Projection: A Calibration Model Assessment

    Directory of Open Access Journals (Sweden)

    Allen R

    2010-01-01

    This study extends previous research concerning intervertebral motion registration by means of 2D dynamic fluoroscopy to obtain a more comprehensive 3D description of vertebral kinematics. The problem of estimating the 3D rigid pose of a CT volume of a vertebra from its 2D X-ray fluoroscopy projection is addressed. 2D-3D registration is obtained by maximising a measure of similarity between Digitally Reconstructed Radiographs (obtained from the CT volume) and the real fluoroscopic projection. X-ray energy correction was performed. To assess the method, a calibration model was realised: a dry sheep vertebra was rigidly fixed to a frame of reference including metallic markers. Accurate measurement of the 3D orientation was obtained via single-camera calibration of the markers and held as the true 3D vertebra position; the vertebra 3D pose was then estimated and the results compared. Error analysis revealed accuracy of the order of 0.1 degree for the rotation angles, of about 1 mm for displacements parallel to the fluoroscopic plane, and of the order of 10 mm for the orthogonal displacement.
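
    The abstract does not name the similarity measure; mutual information is a common choice for DRR-to-fluoroscopy matching and is sketched below purely as an example, with the joint-histogram bin count as an illustrative parameter. During registration, the rigid-pose parameters of the CT volume would be adjusted to maximise this score.

        import numpy as np

        def mutual_information(drr, fluoro, bins=64):
            """Mutual information between a digitally reconstructed radiograph and a fluoroscopy image."""
            joint, _, _ = np.histogram2d(drr.ravel(), fluoro.ravel(), bins=bins)
            pxy = joint / joint.sum()
            px = pxy.sum(axis=1, keepdims=True)
            py = pxy.sum(axis=0, keepdims=True)
            nz = pxy > 0
            return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))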

  11. 3D functional ultrasound imaging of the cerebral visual system in rodents.

    Science.gov (United States)

    Gesnik, Marc; Blaize, Kevin; Deffieux, Thomas; Gennisson, Jean-Luc; Sahel, José-Alain; Fink, Mathias; Picaud, Serge; Tanter, Mickaël

    2017-02-03

    3D functional imaging of whole-brain activity during a visual task is challenging in rodents due to the complex three-dimensional shape of the brain regions involved and the fine spatial and temporal resolutions required to reveal the visual tract. By coupling functional ultrasound (fUS) imaging with a translational motorized stage and an episodic visual stimulation device, we managed to accurately map and recover the activity of the visual cortices, the Superior Colliculus (SC) and the Lateral Geniculate Nuclei (LGN) in 3D. Cerebral Blood Volume (CBV) responses during visual stimuli were highly correlated with the visual stimulus time profile in the visual cortices (r=0.6), SC (r=0.7) and LGN (r=0.7). These responses were found to depend on flickering frequency and contrast, and the optimal stimulus parameters for the largest CBV increases were obtained. In particular, increasing the flickering frequency above 7 Hz revealed a decrease of the visual cortex response while the SC response was preserved. Finally, cross-correlation between CBV signals exhibited significant delays (d=0.35 s +/- 0.1 s) between the blood volume responses in the SC and the visual cortices in response to our visual stimulus. These results emphasize the interest of fUS imaging as a whole-brain neuroimaging modality for vision studies in rodent models.

  12. Display of travelling 3D scenes from single integral-imaging capture

    Science.gov (United States)

    Martinez-Corral, Manuel; Dorado, Adrian; Hong, Seok-Min; Sola-Pikabea, Jorge; Saavedra, Genaro

    2016-06-01

    Integral imaging (InI) is a 3D auto-stereoscopic technique that captures and displays 3D images. We present a method for easily projecting the information recorded with this technique by transforming the integral image into a plenoptic image, as well as choosing, at will, the field of view (FOV) and the focused plane of the displayed plenoptic image. Furthermore, with this method we can generate, from a single integral image, a sequence of images that simulates a camera travelling through the scene. The application of this method makes it possible to improve the quality of 3D display images and videos.

  13. 3D fingerprint imaging system based on full-field fringe projection profilometry

    Science.gov (United States)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on the obtained 2D features of the fingerprint. However, the fingerprint is a 3D biological characteristic; the mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system based on the fringe projection technique is presented to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers is projected onto a finger surface. Viewed from another direction, the fringe patterns are deformed by the finger surface and captured by a CCD camera. The 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, the 3D calibration of the system, and software development. Experiments were carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
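
    To give a feel for fringe-pattern analysis, the sketch below recovers the wrapped phase from three sinusoidal fringe images with phase shifts of -120, 0 and +120 degrees. The classic three-step phase-shifting formula is used here only for illustration; the paper's optimum three-fringe-number strategy, phase unwrapping and system calibration steps are not reproduced.

        import numpy as np

        def wrapped_phase_three_step(i1, i2, i3):
            """Wrapped phase from fringe images with phase shifts of -120, 0 and +120 degrees."""
            return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

        # The result lies in (-pi, pi]; temporal or spatial unwrapping plus calibration
        # is still required to convert phase into absolute height of the finger surface.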

  14. A new approach of building 3D visualization framework for multimodal medical images display and computed assisted diagnosis

    Science.gov (United States)

    Li, Zhenwei; Sun, Jianyong; Zhang, Jianguo

    2012-02-01

    As more and more CT/MR studies are acquired with ever larger data sets, more and more radiologists and clinicians would like to use PACS workstations to display and manipulate these large image data sets with 3D rendering features. In this paper, we propose a design method and implementation strategy to develop a 3D image display component that provides not only standard 3D display functions but also multi-modal medical image fusion and computer-assisted diagnosis of coronary heart disease. The 3D component has been integrated into the PACS display workstation of Shanghai Huadong Hospital, and clinical practice showed that it is easy for radiologists and physicians to use 3D functions such as multi-modality (e.g. CT, MRI, PET, SPECT) visualization, registration and fusion, and quantitative lesion measurements. The users were satisfied with the rendering speed and the quality of the 3D reconstructions. The advantages of the component include low hardware requirements, easy integration, reliable performance and a comfortable user experience. With this system, radiologists and clinicians can manipulate 3D images easily and use the advanced visualization tools to facilitate their work on a PACS display workstation at any time.

  15. Accuracy of 3D Imaging Software in Cephalometric Analysis

    Science.gov (United States)

    2013-06-21

    orthodontic software program (Dolphin 3D, mfg, city, state) used for measurement and analysis of craniofacial dimensions. Three-dimensional reconstructions ... 143(8), 899-902. Baik H, Jeon J, Lee H. (2007). Facial soft tissue analysis of Korean adults with normal occlusion using a 3-dimensional laser

  16. 3D Imaging Technology’s Narrative Appropriation in Cinema

    NARCIS (Netherlands)

    Kiss, Miklós; van den Oever, Annie; Fossati, Giovanna

    2016-01-01

    This chapter traces the cinematic history of stereoscopy by focusing on the contemporary dispute about the values of 3D technology, which are seen as either mere visual attraction or as a technique that perfects the cinematic illusion through increasing perceptual immersion. By taking a neutral stan

  17. Statistical skull models from 3D X-ray images

    CERN Document Server

    Berar, Maxime; Bailly, G; Desvignes, Michel; Payan, Yohan

    2006-01-01

    We present two statistical models of the skull and mandible built upon an elastic registration method for 3D meshes. The aim of this work is to relate the degrees of freedom of skull anatomy, as static relations are of main interest for anthropology and legal medicine. Statistical models can effectively provide reconstructions together with statistical precision. In our applications, the patient-specific meshes of the skull and the mandible are high-density meshes extracted from 3D CT scans. All our patient-specific meshes are registered in a subject-shared reference system using our 3D-to-3D elastic matching algorithm. Registration is based upon the minimization of a distance, defined on the vertices, between the high-density mesh and a shared low-density mesh, in a multi-resolution approach. A Principal Component Analysis is performed on the normalised registered data to build a statistical linear model of the skull and mandible shape variation. The accuracy of the reconstruction is under the millimetre in the shape...
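
    The principal component analysis step described above can be sketched as follows: the registered meshes are flattened into vectors of vertex coordinates and the leading eigenvectors of their covariance give the main modes of shape variation. The use of scikit-learn's PCA and the number of retained modes are illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import PCA

        def build_shape_model(meshes, n_modes=10):
            """meshes: array of shape (n_subjects, n_vertices, 3) of registered vertex coordinates."""
            X = meshes.reshape(len(meshes), -1)      # one row of stacked coordinates per subject
            pca = PCA(n_components=n_modes)          # n_modes must not exceed the number of subjects
            pca.fit(X)
            return pca

        def synthesize_shape(pca, weights, n_vertices):
            """Generate a new shape from mode weights expressed in standard deviations."""
            k = len(weights)
            coords = pca.mean_ + pca.components_[:k].T @ (
                np.asarray(weights) * np.sqrt(pca.explained_variance_[:k]))
            return coords.reshape(n_vertices, 3)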

  18. 3D-printing of undisturbed soil imaged by X-ray

    Science.gov (United States)

    Bacher, Matthias; Koestel, John; Schwen, Andreas

    2014-05-01

    The unique pore structures in soils are easily altered by water flow. Each sample has a different morphology, and the results of repetitions vary as well. Soil macropores printed in a durable material avoid erosion and have a known morphology. Therefore, the potential and limitations of reproducing an undisturbed soil sample by 3D-printing were evaluated. We scanned an undisturbed soil column of Ultuna clay soil with a diameter of 7 cm by micro X-ray computed tomography at a resolution of 51 microns. A subsample cube of 2.03 cm edge length with connected macropores was cut out of this 3D image and printed in five different materials by a 3D-printing service provider. The materials were ABS, Alumide, High Detail Resin, Polyamide and Prime Grey. The five print-outs of the subsample were tested for their hydraulic conductivity using the falling-head method. The hydrophobicity was tested with an adapted sessile drop method. To determine the morphology of the print-outs and compare it to the real soil, the print-outs were also scanned by X-ray. The images were analysed with the open-source program ImageJ. The five print-outs copied from the subsample of the soil column were compared by means of their macropore network connectivity, porosity, surface volume, tortuosity and skeleton. The comparison of pore morphology between the real soil and the print-outs showed that Polyamide reproduced the soil macropore structure best, while the Alumide print-out was the least detailed. Only the largest macropore was represented in all five print-outs. Printing residual material or printing aid material remained in and clogged the pores of all print-out materials apart from Prime Grey. Therefore infiltration was blocked in these print-outs and the materials are not suitable even though the 3D-printed pore shapes were well reproduced. All of the investigated materials were insoluble. The sessile drop method showed angles between 53 and 85 degrees. Prime Grey had the fastest flow rate; the

  19. Advantages and disadvantages of 3D ultrasound of thyroid nodules including thin slice volume rendering

    Directory of Open Access Journals (Sweden)

    Slapa Rafal

    2011-01-01

    Background: The purpose of this study was to assess the advantages and disadvantages of 3D gray-scale and power Doppler ultrasound, including thin slice volume rendering (TSVR), applied to the evaluation of thyroid nodules. Methods: A retrospective evaluation by two observers of the volumes of 71 thyroid nodules (55 benign, 16 cancers) was performed using the new TSVR technique. A dedicated 4D ultrasound scanner with an automatic 6-12 MHz 4D probe was used. Statistical analysis was performed with Stata v. 8.2. Results: Multiple logistic regression analysis demonstrated that independent risk factors for thyroid cancer identified by 3D ultrasound include: (a) ill-defined borders of the nodule on MPR presentation, (b) a lobulated shape of the nodule in the c-plane and (c) a density of central vessels in the nodule within the minimal or maximal ranges. A combination of features provided a sensitivity of 100% and a specificity of 60-69% for thyroid cancer. Calcification/microcalcification-like echogenic foci on 3D ultrasound proved not to be a risk factor for thyroid cancer. Storage of the 3D data of the whole nodules enabled subsequent evaluation of new parameters and with new rendering algorithms. Conclusions: Our results indicate that 3D ultrasound is a practical and reproducible method for the evaluation of thyroid nodules. 3D ultrasound stores volumes comprising the whole lesion or organ. Future detailed evaluations of the data are possible, looking for features that were not fully appreciated at the time of collection or applying new algorithms for volume rendering in order to gain important information. Three-dimensional ultrasound data could be included in thyroid cancer databases. Further multicenter large-scale studies are warranted.

  20. Using rotation for steerable needle detection in 3D color-Doppler ultrasound images.

    Science.gov (United States)

    Mignon, Paul; Poignet, Philippe; Troccaz, Jocelyne

    2015-08-01

    This paper demonstrates a new way to detect needles in 3D color-Doppler volumes of biological tissues. It uses rotation, applied with an existing robotic brachytherapy system, to generate vibrations of the needle. The results of our detection for color-Doppler and B-mode ultrasound are compared to a needle location reference given by robot odometry and robot-ultrasound calibration. Average errors between detection and reference are 5.8 mm at the needle tip for B-mode images and 2.17 mm for color-Doppler images. These results show that color-Doppler imaging leads to more robust needle detection in noisy environments with poor needle visibility or when the needle interacts with other objects.

  1. Three-dimensional analysis of alveolar bone resorption by image processing of 3-D dental CT images

    Science.gov (United States)

    Nagao, Jiro; Kitasaka, Takayuki; Mori, Kensaku; Suenaga, Yasuhito; Yamada, Shohzoh; Naitoh, Munetaka

    2006-03-01

    We have developed a novel system that provides total support for the assessment of alveolar bone resorption, caused by periodontitis, based on three-dimensional (3-D) dental CT images. In spite of the difficulty of perceiving the complex 3-D shape of resorption, dentists assessing resorption location and severity have been relying on two-dimensional radiography and probing, which merely provides one-dimensional information (depth) about the resorption shape. However, there has been little work on assisting the assessment of the disease with 3-D image processing and visualization techniques. This work provides quantitative evaluation results and figures for our system, which measures the three-dimensional shape and spread of resorption. It has the following functions: (1) it measures the depth of resorption by virtually simulating probing in the 3-D CT images, taking advantage of image processing to avoid obstruction by teeth on the inter-proximal sides and to use much smaller measurement intervals than the conventional examination; (2) it visualizes the distribution of the depth with movies and graphs; (3) it produces a quantitative index and an intuitive visual representation of the spread of resorption in the inter-radicular region in terms of area; and (4) it calculates the volume of resorption as another severity index in the inter-radicular region and the region outside it. Experimental results on two cases of 3-D dental CT images, and a comparison of the results with the clinical examination results and experts' measurements of the corresponding patients, confirmed that the proposed system gives satisfactory results, including 0.1 to 0.6 mm of resorption measurement (probing) error and a fairly intuitive presentation of measurement and calculation results.

  2. 360 degree realistic 3D image display and image processing from real objects

    Science.gov (United States)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-09-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS camera based image acquisition platform was built to feed the display engine, which can capture full 360-degree continuous images of the sample at the center. Customized image processing techniques such as scaling, rotation and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of our system.

  3. 360 degree realistic 3D image display and image processing from real objects

    Science.gov (United States)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-12-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density directional continuous 3D motion images can be displayed easily with only one spatial light modulator. Using a holographic screen as the beam deflector, a 360-degree full horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS camera based image acquisition platform was built to feed the display engine, which can capture full 360-degree continuous images of the sample at the center. Customized image processing techniques such as scaling, rotation and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of our system.

  4. Correlative nanoscale 3D imaging of structure and composition in extended objects.

    Directory of Open Access Journals (Sweden)

    Feng Xu

    Structure and composition at the nanoscale determine the behavior of biological systems and engineered materials. The drive to understand and control this behavior has placed strong demands on developing methods for high-resolution imaging. In general, the improvement of three-dimensional (3D) resolution is accomplished by tightening constraints: reduced manageable specimen sizes, decreasing analyzable volumes, degrading contrasts, and increasing sample preparation efforts. Aiming to overcome these limitations, we present a non-destructive and multiple-contrast imaging technique, using principles of X-ray laminography, thus generalizing tomography towards laterally extended objects. We retain advantages that are usually restricted to 2D microscopic imaging, such as scanning of large areas and subsequent zooming-in towards a region of interest at the highest possible resolution. Our technique permits correlating the 3D structure and the elemental distribution, yielding a high sensitivity to variations of the electron density via coherent imaging and to local trace-element quantification through X-ray fluorescence. We demonstrate the method by imaging a lithographic nanostructure and an aluminum alloy. Analyzing a biological system, we visualize in lung tissue the subcellular response to toxic stress after exposure to nanotubes. We show that most of the nanotubes are trapped inside alveolar macrophages, while a small portion of the nanotubes has crossed the barrier to the cellular space of the alveolar wall. In general, our method is non-destructive and can be combined with different sample environments or loading conditions. We therefore anticipate that correlative X-ray nano-laminography will enable a variety of in situ and in operando 3D studies.

  5. Confocal Image 3D Surface Measurement with Optical Fiber Plate

    Institute of Scientific and Technical Information of China (English)

    WANG Zhao; ZHU Sheng-cheng; LI Bing; TAN Yu-shan

    2004-01-01

    A whole-field 3D surface measurement system for semiconductor wafer inspection is described. The system consists of an optical fiber plate, which can split the light beam into N^2 sub-beams to realize whole-field inspection. A special prism is used to separate the illumination light and the signal light. This setup is characterized by high precision, high speed and a simple structure.

  6. 3D prostate MR-TRUS non-rigid registration using dual optimization with volume-preserving constraint

    Science.gov (United States)

    Qiu, Wu; Yuan, Jing; Fenster, Aaron

    2016-03-01

    We introduce an efficient and novel convex optimization-based approach to the challenging non-rigid registration of 3D prostate magnetic resonance (MR) and transrectal ultrasound (TRUS) images, which incorporates a new volume-preserving constraint to substantially improve the accuracy of targeting suspicious regions during 3D TRUS-guided prostate biopsy. In particular, we propose a fast sequential convex optimization scheme to efficiently minimize the employed highly nonlinear image fidelity function using the robust multi-channel modality independent neighborhood descriptor (MIND) across the two modalities of MR and TRUS. The registration accuracy was evaluated using 10 patient images by calculating the target registration error (TRE) from manually identified corresponding intrinsic fiducials in the whole prostate gland. We also compared the manually segmented MR and TRUS prostate surfaces in the registered images in terms of the Dice similarity coefficient (DSC), mean absolute surface distance (MAD), and maximum absolute surface distance (MAXD). Experimental results showed that the proposed method with the introduced volume-preserving prior significantly improves the registration accuracy compared to the method without the volume-preserving constraint, yielding an overall mean TRE of 2.0 +/- 0.7 mm, an average DSC of 86.5 +/- 3.5%, a MAD of 1.4 +/- 0.6 mm and a MAXD of 6.5 +/- 3.5 mm.
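
    The Dice similarity coefficient reported above is straightforward to compute from two binary segmentation masks; the sketch below assumes the MR and TRUS prostate segmentations have already been resampled onto a common voxel grid.

        import numpy as np

        def dice_coefficient(mask_a, mask_b):
            """DSC = 2 * |A intersect B| / (|A| + |B|) for two boolean volumes on the same grid."""
            a = mask_a.astype(bool)
            b = mask_b.astype(bool)
            return 2.0 * np.count_nonzero(a & b) / (np.count_nonzero(a) + np.count_nonzero(b))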

  7. Semiautomatic registration of 3D transabdominal ultrasound images for patient repositioning during postprostatectomy radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Presles, Benoît, E-mail: benoit.presles@creatis.insa-lyon.fr; Rit, Simon; Sarrut, David [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Lyon F-69621, France and Léon Bérard Cancer Center, Université de Lyon, Lyon F-69373 (France); Fargier-Voiron, Marie; Liebgott, Hervé [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, Lyon F-69621 (France); Biston, Marie-Claude; Munoz, Alexandre; Pommier, Pascal [Léon Bérard Cancer Center, Université de Lyon, Lyon F-69373 (France); Lynch, Rod [The Andrew Love Cancer Centre, University Hospital Geelong, Geelong 3220 (Australia)

    2014-12-15

    Purpose: The aim of the present work is to propose and evaluate registration algorithms for three-dimensional (3D) transabdominal (TA) ultrasound (US) images used to set up postprostatectomy patients during radiation therapy. Methods: Three registration methods have been developed and evaluated to register a reference 3D-TA-US image acquired during the planning CT session and a 3D-TA-US image acquired before each treatment session. The first method (method A) uses only gray-value information, whereas the second (method B) uses only gradient information. The third (method C) combines both sets of information. All methods restrict the comparison to a region of interest computed from the dilated reference positioning volume drawn on the reference image and use mutual information as a similarity measure. The considered geometric transformations are translations and have been optimized using the adaptive stochastic gradient descent algorithm. Validation has been carried out using manual registration, by three operators, of the same set of image pairs as used by the algorithms. Sixty-two treatment US images of seven patients irradiated after a prostatectomy have been registered to their corresponding reference US image. The reference registration has been defined as the average of the manual registration values. Registration error has been calculated by subtracting the reference registration from the algorithm result. For each session, the method has been considered a failure if the registration error was above both the interoperator variability of the session and a global threshold of 3.0 mm. Results: None of the proposed registration algorithms shows a systematic bias. Method B leads to the best results, with mean errors of -0.6, 0.7, and -0.2 mm in the left-right (LR), superior-inferior (SI), and anterior-posterior (AP) directions, respectively. With this method, the standard deviations of the mean error are 1.7, 2.4, and 2.6 mm in the LR, SI, and AP directions, respectively.

  8. Contributions in compression of 3D medical images and 2D images; Contributions en compression d'images medicales 3D et d'images naturelles 2D

    Energy Technology Data Exchange (ETDEWEB)

    Gaudeau, Y

    2006-12-15

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions and the rapid development of distant radiology, make the use of compression inevitable. Indeed, although the medical community has so far favoured lossless compression, most applications suffer from the low compression ratios this kind of compression provides. In this context, compression with acceptable losses may be the most appropriate answer. We therefore propose a new lossy coding scheme for medical images based on the 3D (three-dimensional) wavelet transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ). Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which makes it possible to take into account correlations between neighbouring elementary volumes. At high compression ratios, we show that it can outperform the best existing methods both visually and numerically. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on the performance of computer-aided detection (CAD) of solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in reducing the complexity of our compression scheme. The first bit allocation, dedicated to 2D DZLVQ, uses an exponential model of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)

  9. How to get spatial resolution inside probe volumes of commercial 3D LDA systems

    Energy Technology Data Exchange (ETDEWEB)

    Strunck, V.; Sodomann, T.; Mueller, H.; Dopheide, D. [Section of Fluid Flow Measuring Techniques, Physikalisch-Technische Bundesanstalt, Bundesallee 100, 38116, Braunschweig (Germany)

    2004-01-01

    In laser Doppler anemometry (LDA) the aim is often to determine the velocity profile of a given fluid flow. The spatial resolution of such velocity profiles is limited in principle by the size of the probe volume. The method of using time-of-flight data from two probe volumes allows the spatial resolution to be improved by at least one order of magnitude and small-scale velocity profiles to be measured inside the measuring volume along the optical axis of commercially available 3D anemometers without moving the probe. No change of the optical set-up is necessary. An increased spatial resolution helps to acquire more precise data in areas where the flow velocity changes rapidly, as shown in the vicinity of the stagnation point of a cuboid. In the overlapping region of the three measuring volumes, a spatially resolved 3D velocity vector profile is obtained in the direction of the optical axis under near-plane flow conditions. In plane laminar flows the probe volume is extended by a few millimetres. The limitation of the method to a plane flow is that it would require a two-component LDA in a very special off-axis arrangement, but this arrangement is available in most commercial 3D systems. (orig.)

  10. 3-D segmentation of retinal blood vessels in spectral-domain OCT volumes of the optic nerve head

    Science.gov (United States)

    Lee, Kyungmoo; Abràmoff, Michael D.; Niemeijer, Meindert; Garvin, Mona K.; Sonka, Milan

    2010-03-01

    Segmentation of retinal blood vessels can provide important information for detecting and tracking retinal vascular diseases including diabetic retinopathy, arterial hypertension, arteriosclerosis and retinopathy of prematurity (ROP). Many studies on 2-D segmentation of retinal blood vessels from a variety of medical images have been performed. However, 3-D segmentation of retinal blood vessels from spectral-domain optical coherence tomography (OCT) volumes, which is capable of providing geometrically accurate vessel models, to the best of our knowledge, has not been previously studied. The purpose of this study is to develop and evaluate a method that can automatically detect 3-D retinal blood vessels from spectral-domain OCT scans centered on the optic nerve head (ONH). The proposed method utilized a fast multiscale 3-D graph search to segment retinal surfaces as well as a triangular mesh-based 3-D graph search to detect retinal blood vessels. An experiment on 30 ONH-centered OCT scans (15 right eye scans and 15 left eye scans) from 15 subjects was performed, and the mean unsigned error in 3-D of the computer segmentations compared with the independent standard obtained from a retinal specialist was 3.4 +/- 2.5 voxels (0.10 +/- 0.07 mm).

  11. 3D synthetic aperture imaging using a virtual source element in the elevation plane

    DEFF Research Database (Denmark)

    Nikolov, Svetoslav; Jensen, Jørgen Arendt

    2000-01-01

    The conventional scanning techniques are not directly extendable to 3D real-time imaging because of the time necessary to acquire one volume. Using a linear array and a synthetic transmit aperture, the volume can be scanned plane by plane. Up to 1000 planes per second can be scanned for a typical... dynamic focusing in the elevation plane. A 0.1 mm point scatterer was mounted in an agar block and scanned in a water bath. The transducer is a 64-element linear array with a pitch of 209 μm. The transducer height is 4 mm in the elevation plane and it is focused at 20 mm, giving an F-number of 5. The point...

  12. Processing of MRI images weighted in TOF for blood vessels analysis: 3-D reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez D, J.; Cordova F, T. [Universidad de Guanajuato, Campus Leon, Departamento de Ingenieria Fisica, Loma del Bosque No. 103, Lomas del Campestre, 37150 Leon, Guanajuato (Mexico); Cruz A, I., E-mail: hernandezdj.gto@gmail.com [CONACYT, Centro de Investigacion en Matematicas, A. C., Jalisco s/n, Col. Valenciana, 36000 Guanajuato, Gto. (Mexico)

    2015-10-15

    This paper presents a novel approach based on intensity differences for the identification of vascular structures in medical images from time-of-flight (TOF) MRI studies. The method rests on the hypothesis that the high intensities belonging to the vascular system in TOF images can be segmented by thresholding the histogram. Vascular structures are enhanced using a vesselness filter, after which a decision based on fuzzy thresholding minimizes the error in the selection of vascular structures. A brief introduction is given to vascular system disorders and how images have aided their diagnosis, and the physical history of the different imaging modalities and the evolution of digital images with computers is summarized. Segmentation and 3-D reconstruction were performed on time-of-flight images, which are typically used in the medical diagnosis of cerebrovascular diseases. The proposed method shows less error in the segmentation and reconstruction of volumes related to the vascular system, clearer images and less noise compared with edge detection methods. (Author)
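
    A minimal sketch of vesselness-based enhancement followed by thresholding of a TOF volume is shown below; it uses scikit-image's Frangi filter and a plain Otsu threshold as a stand-in for the fuzzy thresholding step, so the filter scales and the thresholding rule are illustrative assumptions.

        import numpy as np
        from skimage.filters import frangi, threshold_otsu

        def segment_vessels(tof_volume, sigmas=(1, 2, 3)):
            """Enhance bright tubular structures in a 3D TOF volume and threshold them."""
            vesselness = frangi(tof_volume.astype(float), sigmas=sigmas, black_ridges=False)
            mask = vesselness > threshold_otsu(vesselness)
            return vesselness, mask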

  13. Prostate and seminal vesicle volume based consideration of prostate cancer patients for treatment with 3D-conformal or intensity-modulated radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Reddy, Nandanuri M. S.; Nori, Dattatreyudu; Chang, Hyesook; Lange, Christopher S.; Ravi, Akkamma [Department of Radiation Oncology, New York Hospital Queens, Flushing, New York 11355 (United States); Department of Radiation Oncology, State University of New York Downstate Medical Center, Brooklyn, New York 11203 (United States); Department of Radiation Oncology, New York Hospital Queens, Flushing, New York 11355 (United States)

    2010-07-15

    Purpose: The purpose of this article was to determine the suitability of the prostate and seminal vesicle volumes as factors to consider patients for treatment with image-guided 3D-conformal radiation therapy (3D-CRT) or intensity-modulated radiation therapy (IMRT), using common dosimetry parameters as comparison tools. Methods: Dosimetry of 3D and IMRT plans for 48 patients was compared. Volumes of prostate, SV, rectum, and bladder, and prescriptions were the same for both plans. For both 3D and IMRT plans, expansion margins to prostate+SV (CTV) and prostate were 0.5 cm posterior and superior and 1 cm in other dimensions to create PTV and CDPTV, respectively. Six-field 3D plans were prepared retrospectively. For 3D plans, an additional 0.5 cm margin was added to PTV and CDPTV. Prescription for both 3D and IMRT plans was the same: 45 Gy to CTV followed by a 36 Gy boost to prostate. Dosimetry parameters common to 3D and IMRT plans were used for comparison: Mean doses to prostate, CDPTV, SV, rectum, bladder, and femurs; percent volume of rectum and bladder receiving 30 (V30), 50 (V50), and 70 Gy (V70), dose to 30% of rectum and bladder, minimum and maximum point dose to CDPTV, and prescription dose covering 95% of CDPTV (D95). Results: When the data for all patients were combined, mean dose to prostate and CDPTV was higher with 3D than IMRT plans (P<0.01). Mean D95 to CDPTV was the same for 3D and IMRT plans (P>0.2). On average, among all cases, the minimum point dose was less for 3D-CRT plans and the maximum point dose was greater for 3D-CRT than for IMRT (P<0.01). Mean dose to 30% rectum with 3D and IMRT plans was comparable (P>0.1). V30 was less (P<0.01), V50 was the same (P>0.2), and V70 was more (P<0.01) for rectum with 3D than IMRT plans. Mean dose to bladder was less with 3D than IMRT plans (P<0.01). V30 for bladder with 3D plans was less than that of IMRT plans (P<0.01). V50 and V70 for 3D plans were the same for 3D and IMRT plans (P>0.2). Mean dose to femurs

  14. In vivo validation of cardiac output assessment in non-standard 3D echocardiographic images

    Science.gov (United States)

    Nillesen, M. M.; Lopata, R. G. P.; de Boode, W. P.; Gerrits, I. H.; Huisman, H. J.; Thijssen, J. M.; Kapusta, L.; de Korte, C. L.

    2009-04-01

    Automatic segmentation of the endocardial surface in three-dimensional (3D) echocardiographic images is an important tool to assess left ventricular (LV) geometry and cardiac output (CO). The presence of speckle noise as well as the nonisotropic characteristics of the myocardium impose strong demands on the segmentation algorithm. In the analysis of normal heart geometries of standardized (apical) views, it is advantageous to incorporate a priori knowledge about the shape and appearance of the heart. In contrast, when analyzing abnormal heart geometries, for example in children with congenital malformations, this a priori knowledge about the shape and anatomy of the LV might induce erroneous segmentation results. This study describes a fully automated segmentation method for the analysis of non-standard echocardiographic images, without making strong assumptions on the shape and appearance of the heart. The method was validated in vivo in a piglet model. Real-time 3D echocardiographic image sequences of five piglets were acquired in radiofrequency (rf) format. These ECG-gated full volume images were acquired intra-operatively in a non-standard view. Cardiac blood flow was measured simultaneously by an ultrasound transit time flow probe positioned around the common pulmonary artery. Three-dimensional adaptive filtering using the characteristics of speckle was performed on the demodulated rf data to reduce the influence of speckle noise and to optimize the distinction between blood and myocardium. A gradient-based 3D deformable simplex mesh was then used to segment the endocardial surface. A gradient and a speed force were included as external forces of the model. To balance data fitting and mesh regularity, one fixed set of weighting parameters of internal, gradient and speed forces was used for all data sets. End-diastolic and end-systolic volumes were computed from the segmented endocardial surface. The cardiac output derived from this automatic segmentation was

  15. In vivo validation of cardiac output assessment in non-standard 3D echocardiographic images

    Energy Technology Data Exchange (ETDEWEB)

    Nillesen, M M; Lopata, R G P; Gerrits, I H; Thijssen, J M; De Korte, C L [Clinical Physics Laboratory-833, Department of Pediatrics, Radboud University Nijmegen Medical Centre, Nijmegen (Netherlands); De Boode, W P [Neonatology, Department of Pediatrics, Radboud University Nijmegen Medical Centre, Nijmegen (Netherlands); Huisman, H J [Department of Radiology, Radboud University Nijmegen Medical Centre, Nijmegen (Netherlands); Kapusta, L [Pediatric Cardiology, Department of Pediatrics, Radboud University Nijmegen Medical Centre, Nijmegen (Netherlands)], E-mail: m.m.nillesen@cukz.umcn.nl

    2009-04-07

    Automatic segmentation of the endocardial surface in three-dimensional (3D) echocardiographic images is an important tool to assess left ventricular (LV) geometry and cardiac output (CO). The presence of speckle noise as well as the nonisotropic characteristics of the myocardium impose strong demands on the segmentation algorithm. In the analysis of normal heart geometries of standardized (apical) views, it is advantageous to incorporate a priori knowledge about the shape and appearance of the heart. In contrast, when analyzing abnormal heart geometries, for example in children with congenital malformations, this a priori knowledge about the shape and anatomy of the LV might induce erroneous segmentation results. This study describes a fully automated segmentation method for the analysis of non-standard echocardiographic images, without making strong assumptions on the shape and appearance of the heart. The method was validated in vivo in a piglet model. Real-time 3D echocardiographic image sequences of five piglets were acquired in radiofrequency (rf) format. These ECG-gated full volume images were acquired intra-operatively in a non-standard view. Cardiac blood flow was measured simultaneously by an ultrasound transit time flow probe positioned around the common pulmonary artery. Three-dimensional adaptive filtering using the characteristics of speckle was performed on the demodulated rf data to reduce the influence of speckle noise and to optimize the distinction between blood and myocardium. A gradient-based 3D deformable simplex mesh was then used to segment the endocardial surface. A gradient and a speed force were included as external forces of the model. To balance data fitting and mesh regularity, one fixed set of weighting parameters of internal, gradient and speed forces was used for all data sets. End-diastolic and end-systolic volumes were computed from the segmented endocardial surface. The cardiac output derived from this automatic segmentation was

  16. Quality Prediction of Asymmetrically Distorted Stereoscopic 3D Images.

    Science.gov (United States)

    Wang, Jiheng; Rehman, Abdul; Zeng, Kai; Wang, Shiqi; Wang, Zhou

    2015-11-01

    Objective quality assessment of distorted stereoscopic images is a challenging problem, especially when the distortions in the left and right views are asymmetric. Existing studies suggest that simply averaging the quality of the left and right views well predicts the quality of symmetrically distorted stereoscopic images, but generates substantial prediction bias when applied to asymmetrically distorted stereoscopic images. In this paper, we first build a database that contains both single-view and symmetrically and asymmetrically distorted stereoscopic images. We then carry out a subjective test, where we find that the quality prediction bias of the asymmetrically distorted images could lean toward opposite directions (overestimate or underestimate), depending on the distortion types and levels. Our subjective test also suggests that eye dominance effect does not have strong impact on the visual quality decisions of stereoscopic images. Furthermore, we develop an information content and divisive normalization-based pooling scheme that improves upon structural similarity in estimating the quality of single-view images. Finally, we propose a binocular rivalry-inspired multi-scale model to predict the quality of stereoscopic images from that of the single-view images. Our results show that the proposed model, without explicitly identifying image distortion types, successfully eliminates the prediction bias, leading to significantly improved quality prediction of the stereoscopic images.

  17. Reconstruction of 3D Digital Image of Weeping Forsythia Pollen

    Science.gov (United States)

    Liu, Dongwu; Chen, Zhiwei; Xu, Hongzhi; Liu, Wenqi; Wang, Lina

    Confocal microscopy, which is a major advance upon normal light microscopy, has been used in a number of scientific fields. With confocal microscopy techniques, cells and tissues can be visualized deep within samples, and three-dimensional images created. Compared with conventional microscopes, the confocal microscope improves the resolution of images by eliminating out-of-focus light. Moreover, the confocal microscope has a higher level of sensitivity due to highly sensitive light detectors and the ability to accumulate images captured over time. In the present study, a series of Weeping Forsythia pollen digital images (35 images in total) was acquired with a confocal microscope, and the three-dimensional digital image of the pollen was reconstructed. Our results indicate that it is very easy to analyse the three-dimensional digital image of the pollen with a confocal microscope and the probe acridine orange (AO).

  18. 3D Curvelet-Based Segmentation and Quantification of Drusen in Optical Coherence Tomography Images

    Directory of Open Access Journals (Sweden)

    M. Esmaeili

    2017-01-01

    Spectral-Domain Optical Coherence Tomography (SD-OCT) is a widely used interferometric diagnostic technique in ophthalmology that provides novel in vivo information on depth-resolved inner and outer retinal structures. This imaging modality can assist clinicians in monitoring the progression of Age-related Macular Degeneration (AMD) by providing high-resolution visualization of drusen. Quantitative tools for assessing drusen volume that are indicative of AMD progression may lead to appropriate metrics for selecting treatment protocols. To address this need, a fully automated algorithm was developed to segment drusen area and volume from SD-OCT images. The proposed algorithm consists of three parts: (1) preprocessing, which includes creating a binary mask and removing the possibly highly reflective posterior hyaloid, and which is used in the accurate detection of the inner segment/outer segment (IS/OS) junction layer and Bruch's membrane (BM) retinal layers; (2) coarse segmentation, in which the 3D curvelet transform and graph theory are employed to obtain the possible candidate drusenoid regions; (3) fine segmentation, in which morphological operators are used to remove falsely extracted elongated structures and obtain the refined segmentation results. The proposed method was evaluated on 20 publicly available volumetric scans acquired with a Bioptigen spectral-domain ophthalmic imaging system. The average true positive and false positive volume fractions (TPVF and FPVF) for the segmentation of drusenoid regions were found to be 89.15 ± 3.76% and 0.17 ± 0.18%, respectively.
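
    The volume-fraction metrics quoted above can be computed from a binary segmentation and a reference annotation as sketched below; treating TPVF as the fraction of reference drusen volume recovered and FPVF as the falsely segmented volume relative to the non-drusen background volume is an assumption about the exact definitions used in the study.

        import numpy as np

        def volume_fractions(segmentation, reference):
            """TPVF and FPVF (in percent) for a binary segmentation against a binary reference."""
            seg = segmentation.astype(bool)
            ref = reference.astype(bool)
            tpvf = 100.0 * np.count_nonzero(seg & ref) / np.count_nonzero(ref)
            fpvf = 100.0 * np.count_nonzero(seg & ~ref) / np.count_nonzero(~ref)
            return tpvf, fpvf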

  19. Infrared imaging of the polymer 3D-printing process

    Science.gov (United States)

    Dinwiddie, Ralph B.; Kunc, Vlastimil; Lindal, John M.; Post, Brian; Smith, Rachel J.; Love, Lonnie; Duty, Chad E.

    2014-05-01

    Both mid-wave and long-wave IR cameras are used to measure various temperature profiles in thermoplastic parts as they are printed. Two significantly different 3D-printers are used in this study. The first is a small-scale, commercially available Solidoodle 3 printer, which prints parts with layer thicknesses on the order of 125 μm. The second printer used is a "Big Area Additive Manufacturing" (BAAM) 3D-printer developed at Oak Ridge National Laboratory. The BAAM prints parts with a layer thickness of 4.06 mm. Of particular interest is the temperature of the previously deposited layer as the new hot layer is about to be extruded onto it. The two layers are expected to have a stronger bond if the temperature of the substrate layer is above the glass transition temperature. This paper describes the measurement technique and results for a study of temperature decay and substrate layer temperature for ABS thermoplastic with and without the addition of chopped carbon fibers.

  20. Breast Density Analysis with Automated Whole-Breast Ultrasound: Comparison with 3-D Magnetic Resonance Imaging.

    Science.gov (United States)

    Chen, Jeon-Hor; Lee, Yan-Wei; Chan, Si-Wa; Yeh, Dah-Cherng; Chang, Ruey-Feng

    2016-05-01

    In this study, a semi-automatic breast segmentation method based on the rib shadow was proposed to extract breast regions from 3-D automated whole-breast ultrasound (ABUS) images. The density results were correlated with breast density values acquired with 3-D magnetic resonance imaging (MRI). MRI images of 46 breasts were collected from 23 women without a history of breast disease. Each subject also underwent ABUS. We used Otsu's thresholding method on ABUS images to obtain local rib shadow information, which was combined with the global rib shadow information (extracted from all slice projections) and integrated with the anatomical breast tissue structure to determine the chest wall line. The fuzzy C-means classifier was used to extract the fibroglandular tissues from the acquired images. Whole-breast volume (WBV) and breast percentage density (BPD) were calculated in both modalities. Linear regression was used to compute the correlation of density results between the two modalities. The consistency of the density measurement was also analyzed on the basis of intra- and inter-operator variation. There was a high correlation of density results between MRI and ABUS (R² = 0.798 for WBV, R² = 0.825 for BPD). The mean WBV from ABUS images was slightly smaller than the mean WBV from MR images (MRI: 342.24 ± 128.08 cm³, ABUS: 325.47 ± 136.16 cm³, p MRI: 24.71 ± 15.16%, ABUS: 28.90 ± 17.73%, p breast density measurement variation between the two modalities. Our results revealed a high correlation in WBV and BPD between MRI and ABUS. Our study suggests that ABUS provides breast density information useful in the assessment of breast health.
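
    The percentage-density calculation described above can be sketched as follows; replacing the fuzzy C-means classification with a single Otsu threshold inside the breast mask, assuming the breast mask is already available, and assuming that fibroglandular tissue is the brighter class are simplifications made for illustration.

        import numpy as np
        from skimage.filters import threshold_otsu

        def breast_percent_density(volume, breast_mask, voxel_volume_cm3):
            """Whole-breast volume (cm^3) and percentage density from a 3D ABUS volume."""
            breast_voxels = volume[breast_mask]
            t = threshold_otsu(breast_voxels)              # stand-in for fuzzy C-means
            fibroglandular = breast_voxels > t             # assumes dense tissue is brighter
            wbv = np.count_nonzero(breast_mask) * voxel_volume_cm3
            bpd = 100.0 * np.count_nonzero(fibroglandular) / breast_voxels.size
            return wbv, bpd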

  1. 3D iterative helical targeted CT. Application to contrast-enhanced vascular imaging

    Energy Technology Data Exchange (ETDEWEB)

    Gendron, David; Goussard, Yves; Hamelin, Benoit [Ecole Polytechnique de Montreal, Montreal, QC (Canada). Inst. de Genie Biomedical; Dussault, Jean-Pierre [Sherbrooke Univ., Sherbrooke, QC (Canada). Dept. d' Informatique; Beaudoin, Gilles; Cloutier, Guy; Chartrand-Lefebvre, Carl; Hadjadj, Sofiane; Soulez, Gilles [Montreal Univ., Hopital Notre-Dame, Montreal, QC (Canada). Centre de Recherche du Centre Hospitalier

    2011-07-01

    We present the implementation of an iterative reconstruction algorithm for 3D helical computed tomography. The main difficulties of helical CT reconstruction are the large memory footprint of the tools and data involved, as well as the very long runtime of iterative methods. The proposed solution hinges on the following three features: (1) a multiple-ray-driven projection operator with a parsimonious representation; (2) a targeted reconstruction framework that restricts the iterative reconstruction effort to a region of interest within the imaged volume; (3) the choice of a fast convergent solver for the nonlinear reconstruction problem. Results on clinical-size data show significant improvement in image quality over the default scanner reconstruction and an acceptable computation cost. (orig.)

  2. Web-based volume slicer for 3D electron-microscopy data from EMDB.

    Science.gov (United States)

    Salavert-Torres, José; Iudin, Andrii; Lagerstedt, Ingvar; Sanz-García, Eduardo; Kleywegt, Gerard J; Patwardhan, Ardan

    2016-05-01

    We describe the functionality and design of the Volume slicer - a web-based slice viewer for EMDB entries. This tool uniquely provides the facility to view slices from 3D EM reconstructions along the three orthogonal axes and to rapidly switch between them and navigate through the volume. We have employed multiple rounds of user-experience testing with members of the EM community to ensure that the interface is easy and intuitive to use and the information provided is relevant. The impetus to develop the Volume slicer has been calls from the EM community to provide web-based interactive visualisation of 2D slice data. This would be useful for quick initial checks of the quality of a reconstruction. Again in response to calls from the community, we plan to further develop the Volume slicer into a fully-fledged Volume browser that provides integrated visualisation of EMDB and PDB entries from the molecular to the cellular scale.

  3. 3-D Target Location from Stereoscopic SAR Images

    Energy Technology Data Exchange (ETDEWEB)

    DOERRY,ARMIN W.

    1999-10-01

    SAR range-Doppler images are inherently 2-dimensional. Targets with a height offset lay over onto offset range and azimuth locations. Just which image locations are laid upon depends on the imaging geometry, including depression angle, squint angle, and target bearing. This is the well known layover phenomenon. Images formed with different aperture geometries will exhibit different layover characteristics. These differences can be exploited to ascertain target height information, in a stereoscopic manner. Depending on the imaging geometries, height accuracy can be on the order of horizontal position accuracies, thereby rivaling the best IFSAR capabilities in fine resolution SAR images. All that is required for this to work are two distinct passes with suitably different geometries from any plain old SAR.
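
    Under an idealized flat-earth, broadside geometry in which an elevated scatterer lays over toward the radar by roughly h*tan(psi) in ground range (psi being the depression angle), the height follows from the apparent positions seen in two passes; a hypothetical sketch, not the author's formulation:

```python
import math

def height_from_layover(x_apparent_1, x_apparent_2, psi1_deg, psi2_deg):
    """Estimate target height from the layover shift observed in two SAR passes.
    Assumes apparent position x_i = x_true - h * tan(psi_i), i.e. shift toward the radar."""
    t1 = math.tan(math.radians(psi1_deg))
    t2 = math.tan(math.radians(psi2_deg))
    return (x_apparent_1 - x_apparent_2) / (t2 - t1)
```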

  4. Single minimum incision endoscopic radical nephrectomy for renal tumors with preoperative virtual navigation using 3D-CT volume-rendering

    Directory of Open Access Journals (Sweden)

    Shioyama Yasukazu

    2010-04-01

    Full Text Available Abstract Background Single minimum incision endoscopic surgery (MIES) involves the use of a flexible high-definition laparoscope to facilitate open surgery. We reviewed our method of radical nephrectomy for renal tumors, which is single MIES combined with preoperative virtual surgery employing three-dimensional CT images reconstructed by the volume rendering method (3D-CT images) in order to safely and appropriately approach the renal hilar vessels. We also assessed the usefulness of 3D-CT images. Methods Radical nephrectomy was done by single MIES via the translumbar approach in 80 consecutive patients. We performed the initial 20 MIES nephrectomies without preoperative 3D-CT images and the subsequent 60 MIES nephrectomies with preoperative 3D-CT images for evaluation of the renal hilar vessels and the relation of each tumor to the surrounding structures. On the basis of the 3D information, preoperative virtual surgery was performed with a computer. Results Single MIES nephrectomy was successful in all patients. In the 60 patients who underwent 3D-CT, the number of renal arteries and veins corresponded exactly with the preoperative 3D-CT data (100% sensitivity and 100% specificity). These 60 nephrectomies were completed with a shorter operating time and smaller blood loss than the initial 20 nephrectomies. Conclusions Single MIES radical nephrectomy combined with 3D-CT and virtual surgery achieved a shorter operating time and less blood loss, possibly due to safer and easier handling of the renal hilar vessels.

  5. 3D Image Sensor based on Parallax Motion

    Directory of Open Access Journals (Sweden)

    Barna Reskó

    2007-12-01

    Full Text Available For humans and visual animals, vision is the primary and most sophisticated perceptual modality for gathering information about the surrounding world. Depth perception is the part of vision that allows the distance to an object to be determined accurately, which makes it an important visual task. Humans have two eyes with overlapping visual fields that enable stereo vision and thus space perception. Some birds, however, do not have overlapping visual fields and compensate for this lack by moving their heads, which in turn makes space perception possible using motion parallax as a visual cue. This paper presents a solution using an opto-mechanical filter that was inspired by the way birds observe their environment. The filtering is done using two different approaches: using motion blur during motion parallax, and using an optical flow algorithm. The two methods have different advantages and drawbacks, which are discussed in the paper. The proposed system can be used in robotics for 3D space perception.
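
    The optical-flow route can be illustrated with a pinhole-camera sketch in which depth is proportional to the camera translation divided by the observed image motion; the Farneback flow estimator is used here only as a generic example, and the focal length and baseline are assumed known (this is not the authors' implementation):

```python
import cv2
import numpy as np

def depth_from_motion_parallax(img_prev, img_next, focal_px, baseline_m):
    """img_prev, img_next: grayscale frames taken before/after a small lateral camera shift."""
    flow = cv2.calcOpticalFlowFarneback(img_prev, img_next, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    disparity_px = np.abs(flow[..., 0])                  # horizontal parallax in pixels
    with np.errstate(divide='ignore'):
        depth_m = focal_px * baseline_m / disparity_px   # Z = f * T / d (pinhole model)
    return depth_m
```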

  6. The Mathematical Foundations of 3D Compton Scatter Emission Imaging

    Directory of Open Access Journals (Sweden)

    T. T. Truong

    2007-01-01

    Full Text Available The mathematical principles of tomographic imaging using detected (unscattered) X- or gamma-rays are based on the two-dimensional Radon transform and many of its variants. In this paper, we show that two new generalizations, called conical Radon transforms, are related to three-dimensional imaging processes based on detected Compton scattered radiation. The first class of conical Radon transform has been introduced recently to support imaging principles of collimated detector systems. The second class is new and is closely related to the Compton camera imaging principles and invertible under special conditions. As they are poised to play a major role in future designs of biomedical imaging systems, we present an account of their most important properties which may be relevant for active researchers in the field.
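
    For orientation, the classical 2-D Radon transform referred to above, together with a schematic form of a conical Radon transform as a surface integral over a cone with apex x0, axis omega and half-opening angle beta, can be written as follows; the precise definitions of the two transform classes differ and are given in the paper:

```latex
% Classical 2-D Radon transform over lines x . theta = s
\mathcal{R}f(\boldsymbol{\theta}, s) = \int_{\mathbb{R}^2} f(\mathbf{x})\,
    \delta\!\left(s - \mathbf{x}\cdot\boldsymbol{\theta}\right)\, d\mathbf{x}

% Schematic conical Radon transform: surface integral of f over the cone
% K(x_0, omega, beta), with beta fixed by the Compton scattering angle
\mathcal{C}f(\mathbf{x}_0, \boldsymbol{\omega}, \beta) =
    \int_{K(\mathbf{x}_0,\,\boldsymbol{\omega},\,\beta)} f(\mathbf{x})\, d\sigma(\mathbf{x})
```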

  7. GPU accelerated generation of digitally reconstructed radiographs for 2-D/3-D image registration.

    Science.gov (United States)

    Dorgham, Osama M; Laycock, Stephen D; Fisher, Mark H

    2012-09-01

    Recent advances in programming languages for graphics processing units (GPUs) provide developers with a convenient way of implementing applications which can be executed on the CPU and GPU interchangeably. GPUs are becoming relatively cheap, powerful, and widely available hardware components, which can be used to perform intensive calculations. The last decade of hardware performance developments shows that GPU-based computation is progressing significantly faster than CPU-based computation, particularly if one considers the execution of highly parallelisable algorithms. Future predictions illustrate that this trend is likely to continue. In this paper, we introduce a way of accelerating 2-D/3-D image registration by developing a hybrid system which executes on the CPU and utilizes the GPU for parallelizing the generation of digitally reconstructed radiographs (DRRs). Based on the advancements of the GPU over the CPU, it is timely to exploit the benefits of many-core GPU technology by developing algorithms for DRR generation. Although some previous work has investigated the rendering of DRRs using the GPU, this paper investigates approximations which reduce the computational overhead while still maintaining a quality consistent with that needed for 2-D/3-D registration with sufficient accuracy to be clinically acceptable in certain applications of radiation oncology. Furthermore, by comparing implementations of 2-D/3-D registration on the CPU and GPU, we investigate current performance and propose an optimal framework for PC implementations addressing the rigid registration problem. Using this framework, we are able to render DRR images from a 256×256×133 CT volume in ~24 ms using an NVidia GeForce 8800 GTX and in ~2 ms using NVidia GeForce GTX 580. In addition to applications requiring fast automatic patient setup, these levels of performance suggest image-guided radiation therapy at video frame rates is technically feasible using relatively low cost PC
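
    The core of DRR generation is a line integral of attenuation through the CT volume followed by Beer-Lambert attenuation; below is a deliberately simplified parallel-beam numpy sketch (real 2-D/3-D registration uses perspective ray casting on the GPU, and the HU-to-attenuation conversion here is only approximate):

```python
import numpy as np

def simple_parallel_drr(ct_hu, axis=0, mu_water_per_mm=0.02, spacing_mm=1.0):
    """Sum attenuation along one volume axis and apply Beer-Lambert attenuation."""
    mu = mu_water_per_mm * (1.0 + ct_hu / 1000.0)       # HU -> linear attenuation per mm (approx.)
    mu = np.clip(mu, 0.0, None)
    line_integrals = mu.sum(axis=axis) * spacing_mm     # dimensionless path integral
    return np.exp(-line_integrals)                       # transmitted intensity image
```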

  8. Testing & Validating: 3D Seismic Travel Time Tomography (Detailed Shallow Subsurface Imaging)

    Science.gov (United States)

    Marti, David; Marzan, Ignacio; Alvarez-Marron, Joaquina; Carbonell, Ramon

    2016-04-01

    A detailed, full three-dimensional P-wave seismic velocity model was constrained by a high-resolution seismic tomography experiment. A regular and dense grid of shots and receivers was used to image a 500x500x200 m volume of the shallow subsurface. Ten GEODEs, providing a 240-channel recording system, and a 250 kg weight drop were used for the acquisition. The recording geometry consisted of a 10x20 m geophone grid spacing and a 20x20 m staggered source spacing, for a total of 1200 receivers and 676 source points. The study area is located within the Iberian Meseta, in Villar de Cañas (Cuenca, Spain). The lithological/geological target is a Neogene sedimentary sequence formed, from bottom to top, by a transition from gypsum to siltstones. The main objectives were to resolve the underground structure (contacts/discontinuities) and to constrain the 3D geometry of the lithology (possible cavities, faults/fractures). These targets were achieved by mapping the 3D distribution of the physical properties (P-wave velocity). The regularly spaced, dense acquisition grid forced the survey to be acquired in different stages and under a variety of weather conditions, so careful quality control was required. More than half a million first arrivals were inverted to provide a 3D Vp velocity model that reached depths of 120 m in the areas with the highest ray coverage. An extended borehole campaign, which included borehole geophysical measurements in some wells, provided unique tight constraints on the lithology and a validation scheme for the tomographic results. The final image reveals a laterally variable structure consisting of four different lithological units. In this methodological validation test, travel-time tomography shows a high capacity for imaging in detail the lithological contrasts of complex structures located at very shallow depths.

  9. Review of three-dimensional (3D) surface imaging for oncoplastic, reconstructive and aesthetic breast surgery.

    Science.gov (United States)

    O'Connell, Rachel L; Stevens, Roger J G; Harris, Paul A; Rusby, Jennifer E

    2015-08-01

    Three-dimensional surface imaging (3D-SI) is being marketed as a tool in aesthetic breast surgery. It has recently also been studied in the objective evaluation of cosmetic outcome of oncological procedures. The aim of this review is to summarise the use of 3D-SI in oncoplastic, reconstructive and aesthetic breast surgery. An extensive literature review was undertaken to identify published studies. Two reviewers independently screened all abstracts and selected relevant articles using specific inclusion criteria. Seventy-two articles relating to 3D-SI for breast surgery were identified. These covered endpoints such as image acquisition, calculations and data obtainable, comparison of 3D and 2D imaging and clinical research applications of 3D-SI. The literature provides a favourable view of 3D-SI. However, evidence of its superiority over current methods of clinical decision making, surgical planning, communication and evaluation of outcome is required before it can be accepted into mainstream practice.

  10. Gothic Churches in Paris St Gervais et St Protais Image Matching 3D Reconstruction to Understand the Vaults System Geometry

    Directory of Open Access Journals (Sweden)

    M. Capone

    2015-02-01

    benefits and the troubles. From a methodological point of view this is our workflow: - theoretical study about geometrical configuration of rib vault systems; - 3D model based on theoretical hypothesis about geometric definition of the vaults' form; - 3D model based on image matching 3D reconstruction methods; - comparison between 3D theoretical model and 3D model based on image matching;

  11. Clinical Study of 3D Imaging and 3D Printing Technique for Patient-Specific Instrumentation in Total Knee Arthroplasty.

    Science.gov (United States)

    Qiu, Bing; Liu, Fei; Tang, Bensen; Deng, Biyong; Liu, Fang; Zhu, Weimin; Zhen, Dong; Xue, Mingyuan; Zhang, Mingjiao

    2017-01-25

    Patient-specific instrumentation (PSI) was designed to improve the accuracy of preoperative planning and postoperative prosthesis positioning in total knee arthroplasty (TKA). However, better understanding needs to be achieved due to the subtle nature of the PSI systems. In this study, a 3D printing technique based on the image data of computed tomography (CT) has been utilized for optimal control of the surgical parameters. Two groups of TKA cases were randomly selected as a PSI group and a control group, with no significant difference in age and sex (p > 0.05). The PSI group was treated with 3D-printed cutting guides whereas the control group was treated with conventional instrumentation (CI). By evaluating the proximal osteotomy amount, distal osteotomy amount, valgus angle, external rotation angle, and tibial posterior slope angle of the patients, it was found that the preoperative quantitative assessment and intraoperative changes can be controlled with PSI, whereas CI relies on experience. In terms of postoperative parameters, such as the hip-knee-ankle (HKA), frontal femoral component (FFC), frontal tibial component (FTC), and lateral tibial component (LTC) angles, there was a significant improvement in achieving the desired implant position with PSI implantation compared against the control method, which indicates potential for optimal HKA, FFC, and FTC angles.

  12. Evaluation of stereoscopic 3D displays for image analysis tasks

    Science.gov (United States)

    Peinsipp-Byma, E.; Rehfeld, N.; Eck, R.

    2009-02-01

    In many application domains the analysis of aerial or satellite images plays an important role. The use of stereoscopic display technologies can enhance the image analyst's ability to detect or to identify certain objects of interest, which results in a higher performance. Changing image acquisition from analog to digital techniques entailed the change of stereoscopic visualisation techniques. Recently different kinds of digital stereoscopic display techniques with affordable prices have appeared on the market. At Fraunhofer IITB usability tests were carried out to find out (1) with which kind of these commercially available stereoscopic display techniques image analysts achieve the best performance and (2) which of these techniques achieve a high acceptance. First, image analysts were interviewed to define typical image analysis tasks which were expected to be solved with a higher performance using stereoscopic display techniques. Next, observer experiments were carried out whereby image analysts had to solve defined tasks with different visualization techniques. Based on the experimental results (performance parameters and qualitative subjective evaluations of the used display techniques) two of the examined stereoscopic display technologies were found to be very good and appropriate.

  13. Study of filled dolines by using 3D stereo image processing and electrical resistivity imaging

    Directory of Open Access Journals (Sweden)

    Mateja Breg Valjavec

    2014-01-01

    Full Text Available This article deals with doline degradation due to uncontrolled waste dumping in the past in the Logatec Polje in Slovenia. It introduces a concept for determining the 3D geometric characteristics (shape, depth, radius, area, and volume) of formerly concave landforms (i.e., recently filled dolines) by using a combination of two methods: (1) photogrammetric stereo processing of archival aerial photographs and (2) electrical resistivity imaging (ERI). To represent, visualize, and study the characteristics of the former surface morphology (i.e., the dolines before they were filled), a digital terrain model (DTM) for 1972 (DTM1972) was made using digital photogrammetry processing of five sequential archival aerial photographs (1972, © GURS). DTM1972 was visually and quantitatively compared with the DTM5 of the recent surface morphology (DTM5, © GURS, 2006) in order to define areas of manmade terrain differences. In general, a circular area with a higher terrain difference is an indicator of a filled doline. The calculated terrain differences also indicate the thickness of buried waste material. Three case-study dolines were selected for 3D geometric analysis and tested in the field using ERI. ERI was used to determine the genetic type of the original doline, to confirm that the buried material in the doline is actually waste, and to ascertain opportunities for further study of water pollution due to waste leakage. Based on a comparison among the ERI sections obtained using various electrode arrays, it was concluded that the basins are indeed past concave landforms (i.e., dolines) filled with mixed waste material having the lowest resistivity value (below 100 ohm-m), which differs measurably from the surrounding natural materials. The resistivity of hard stacked limestone is higher (above 1,000 ohm-m) than the resistivity of cracked carbonate rocks with cracks filled with loamy clay sediments, while in loamy alluvial sediment the resistivity falls below 150 ohm-m.

  14. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes.

    Science.gov (United States)

    Zhong, Zichun; Guo, Xiaohu; Cai, Yiqi; Yang, Yin; Wang, Jing; Jia, Xun; Mao, Weihua

    2016-01-01

    By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes.

  15. 3D-2D Deformable Image Registration Using Feature-Based Nonuniform Meshes

    Directory of Open Access Journals (Sweden)

    Zichun Zhong

    2016-01-01

    Full Text Available By using prior information of planning CT images and feature-based nonuniform meshes, this paper demonstrates that volumetric images can be efficiently registered with a very small portion of 2D projection images of a Cone-Beam Computed Tomography (CBCT) scan. After a density field is computed based on the extracted feature edges from planning CT images, nonuniform tetrahedral meshes will be automatically generated to better characterize the image features according to the density field; that is, finer meshes are generated for features. The displacement vector fields (DVFs) are specified at the mesh vertices to drive the deformation of original CT images. Digitally reconstructed radiographs (DRRs) of the deformed anatomy are generated and compared with corresponding 2D projections. DVFs are optimized to minimize the objective function including differences between DRRs and projections and the regularity. To further accelerate the above 3D-2D registration, a procedure to obtain good initial deformations by deforming the volume surface to match 2D body boundary on projections has been developed. This complete method is evaluated quantitatively by using several digital phantoms and data from head and neck cancer patients. The feature-based nonuniform meshing method leads to better results than either uniform orthogonal grid or uniform tetrahedral meshes.

  16. Building Extraction from DSM Acquired by Airborne 3D Image

    Institute of Scientific and Technical Information of China (English)

    YOU Hongjian; LI Shukai

    2003-01-01

    Segmentation and edge regulation are studied in depth in this paper to extract buildings from DSM data. Building segmentation is the first step in extracting buildings, and a new segmentation method, adaptive iterative segmentation considering the ratio mean square, is proposed to extract building contours effectively. Each sub-image (such as 50 x 50 pixels) of the image is processed in sequence: the average gray level and its ratio mean square are calculated first, and then the threshold of the sub-image is selected using iterative threshold segmentation. The current pixel is segmented according to the threshold, the average gray level and the ratio mean square of the sub-image. The edge points of a building are grouped according to the azimuth of neighbouring points, and then the optimal azimuth of the points that belong to the same group can be calculated using line interpolation.
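
    A generic iterative (Ridler-Calvard / ISODATA style) threshold selection for one sub-image is sketched below; the authors' variant additionally weighs the ratio mean square of the block, which is not reproduced here:

```python
import numpy as np

def iterative_threshold(block, tol=0.5, max_iter=100):
    """Split a sub-image into two classes and refine the threshold iteratively."""
    t = float(block.mean())
    for _ in range(max_iter):
        low, high = block[block <= t], block[block > t]
        if low.size == 0 or high.size == 0:
            break
        t_new = 0.5 * (low.mean() + high.mean())
        if abs(t_new - t) < tol:
            return t_new
        t = t_new
    return t
```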

  17. Accurately measuring volume of soil samples using low cost Kinect 3D scanner

    Science.gov (United States)

    van der Sterre, Boy-Santhos; Hut, Rolf; van de Giesen, Nick

    2013-04-01

    The 3D scanner of the Kinect game controller can be used to increase the accuracy and efficiency of determining in situ soil moisture content. Soil moisture is one of the principal hydrological variables in both the water and energy interactions between soil and atmosphere. Current in situ measurements of soil moisture either rely on indirect measurements (of electromagnetic constants or heat capacity) or on physically taking a sample and weighing it in a lab. The bottleneck in accurately retrieving soil moisture using samples is determining the volume of the sample. Currently this is mostly done by the very time-consuming "sand cone method", in which the volume where the sample used to sit is filled with sand. We show that the 3D scanner that is part of the Kinect game controller extension (around 150) can be used to make 3D scans before and after taking the sample. The accuracy of this method is tested by scanning forms of known volume. This method is less time consuming and less error-prone than using a sand cone.
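
    The volume estimate itself reduces to integrating the depth difference between the before and after scans over the pixel footprint; a minimal sketch with hypothetical names (registration of the two scans and conversion of Kinect depth to millimetres are assumed to have been done already):

```python
import numpy as np

def sample_volume_cm3(depth_before_mm, depth_after_mm, pixel_area_mm2):
    """Depth maps of the same patch of ground before and after taking the soil sample."""
    dz = depth_after_mm - depth_before_mm        # extra depth where soil was removed
    dz = np.clip(dz, 0.0, None)                  # suppress negative noise
    return float(dz.sum() * pixel_area_mm2) / 1000.0   # mm^3 -> cm^3
```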

  18. Quality assessment of stereoscopic 3D image compression by binocular integration behaviors.

    Science.gov (United States)

    Lin, Yu-Hsun; Wu, Ja-Ling

    2014-04-01

    The objective approaches of 3D image quality assessment play a key role in the development of compression standards and various 3D multimedia applications. The quality assessment of 3D images faces more new challenges, such as asymmetric stereo compression, depth perception, and virtual view synthesis, than its 2D counterparts. In addition, the widely used 2D image quality metrics (e.g., PSNR and SSIM) cannot be directly applied to deal with these newly introduced challenges. This statement can be verified by the low correlation between the computed objective measures and the subjectively measured mean opinion scores (MOSs), when 3D images are the tested targets. In order to meet these newly introduced challenges, in this paper, besides traditional 2D image metrics, the binocular integration behaviors, namely the binocular combination and the binocular frequency integration, are utilized as the bases for measuring the quality of stereoscopic 3D images. The effectiveness of the proposed metrics is verified by conducting subjective evaluations on publicly available stereoscopic image databases. Experimental results show that significant consistency could be reached between the measured MOS and the proposed metrics, in which the correlation coefficient between them can go up to 0.88. Furthermore, we found that the proposed metrics can also address the quality assessment of the synthesized color-plus-depth 3D images well. Therefore, it is our belief that the binocular integration behaviors are important factors in the development of objective quality assessment for 3D images.

  19. Imaging of discontinuities in nonlinear 3-D seismic inversion

    Energy Technology Data Exchange (ETDEWEB)

    Carrion, P.M.; Cerveny, V. (PPPG/UFBA, Salvador (Brazil))

    1990-09-01

    The authors present a nonlinear approach for the reconstruction of discontinuities in a geological environment (the earth's crust, say). The advantage of the proposed method is that it is not limited to the Born approximation (small angles of propagation and weak scatterers). One can expect significantly better images, since larger apertures including wide-angle reflection arrivals can be incorporated into the imaging operator. In this paper, they treat only compressional body waves; shear and surface waves are considered as noise.

  20. Real-time auto-stereoscopic visualization of 3D medical images

    Science.gov (United States)

    Portoni, Luisa; Patak, Alexandre; Noirard, Pierre; Grossetie, Jean-Claude; van Berkel, Cees

    2000-04-01

    The work described here regards multi-viewer auto-stereoscopic visualization of 3D models of anatomical structures and organs of the human body. High-quality 3D models of more than 1600 anatomical structures have been reconstructed using the Visualization Toolkit, a freely available C++ class library for 3D graphics and visualization. The 2D images used for 3D reconstruction come from the Visible Human Data Set. Auto-stereoscopic 3D image visualization is obtained using a prototype monitor developed at Philips Research Labs, UK. This special multiview 3D-LCD screen has been connected directly to an SGI workstation, where 3D reconstruction and medical imaging applications are executed. Dedicated software has been developed to implement the multiview capability. A number of static or animated simultaneous views of the same object can be seen on the 3D-LCD screen by several observers, who have a real 3D perception of the visualized scene without the use of extra media such as dedicated glasses or head-mounted displays. The developed software applications allow real-time interaction with the visualized 3D models, and didactic animations and movies have been realized as well.

  1. Contactless operating table control based on 3D image processing.

    Science.gov (United States)

    Schröder, Stephan; Loftfield, Nina; Langmann, Benjamin; Frank, Klaus; Reithmeier, Eduard

    2014-01-01

    Interaction with mobile consumer devices leads to a higher acceptance and affinity of persons to natural user interfaces and perceptional interaction possibilities. New interaction modalities become accessible and are capable of improving human machine interaction even in complex and high-risk environments such as the operating room. Here, manifold medical disciplines cause a great variety of procedures and thus staff and equipment. One universal challenge is to meet the sterility requirements, for which common contact-afflicted remote interfaces always pose a potential risk causing a hazard for the process. The proposed operating table control system overcomes this process risk and thus improves the system usability significantly. The 3D sensor system, the Microsoft Kinect, captures the motion of the user, allowing a touchless manipulation of an operating table. Three gestures enable the user to select, activate and manipulate all segments of the motorised system in a safe and intuitive way. The gesture dynamics are synchronised with the table movement. In a usability study, 15 participants evaluated the system with a System Usability Scale score (Brooke) of 79. This indicates a high potential for implementation and acceptance in interventional environments. In the near future, even processes with higher risks could be controlled with the proposed interface, while interfaces become safer and more direct.

  2. Image quality of a cone beam O-arm 3D imaging system

    Science.gov (United States)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery and is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. High resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low and high contrast resolution, and noise power spectrum. MTF was measured using the point spread function. The results show that the O-arm image is uniform but with a noise pattern which cannot be removed by simply increasing the mAs. The high contrast resolution of the O-arm system was approximately 9 lp/cm. The system has a 10% MTF at 0.45 mm. The low-contrast resolution could not be determined due to the noise pattern. For surgery where the location of a structure is emphasized over a survey of all image details, the image quality of the O-arm is well accepted clinically.

  3. 3D Kidney Segmentation from Abdominal Images Using Spatial-Appearance Models

    Science.gov (United States)

    Khalifa, Fahmi; Soliman, Ahmed; Gimel'farb, Georgy

    2017-01-01

    Kidney segmentation is an essential step in developing any noninvasive computer-assisted diagnostic system for renal function assessment. This paper introduces an automated framework for 3D kidney segmentation from dynamic computed tomography (CT) images that integrates discriminative features from the current and prior CT appearances into a random forest classification approach. To account for CT images' inhomogeneities, we employ discriminative features that are extracted from a higher-order spatial model and an adaptive shape model in addition to the first-order CT appearance. To model the interactions between CT data voxels, we employed a higher-order spatial model, which adds the triple and quad clique families to the traditional pairwise clique family. The kidney shape prior model is built using a set of training CT data and is updated during segmentation using not only region labels but also voxels' appearances in neighboring spatial voxel locations. Our framework's performance has been evaluated on in vivo dynamic CT data collected from 20 subjects, comprising multiple 3D scans acquired before and after contrast medium administration. Quantitative evaluation between manually and automatically segmented kidney contours using Dice similarity, percentage volume differences, and 95th-percentile bidirectional Hausdorff distances confirms the high accuracy of our approach.
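
    The evaluation metrics named above are standard; a short sketch of the Dice similarity coefficient and the percentage volume difference for binary kidney masks (not the authors' code):

```python
import numpy as np

def dice_coefficient(seg_a, seg_b):
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def percent_volume_difference(seg_auto, seg_manual):
    auto, manual = seg_auto.astype(bool), seg_manual.astype(bool)
    return 100.0 * abs(int(auto.sum()) - int(manual.sum())) / manual.sum()
```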

  4. Air-touch interaction system for integral imaging 3D display

    Science.gov (United States)

    Dong, Han Yuan; Xiang, Lee Ming; Lee, Byung Gook

    2016-07-01

    In this paper, we propose an air-touch interaction system for the tabletop-type integral imaging 3D display. This system consists of a real 3D image generation system based on the integral imaging technique and an interaction device using a real-time finger detection interface. In this system, we used multi-layer B-spline surface approximation to detect the fingertip and gesture easily, at heights of less than 10 cm above the screen, from the input hand image. The proposed system can be used as an effective human-computer interaction method for the tabletop-type 3D display.

  5. Radar Imaging of Spheres in 3D using MUSIC

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, D H; Berryman, J G

    2003-01-21

    We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for 1 and 2 spheres. The performance for the 3-sphere configurations is complicated by shadowing effects and the greater range of the 3rd sphere in case 2. Two of the three spheres are easily located by MUSIC but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replaced with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
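
    The imaging approach described (SVD of the multistatic response matrix, selection of a noise subspace, evaluation of the MUSIC functional on a voxel grid) can be sketched as follows; `steering_vector`, the background Green's function sampled on the array, and the 1% threshold are assumptions of this sketch rather than the authors' code:

```python
import numpy as np

def music_image(K, grid_points, steering_vector, sv_threshold=0.01):
    """K: measured array response matrix; grid_points: candidate target locations."""
    U, s, _ = np.linalg.svd(K)
    noise_basis = U[:, s < sv_threshold * s[0]]            # singular vectors treated as noise
    image = np.empty(len(grid_points))
    for i, r in enumerate(grid_points):
        g = steering_vector(r)
        g = g / np.linalg.norm(g)
        image[i] = 1.0 / (np.linalg.norm(noise_basis.conj().T @ g) ** 2 + 1e-12)
    return image
```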

  6. 3D-2D registration of cerebral angiograms: a method and evaluation on clinical images.

    Science.gov (United States)

    Mitrovic, Uroš; Špiclin, Žiga; Likar, Boštjan; Pernuš, Franjo

    2013-08-01

    Endovascular image-guided interventions (EIGI) involve navigation of a catheter through the vasculature followed by application of treatment at the site of anomaly using live 2D projection images for guidance. 3D images acquired prior to EIGI are used to quantify the vascular anomaly and plan the intervention. If fused with the information of live 2D images they can also facilitate navigation and treatment. For this purpose 3D-2D image registration is required. Although several 3D-2D registration methods for EIGI achieve registration accuracy below 1 mm, their clinical application is still limited by insufficient robustness or reliability. In this paper, we propose a 3D-2D registration method based on matching a 3D vasculature model to intensity gradients of live 2D images. To objectively validate 3D-2D registration methods, we acquired a clinical image database of 10 patients undergoing cerebral EIGI and established "gold standard" registrations by aligning fiducial markers in 3D and 2D images. The proposed method had mean registration accuracy below 0.65 mm, which was comparable to tested state-of-the-art methods, and execution time below 1 s. With the highest rate of successful registrations and the highest capture range the proposed method was the most robust and thus a good candidate for application in EIGI.

  7. An Image-Based Technique for 3D Building Reconstruction Using Multi-View UAV Images

    Directory of Open Access Journals (Sweden)

    F. Alidoost

    2015-12-01

    Full Text Available Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects among complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building based on visual assessment.

  8. 3D spectral imaging system for anterior chamber metrology

    Science.gov (United States)

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Frisken, Steven

    2015-03-01

    Accurate metrology of the anterior chamber of the eye is useful for a number of diagnostic and clinical applications. In particular, accurate corneal topography and corneal thickness data are desirable for fitting contact lenses, screening for diseases and monitoring corneal changes. Anterior OCT systems can be used to measure anterior chamber surfaces; however, accurate curvature measurements for single point scanning systems are known to be very sensitive to patient movement. To overcome this problem we have developed a parallel 3D spectral metrology system that captures simultaneous A-scans on a 2D lateral grid. This approach enables estimates of the elevation and curvature of anterior and posterior corneal surfaces that are robust to sample movement. Furthermore, multiple simultaneous surface measurements greatly improve the ability to register consecutive frames and enable aggregate measurements over a finer lateral grid. A key element of our approach has been to exploit standard low cost optical components including lenslet arrays and a 2D sensor to provide a path towards low cost implementation. We demonstrate first prototypes based on a 6 Mpixel sensor using a 250 μm pitch lenslet array with 300 sample beams to achieve an RMS elevation accuracy of 1 μm with 95 dB sensitivity and a 7.0 mm range. Initial tests on porcine eyes, model eyes and calibration spheres demonstrate the validity of the concept. With the next iteration of designs we expect to be able to achieve over 1000 simultaneous A-scans in excess of 75 frames per second.

  9. Development and comparison of projection and image space 3D nodule insertion techniques

    Science.gov (United States)

    Robins, Marthony; Solomon, Justin; Sahbaee, Pooyan; Samei, Ehsan

    2016-04-01

    This study aimed to develop and compare two methods of inserting computerized virtual lesions into CT datasets. 24 physical (synthetic) nodules of three sizes and four morphologies were inserted into an anthropomorphic chest phantom (LUNGMAN, KYOTO KAGAKU). The phantom was scanned (Somatom Definition Flash, Siemens Healthcare) with and without nodules present, and images were reconstructed with filtered back projection and iterative reconstruction (SAFIRE) at 0.6 mm slice thickness using a standard thoracic CT protocol at multiple dose settings. Virtual 3D CAD models based on the physical nodules were virtually inserted (accounting for the system MTF) into the nodule-free CT data using two techniques. These techniques include projection-based and image-based insertion. Nodule volumes were estimated using a commercial segmentation tool (iNtuition, TeraRecon, Inc.). Differences were tested using paired t-tests and R2 goodness of fit between the virtually and physically inserted nodules. Both insertion techniques resulted in nodule volumes very similar to the real nodules (<3% difference) and in most cases the differences were not statistically significant. Also, R2 values were above 0.97 for both insertion techniques. These data imply that these techniques can confidently be used as a means of inserting virtual nodules in CT datasets. These techniques can be instrumental in building hybrid CT datasets composed of patient images with virtually inserted nodules.

  10. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Ferin, Guillaume

    2012-01-01

    the cystic resolution, which expresses the ability to detect anechoic cysts in a uniform scattering media, at all depths except at Explososcan's focus point. Synthetic aperture reduced the cyst radius, R20dB, at 90mm depth by 48%. Synthetic aperture imaging was shown to reduce the number of transmit channels...

  11. High definition 3D imaging lidar system using CCD

    Science.gov (United States)

    Jo, Sungeun; Kong, Hong Jin; Bang, Hyochoong

    2016-10-01

    In this study we propose and demonstrate a novel technique for measuring distance with high definition three-dimensional imaging. To meet the stringent requirements of various missions, spatial resolution and range precision are important properties for flash LIDAR systems. The proposed LIDAR system employs a polarization modulator and a CCD. When a laser pulse is emitted from the laser, it triggers the polarization modulator. The laser pulse is scattered by the target and is reflected back to the LIDAR system while the polarization modulator is rotating. Its polarization state is a function of time. The laser-return pulse passes through the polarization modulator in a certain polarization state, and the polarization state is calculated using the intensities of the laser pulses measured by the CCD. Because the function of the time and the polarization state is already known, the polarization state can be converted to time-of-flight. By adopting a polarization modulator and a CCD and only measuring the energy of a laser pulse to obtain range, a high resolution three-dimensional image can be acquired by the proposed three-dimensional imaging LIDAR system. Since this system only measures the energy of the laser pulse, a high bandwidth detector and a high resolution TDC are not required for high range precision. The proposed method is expected to be an alternative method for many three-dimensional imaging LIDAR system applications that require high resolution.

  12. Intraoperative 3D Ultrasonography for Image-Guided Neurosurgery

    NARCIS (Netherlands)

    Letteboer, Marloes Maria Johanna

    2004-01-01

    Stereotactic neurosurgery has evolved dramatically in recent years from the original rigid frame-based systems to the current frameless image-guided systems, which allow greater flexibility while maintaining sufficient accuracy. As these systems continue to evolve, more applications are found.

  13. A 3-D fluorescence imaging system incorporating structured illumination technology

    Science.gov (United States)

    Antos, L.; Emord, P.; Luquette, B.; McGee, B.; Nguyen, D.; Phipps, A.; Phillips, D.; Helguera, M.

    2010-02-01

    A currently available 2-D high-resolution, optical molecular imaging system was modified by the addition of a structured illumination source, OptigridTM, to investigate the feasibility of providing depth resolution along the optical axis. The modification involved the insertion of the OptigridTM and a lens in the path between the light source and the image plane, as well as control and signal processing software. Projection of the OptigridTM onto the imaging surface at an angle was resolved by applying the Scheimpflug principle. The illumination system implements modulation of the light source and provides a framework for capturing depth-resolved images. The system is capable of in-focus projection of the OptigridTM at different spatial frequencies, and supports the use of different lenses. A calibration process was developed for the system to achieve consistent phase shifts of the OptigridTM. Post-processing extracted depth information through depth modulation analysis, using a phantom block with fluorescent sheets at different depths. An important aspect of this effort was that it was carried out by a multidisciplinary team of engineering and science students as part of a capstone senior design program. The disciplines represented are mechanical engineering, electrical engineering and imaging science. The project was sponsored by a financial grant from New York State with equipment support from two industrial concerns. The students were provided with a basic imaging concept and charged with developing, implementing, testing and validating a feasible proof-of-concept prototype system that was returned to the originator of the concept for further evaluation and characterization.
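
    As an analogue of the depth-modulation analysis mentioned above, the widely used three-phase structured-illumination sectioning formula (Neil et al.) recovers the in-focus, grid-modulated component from three images taken at grid phases of 0, 120 and 240 degrees; this is offered as a sketch of the idea rather than the team's exact processing:

```python
import numpy as np

def optically_sectioned(i_0, i_120, i_240):
    """Images acquired with the projected grid shifted by 0, 120 and 240 degrees of phase."""
    return np.sqrt((i_0 - i_120) ** 2 + (i_120 - i_240) ** 2 + (i_240 - i_0) ** 2)
```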

  14. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    Science.gov (United States)

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera, which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter. Hence, the camera is available to public users and convenient for accessing narrow areas. The acquired images consist of various sculptures and architectures in Wat-Pho, which is a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the workflow is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching, and point cloud processing. For the initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained with the MICMAC (open source) software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, .obj, etc. To obtain efficient 3D models, post-processing techniques such as noise reduction, surface simplification and reconstruction are required for the final results. The reconstructed 3D models can be provided for public access via websites, DVDs, and printed materials. The highly accurate 3D models can also be used as reference data for heritage objects that must be restored due to deterioration over a lifetime, natural disasters, etc.

  15. Quantitative analysis of vascular heterogeneity in breast lesions using contrast-enhanced 3-D harmonic and subharmonic ultrasound imaging.

    Science.gov (United States)

    Sridharan, Anush; Eisenbrey, John R; Machado, Priscilla; Ojeda-Fournier, Haydee; Wilkes, Annina; Sevrukov, Alexander; Mattrey, Robert F; Wallace, Kirk; Chalek, Carl L; Thomenius, Kai E; Forsberg, Flemming

    2015-03-01

    Ability to visualize breast lesion vascularity and quantify the vascular heterogeneity using contrast-enhanced 3-D harmonic (HI) and subharmonic (SHI) ultrasound imaging was investigated in a clinical population. Patients (n = 134) identified with breast lesions on mammography were scanned using power Doppler imaging, contrast-enhanced 3-D HI, and 3-D SHI on a modified Logiq 9 scanner (GE Healthcare). A region of interest corresponding to ultrasound contrast agent flow was identified in 4D View (GE Medical Systems) and mapped to raw slice data to generate a map of time-intensity curves for the lesion volume. Time points corresponding to baseline, peak intensity, and washout of ultrasound contrast agent were identified and used to generate and compare vascular heterogeneity plots for malignant and benign lesions. Vascularity was observed with power Doppler imaging in 84 lesions (63 benign and 21 malignant). The 3-D HI showed flow in 8 lesions (5 benign and 3 malignant), whereas 3-D SHI visualized flow in 68 lesions (49 benign and 19 malignant). Analysis of vascular heterogeneity in the 3-D SHI volumes found benign lesions having a significant difference in vascularity between central and peripheral sections (1.71 ± 0.96 vs. 1.13 ± 0.79 dB, p < 0.001, respectively), whereas malignant lesions showed no difference (1.66 ± 1.39 vs. 1.24 ± 1.14 dB, p = 0.24), indicative of more vascular coverage. These preliminary results suggest quantitative evaluation of vascular heterogeneity in breast lesions using contrast-enhanced 3-D SHI is feasible and able to detect variations in vascularity between central and peripheral sections for benign and malignant lesions.

  16. Gradient-echo 3D imaging of Rb polarization in fiber-coupled atomic magnetometer.

    Science.gov (United States)

    Savukov, I

    2015-07-01

    The analogy between atomic and nuclear spins is exploited to implement 3D imaging of polarization inside the cell of an atomic magnetometer. A resolution of 0.8 mm × 1.2 mm × 1.4 mm has been demonstrated with the gradient-echo imaging method. The imaging can be used in many applications. One such application is the evaluation of the active volume of an atomic magnetometer for sensitivity analysis and optimization. It has been found that imaging resolution is limited by de-phasing from spin-exchange collisions and diffusion in the presence of gradients, and for the given magnetometer operational parameters the imaging sequence has been optimized. Diffusion decay of the signal in the presence of gradients has been modeled numerically and analytically, and the analytical results, which agreed with the numerical simulations, have been used to fit the spin-echo gradient measurements to extract the diffusion coefficient. The diffusion coefficient was found to be in agreement with previous measurements.

  17. Synthesis of image sequences for Korean sign language using 3D shape model

    Science.gov (United States)

    Hong, Mun-Ho; Choi, Chang-Seok; Kim, Chang-Seok; Jeon, Joon-Hyeon

    1995-05-01

    This paper proposes a method for offering information and realizing communication for the deaf-mute. The deaf-mute communicates with other people by means of sign language, but most people are unfamiliar with it. This method makes it possible to convert text data into the corresponding image sequences for Korean sign language (KSL). A general 3D shape model of the upper body is used to generate the 3D motions of KSL. It is necessary to construct the general 3D shape model considering the anatomical structure of the human body. To obtain a personal 3D shape model, this general model is adjusted to the personal base images. Image synthesis for KSL consists of deforming a personal 3D shape model and texture-mapping the personal images onto the deformed model. The 3D motions for KSL comprise facial expressions and the 3D movements of the head, trunk, arms and hands, and are parameterized for easy deformation of the model. These motion parameters of the upper body are extracted from a skilled signer's motion for each KSL sign and are stored in a database. Editing the parameters according to the input text data generates the image sequences of the 3D motions.

  18. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    KAUST Repository

    Pan, Bing

    2014-07-01

    Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved by using three improvements. First, to eliminate the need of updating Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure the 3D IC-GN algorithm that converges accurately and rapidly and avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer accurate and complete initial guess of deformation for each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and the existing typical DVC algorithms are first analyzed quantitatively according to necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost. © 2014 Elsevier Ltd.

  19. FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves.

    Science.gov (United States)

    Kan, Yingzhi; Zhu, Yongfeng; Tang, Liang; Fu, Qiang; Pei, Hucheng

    2016-09-19

    In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) plane antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT based method is comparable in both efficiency and accuracy in the full sampled case and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications. Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements.

  20. FGG-NUFFT-Based Method for Near-Field 3-D Imaging Using Millimeter Waves

    Directory of Open Access Journals (Sweden)

    Yingzhi Kan

    2016-09-01

    Full Text Available In this paper, to deal with the concealed target detection problem, an accurate and efficient algorithm for near-field millimeter wave three-dimensional (3-D) imaging is proposed that uses a two-dimensional (2-D) plane antenna array. First, a two-dimensional fast Fourier transform (FFT) is performed on the scattered data along the antenna array plane. Then, a phase shift is performed to compensate for the spherical wave effect. Finally, fast Gaussian gridding based nonuniform FFT (FGG-NUFFT) combined with 2-D inverse FFT (IFFT) is performed on the nonuniform 3-D spatial spectrum in the frequency wavenumber domain to achieve 3-D imaging. The conventional method for near-field 3-D imaging uses Stolt interpolation to obtain uniform spatial spectrum samples and performs 3-D IFFT to reconstruct a 3-D image. Compared with the conventional method, our FGG-NUFFT based method is comparable in both efficiency and accuracy in the full sampled case and can obtain more accurate images with less clutter and fewer noisy artifacts in the down-sampled case, which are good properties for practical applications. Both simulation and experimental results demonstrate that the FGG-NUFFT-based near-field 3-D imaging algorithm can have better imaging performance than the conventional method for down-sampled measurements.

  1. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system

    CERN Document Server

    Baumann, Michael; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilized, since the prostate is mobile and due to the fact that probe movements are only constrained by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space a...

  2. How accurate are the fusion of cone-beam CT and 3-D stereophotographic images?

    Directory of Open Access Journals (Sweden)

    Yasas S N Jayaratne

    BACKGROUND: Cone-beam Computed Tomography (CBCT) and stereophotography are two of the latest imaging modalities available for three-dimensional (3-D) visualization of craniofacial structures. However, CBCT provides only limited information on surface texture. This can be overcome by combining the bone images derived from CBCT with 3-D photographs. The objectives of this study were (1) to evaluate the feasibility of integrating 3-D photos and CBCT images, (2) to assess the degree of error that may occur during the above processes, and (3) to identify facial regions that would be most appropriate for 3-D image registration. METHODOLOGY: CBCT scans and stereophotographic images from 29 patients were used for this study. Two 3-D images corresponding to the skin and bone were extracted from the CBCT data. The 3-D photo was superimposed on the CBCT skin image using relatively immobile areas of the face as a reference. 3-D colour maps were used to assess the accuracy of superimposition, where distance differences between the CBCT and 3-D photo were recorded as the signed average and the Root Mean Square (RMS) error. PRINCIPAL FINDINGS: The signed average and RMS of the distance differences between the registered surfaces were -0.018 (±0.129) mm and 0.739 (±0.239) mm respectively. The most errors were found in areas surrounding the lips and the eyes, while minimal errors were noted in the forehead, root of the nose and zygoma. CONCLUSIONS: CBCT and 3-D photographic data can be successfully fused with minimal errors. When compared to RMS, the signed average was found to under-represent the registration error. The virtual 3-D composite craniofacial models permit concurrent assessment of bone and soft tissues during diagnosis and treatment planning.

  3. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: sketch-based modeling, procedural grammar based modeling, and close range photogrammetry based modeling. The literature shows that, to date, there is no complete solution available to create a complete 3D city model by using images, and these image-based methods also have limitations. This paper gives a new approach towards image based virtual 3D city modeling by using close range photogrammetry. This approach is divided into three sections. First is the data acquisition process, second is 3D data processing, and third is the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required and suitable video image frames were selected for 3D processing. In the second section, based on close range photogrammetric principles and computer vision techniques, a 3D model of the area was created. In the third section, this 3D model was exported for adding and merging of other pieces of the larger area. Scaling and alignment of the 3D model were done. After applying texturing and rendering on this model, a final photo-realistic textured 3D model was created. This 3D model can be transferred into a walk-through model or into movie form. Most of the processing steps are automatic, so this method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for a city. Aerial photography is restricted in many countries

  4. DiAna, an ImageJ tool for object-based 3D co-localization and distance analysis

    OpenAIRE

    2016-01-01

    We present a new plugin for ImageJ called DiAna, for Distance Analysis, which comes with a user-friendly interface. DiAna proposes robust and accurate 3D segmentation for object extraction. The plugin performs automated object-based co-localization and distance analysis. DiAna offers an in-depth analysis of co-localization between objects and retrieves 3D measurements including co-localizing volumes and surfaces of contact. It also computes the distribution of distance...

  5. Anesthesiology training using 3D imaging and virtual reality

    Science.gov (United States)

    Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

    1996-04-01

    Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

  6. 3D super-resolution imaging by localization microscopy.

    Science.gov (United States)

    Magenau, Astrid; Gaus, Katharina

    2015-01-01

    Fluorescence microscopy is an important tool in all fields of biology to visualize structures and monitor dynamic processes and distributions. Contrary to conventional microscopy techniques such as confocal microscopy, which are limited by their spatial resolution, super-resolution techniques such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM) have made it possible to observe and quantify structure and processes on the single molecule level. Here, we describe a method to image and quantify the molecular distribution of membrane-associated proteins in two and three dimensions with nanometer resolution.

  7. 3D CARS image reconstruction and pattern recognition on SHG images

    Science.gov (United States)

    Medyukhina, Anna; Vogler, Nadine; Latka, Ines; Dietzek, Benjamin; Cicchi, Riccardo; Pavone, Francesco S.; Popp, Jürgen

    2012-06-01

    Nonlinear optical imaging techniques based e.g. on coherent anti-Stokes Raman scattering (CARS) or second-harmonic generation (SHG) show great potential for in-vivo investigations of tissue. While the microspectroscopic imaging tools are established, automated data evaluation, i.e. image pattern recognition and automated image classification of nonlinear optical images, still bears great possibilities for future developments towards an objective clinical diagnosis. This contribution details the capability of nonlinear microscopy for both 3D visualization of human tissues and automated discrimination between healthy and diseased patterns using ex-vivo human skin samples. By means of CARS image alignment we show how to obtain a quasi-3D model of a skin biopsy, which allows us to trace the tissue structure in different projections. Furthermore, the potential of automated pattern and organization recognition to distinguish between healthy and keloidal skin tissue is discussed. A first classification algorithm employs the intrinsic geometrical features of collagen, which can be efficiently visualized by SHG microscopy. The shape of the collagen pattern allows conclusions about the physiological state of the skin, as the typical wavy collagen structure of healthy skin is disturbed e.g. in keloid formation. Based on the different collagen patterns a quantitative score characterizing the collagen waviness - and hence reflecting the physiological state of the tissue - is obtained. Further, two additional scoring methods for collagen organization, respectively based on a statistical analysis of the mutual organization of fibers and on FFT, are presented.

  8. Combining Street View and Aerial Images to Create Photo-Realistic 3D City Models

    OpenAIRE

    Ivarsson, Caroline

    2014-01-01

    This thesis evaluates two different approaches of using panoramic street view images for creating more photo-realistic 3D city models compared to 3D city models based on only aerial images. The thesis work has been carried out at Blom Sweden AB with the use of their software and data. The main purpose of this thesis work has been to investigate if street view images can aid in creating more photo-realistic 3D city models on street level through an automatic or semi-automatic approach. Two di...

  9. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    OpenAIRE

    N. Soontranon; Srestasathiern, P.; Lawawirojwong, S.

    2015-01-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of the heritage objects from multi-view images. The images are acquired by using a DSLR camera which costs around $1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter than the 3D scanner. Hence, the camera is available to public users and convenient for accessing narrow areas. The acquired images consist of various sculptures and architec...

  10. Simulating receptive fields of human visual cortex for 3D image quality prediction.

    Science.gov (United States)

    Shao, Feng; Chen, Wanting; Lin, Wenchong; Jiang, Qiuping; Jiang, Gangyi

    2016-07-20

    Quality assessment of 3D images presents many challenges when attempting to gain a better understanding of the human visual system. In this paper, we propose a new 3D image quality prediction approach by simulating receptive fields (RFs) of the human visual cortex. To be more specific, we extract the RFs from a complete visual pathway, and calculate their similarity indices between the reference and distorted 3D images. The final quality score is obtained by determining their connections via support vector regression. Experimental results on three 3D image quality assessment databases demonstrate that, in comparison with the most relevant existing methods, the devised algorithm achieves high consistency with subjective assessment, especially for asymmetrically distorted stereoscopic images.
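
    The pooling step described above (mapping the per-RF similarity indices to a single quality score with support vector regression) can be sketched as follows with scikit-learn; the feature matrix, score vector and SVR hyperparameters are illustrative assumptions rather than the authors' settings.

        import numpy as np
        from sklearn.svm import SVR

        def train_quality_model(similarity_features, subjective_scores):
            """similarity_features: (n_images, n_RF_similarity_indices) array;
            subjective_scores: mean opinion scores used for supervision."""
            model = SVR(kernel="rbf", C=10.0, epsilon=0.1)
            model.fit(similarity_features, subjective_scores)
            return model

        # predicted = train_quality_model(X_train, y_train).predict(X_test)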

  11. Shape and deformation measurements of 3D objects using volume speckle field and phase retrieval

    DEFF Research Database (Denmark)

    Anand, A; Chhaniwal, VK; Almoro, Percival;

    2009-01-01

    Shape and deformation measurement of diffusely reflecting 3D objects are very important in many application areas, including quality control, nondestructive testing, and design. When rough objects are exposed to coherent beams, the scattered light produces speckle fields. A method to measure the shape and deformation of 3D objects from sequential intensity measurements of the volume speckle field and phase retrieval based on the angular-spectrum propagation technique is described here. The shape of a convex spherical surface was measured directly from the calculated phase map, and micrometer-sized deformation induced on a metal sheet was obtained upon subtraction of the phase corresponding to unloaded and loaded states. Results from computer simulations confirm the experiments. (C) 2009 Optical Society of America.

  12. 3D Finite Volume Simulation of Accretion Discs with Spiral Shocks

    CERN Document Server

    Makita, M; Makita, Makoto; Matsuda, Takuya

    1998-01-01

    We perform 2D and 3D numerical simulations of an accretion disc in a close binary system using the Simplified Flux vector Splitting (SFS) finite volume method. In our calculations, the gas is assumed to be ideal, and we calculate the cases with gamma=1.01, 1.05, 1.1 and 1.2. The mass ratio of the mass-losing star to the mass-accreting star is unity. Our results show that spiral shocks are formed on the accretion disc in all cases. In the 2D calculations we find that the smaller gamma is, the more tightly the spiral winds. We observe this trend in the 3D calculations as well, in a somewhat weaker sense.

  13. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    CERN Document Server

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...

  14. A framework for human spine imaging using a freehand 3D ultrasound system

    NARCIS (Netherlands)

    Purnama, Ketut E.; Wilkinson, Michael. H. F.; Veldhuizen, Albert G.; van Ooijen, Peter. M. A.; Lubbers, Jaap; Burgerhof, Johannes G. M.; Sardjono, Tri A.; Verkerke, Gijbertus J.

    2010-01-01

    The use of 3D ultrasound imaging to follow the progression of scoliosis, i.e., a 3D deformation of the spine, is described. Unlike other current examination modalities, in particular based on X-ray, its non-detrimental effect enables it to be used frequently to follow the progression of scoliosis wh

  15. Comparison of 2-D and 3-D estimates of placental volume in early pregnancy.

    Science.gov (United States)

    Aye, Christina Y L; Stevenson, Gordon N; Impey, Lawrence; Collins, Sally L

    2015-03-01

    Ultrasound estimation of placental volume (PlaV) between 11 and 13 wk has been proposed as part of a screening test for small-for-gestational-age babies. A semi-automated 3-D technique, validated against the gold standard of manual delineation, has been found at this stage of gestation to predict small-for-gestational-age at term. Recently, when used in the third trimester, an estimate obtained using a 2-D technique was found to correlate with placental weight at delivery. Given its greater simplicity, the 2-D technique might be more useful as part of an early screening test. We investigated if the two techniques produced similar results when used in the first trimester. The correlation between PlaV values calculated by the two different techniques was assessed in 139 first-trimester placentas. The agreement on PlaV and derived "standardized placental volume," a dimensionless index correcting for gestational age, was explored with the Mann-Whitney test and Bland-Altman plots. Placentas were categorized into five different shape subtypes, and a subgroup analysis was performed. Agreement was poor for both PlaV and standardized PlaV (p < 0.001 and p < 0.001), with the 2-D technique yielding larger estimates for both indices compared with the 3-D method. The mean difference in standardized PlaV values between the two methods was 0.007 (95% confidence interval: 0.006-0.009). The best agreement was found for regular rectangle-shaped placentas (p = 0.438 and p = 0.408). The poor correlation between the 2-D and 3-D techniques may result from the heterogeneity of placental morphology at this stage of gestation. In early gestation, the simpler 2-D estimates of PlaV do not correlate strongly with those obtained with the validated 3-D technique.
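
    The agreement analysis described above (a Mann-Whitney test plus Bland-Altman statistics on paired 2-D and 3-D volume estimates) can be sketched as follows; the variable names are illustrative and the 1.96-SD limits of agreement follow the usual Bland-Altman convention rather than the paper's exact procedure.

        import numpy as np
        from scipy.stats import mannwhitneyu

        def agreement_stats(vol_2d, vol_3d):
            """Paired placental volume estimates from the 2-D and 3-D techniques."""
            vol_2d, vol_3d = np.asarray(vol_2d), np.asarray(vol_3d)
            _, p_value = mannwhitneyu(vol_2d, vol_3d)
            diff = vol_2d - vol_3d
            bias = diff.mean()                      # mean difference (2-D minus 3-D)
            half_width = 1.96 * diff.std(ddof=1)    # half-width of 95% limits of agreement
            return p_value, bias, (bias - half_width, bias + half_width)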

  16. Visual grading of 2D and 3D functional MRI compared with image-based descriptive measures

    Energy Technology Data Exchange (ETDEWEB)

    Ragnehed, Mattias [Linkoeping University, Division of Radiological Sciences, Radiology, IMH, Linkoeping (Sweden); Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden); Linkoeping University, Department of Medical and Health Sciences, Division of Radiological Sciences/Radiology, Faculty of Health Sciences, Linkoeping (Sweden); Leinhard, Olof Dahlqvist; Pihlsgaard, Johan; Lundberg, Peter [Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden); Linkoeping University, Division of Radiological Sciences, Radiation Physics, IMH, Linkoeping (Sweden); Wirell, Staffan [Linkoeping University, Division of Radiological Sciences, Radiology, IMH, Linkoeping (Sweden); Linkoeping University Hospital, Department of Radiology, Linkoeping (Sweden); Soekjer, Hannibal; Faegerstam, Patrik [Linkoeping University Hospital, Department of Radiology, Linkoeping (Sweden); Jiang, Bo [Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden); Smedby, Oerjan; Engstroem, Maria [Linkoeping University, Division of Radiological Sciences, Radiology, IMH, Linkoeping (Sweden); Linkoeping University, Center for Medical Image Science and Visualization, CMIV, Linkoeping (Sweden)

    2010-03-15

    A prerequisite for successful clinical use of functional magnetic resonance imaging (fMRI) is the selection of an appropriate imaging sequence. The aim of this study was to compare 2D and 3D fMRI sequences using different image quality assessment methods. Descriptive image measures, such as activation volume and temporal signal-to-noise ratio (TSNR), were compared with results from visual grading characteristics (VGC) analysis of the fMRI results. Significant differences in activation volume and TSNR were not directly reflected by differences in VGC scores. The results suggest that better performance on descriptive image measures is not always an indicator of improved diagnostic quality of the fMRI results. In addition to descriptive image measures, it is important to include measures of diagnostic quality when comparing different fMRI data acquisition methods. (orig.)
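
    A minimal sketch of the temporal signal-to-noise ratio (TSNR) measure mentioned above, i.e. the voxel-wise mean of the fMRI time series divided by its temporal standard deviation; the array layout is an assumption.

        import numpy as np

        def tsnr_map(bold_4d):
            """bold_4d: fMRI data of shape (x, y, z, time)."""
            mean_img = bold_4d.mean(axis=-1)
            std_img = bold_4d.std(axis=-1, ddof=1)
            return np.where(std_img > 0, mean_img / std_img, 0.0)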

  17. Three dimensional (3d) transverse oscillation vector velocity ultrasound imaging

    DEFF Research Database (Denmark)

    2013-01-01

    An ultrasound imaging system (300) includes a transducer array (302) with a two- dimensional array of transducer elements configured to transmit an ultrasound signal and receive echoes, transmit circuitry (304) configured to control the transducer array to transmit the ultrasound signal so...... as to traverse a field of view, and receive circuitry (306) configured to receive a two dimensional set of echoes produced in response to the ultrasound signal traversing structure in the field of view, wherein the structure includes flowing structures such as flowing blood cells, organ cells etc. A beamformer...... (312) configured to beamform the echoes, and a velocity processor (314) configured to separately determine a depth velocity component, a transverse velocity component and an elevation velocity component, wherein the velocity components are determined based on the same transmitted ultrasound signal...

  18. 3D object-oriented image analysis in 3D geophysical modelling: Analysing the central part of the East African Rift System

    Science.gov (United States)

    Fadel, I.; van der Meijde, M.; Kerle, N.; Lauritsen, N.

    2015-03-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D interactive modelling environment IGMAS+, and their density contrast values were calculated using an object-based inversion technique to calculate the forward signal of the objects and compare it with the measured satellite gravity. Thus, a new object-based approach was implemented to interpret and extract the 3D subsurface objects from 3D geophysical data. We also introduce a new approach to constrain the interpretation of the satellite gravity measurements that can be applied using any 3D geophysical model.

  19. Integration of multi-modality imaging for accurate 3D reconstruction of human coronary arteries in vivo

    Science.gov (United States)

    Giannoglou, George D.; Chatzizisis, Yiannis S.; Sianos, George; Tsikaderis, Dimitrios; Matakos, Antonis; Koutkias, Vassilios; Diamantopoulos, Panagiotis; Maglaveras, Nicos; Parcharidis, George E.; Louridas, George E.

    2006-12-01

    In conventional intravascular ultrasound (IVUS)-based three-dimensional (3D) reconstruction of human coronary arteries, IVUS images are arranged linearly generating a straight vessel volume. However, with this approach real vessel curvature is neglected. To overcome this limitation an imaging method was developed based on integration of IVUS and biplane coronary angiography (BCA). In 17 coronary arteries from nine patients, IVUS and BCA were performed. From each angiographic projection, a single end-diastolic frame was selected and in each frame the IVUS catheter was interactively detected for the extraction of 3D catheter path. Ultrasound data was obtained with a sheath-based catheter and recorded on S-VHS videotape. S-VHS data was digitized and lumen and media-adventitia contours were semi-automatically detected in end-diastolic IVUS images. Each pair of contours was aligned perpendicularly to the catheter path and rotated in space by implementing an algorithm based on Frenet-Serret rules. Lumen and media-adventitia contours were interpolated through generation of intermediate contours creating a real 3D lumen and vessel volume, respectively. The absolute orientation of the reconstructed lumen was determined by back-projecting it onto both angiographic planes and comparing the projected lumen with the actual angiographic lumen. In conclusion, our method is capable of performing rapid and accurate 3D reconstruction of human coronary arteries in vivo. This technique can be utilized for reliable plaque morphometric, geometrical and hemodynamic analyses.
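
    The Frenet-Serret step described above, which places each IVUS contour perpendicular to the 3D catheter path, relies on a local tangent/normal/binormal frame along the path; a minimal sketch under assumed input shapes (not the authors' code) is:

        import numpy as np

        def frenet_frames(path_xyz):
            """path_xyz: (N, 3) points sampled along the reconstructed catheter path."""
            t = np.gradient(path_xyz, axis=0)
            t /= np.linalg.norm(t, axis=1, keepdims=True)             # unit tangents
            dt = np.gradient(t, axis=0)
            n = dt - (dt * t).sum(axis=1, keepdims=True) * t          # remove tangential part
            n /= np.maximum(np.linalg.norm(n, axis=1, keepdims=True), 1e-12)
            b = np.cross(t, n)                                        # binormals
            return t, n, b                                            # frame per path point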

  20. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    Science.gov (United States)

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in life sciences, because biology is a 3-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method imaging the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation, as well as finding colocalized m/z values, which will be reviewed here in detail. Furthermore, we explain why the integration and correlation of the MALDI imaging data with other imaging modalities allows one to enhance the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described generating a set of 3 different dimensional images representing the same anatomies. First, an in-vitro MRI measurement is performed which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements to be performed. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimension and are used for the spatial three-dimensional reconstruction of the object performed by image

  1. Evaluation by quantitative image analysis of anticancer drug activity on multicellular spheroids grown in 3D matrices

    Science.gov (United States)

    Gomes, Aurélie; Russo, Adrien; Vidal, Guillaume; Demange, Elise; Pannetier, Pauline; Souguir, Zied; Lagarde, Jean-Michel; Ducommun, Bernard; Lobjois, Valérie

    2016-01-01

    Pharmacological evaluation of anticancer drugs using 3D in vitro models provides invaluable information for predicting in vivo activity. Artificial matrices are currently available that scale up and increase the power of such 3D models. The aim of the present study was to propose an efficient and robust imaging and analysis pipeline to assess with quantitative parameters the efficacy of a particular cytotoxic drug. HCT116 colorectal adenocarcinoma tumor cell multispheres were grown in a 3D physiological hyaluronic acid matrix. 3D microscopy was performed with structured illumination, whereas image processing and feature extraction were performed with custom analysis tools. This procedure makes it possible to automatically detect spheres in a large volume of matrix in 96-well plates. It was used to evaluate drug efficacy in HCT116 spheres treated with different concentrations of topotecan, a DNA topoisomerase inhibitor. Following automatic detection and quantification, changes in cluster size distribution with a topotecan concentration-dependent increase of small clusters according to drug cytotoxicity were observed. Quantitative image analysis is thus an effective means to evaluate and quantify the cytotoxic and cytostatic activities of anticancer drugs on 3D multicellular models grown in a physiological matrix. PMID:28105152

  2. Automated detection, 3D segmentation and analysis of high resolution spine MR images using statistical shape models

    Science.gov (United States)

    Neubert, A.; Fripp, J.; Engstrom, C.; Schwarz, R.; Lauer, L.; Salvado, O.; Crozier, S.

    2012-12-01

    Recent advances in high resolution magnetic resonance (MR) imaging of the spine provide a basis for the automated assessment of intervertebral disc (IVD) and vertebral body (VB) anatomy. High resolution three-dimensional (3D) morphological information contained in these images may be useful for early detection and monitoring of common spine disorders, such as disc degeneration. This work proposes an automated approach to extract the 3D segmentations of lumbar and thoracic IVDs and VBs from MR images using statistical shape analysis and registration of grey level intensity profiles. The algorithm was validated on a dataset of volumetric scans of the thoracolumbar spine of asymptomatic volunteers obtained on a 3T scanner using the relatively new 3D T2-weighted SPACE pulse sequence. Manual segmentations and expert radiological findings of early signs of disc degeneration were used in the validation. There was good agreement between manual and automated segmentation of the IVD and VB volumes with the mean Dice scores of 0.89 ± 0.04 and 0.91 ± 0.02 and mean absolute surface distances of 0.55 ± 0.18 mm and 0.67 ± 0.17 mm respectively. The method compares favourably to existing 3D MR segmentation techniques for VBs. This is the first time IVDs have been automatically segmented from 3D volumetric scans and shape parameters obtained were used in preliminary analyses to accurately classify (100% sensitivity, 98.3% specificity) disc abnormalities associated with early degenerative changes.
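
    The Dice score used above to compare manual and automated IVD/VB segmentations is a simple overlap measure; a minimal sketch assuming binary voxel masks of identical shape:

        import numpy as np

        def dice_score(mask_a, mask_b):
            """Binary segmentation masks of identical shape."""
            a, b = mask_a.astype(bool), mask_b.astype(bool)
            intersection = np.logical_and(a, b).sum()
            return 2.0 * intersection / (a.sum() + b.sum())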

  3. Multi-Scale Characterization of the PEPCK-Cmus Mouse through 3D Cryo-Imaging

    Directory of Open Access Journals (Sweden)

    Debashish Roy

    2010-01-01

    We have developed, for the Case 3D Cryo-imaging system, a specialized, multiscale visualization scheme which provides color-rich volume rendering and multiplanar reformatting, enabling one to visualize an entire mouse and zoom in to organ, tissue, and microscopic scales. With this system, we have anatomically characterized, in 3D, from whole animal to tissue level, a transgenic mouse and compared it with its control. The transgenic mouse overexpresses the cytosolic form of phosphoenolpyruvate carboxykinase (PEPCK-C) in its skeletal muscle and is capable of greatly enhanced physical endurance and has a longer life-span and reproductive life as compared to control animals. We semiautomatically analyzed selected organs such as kidney, heart, adrenal gland, spleen, and ovaries and found a comparatively enlarged heart, much less visceral, subcutaneous, and pericardial adipose tissue, and a higher tibia-to-femur ratio in the transgenic animal. Microscopically, individual skeletal muscle fibers, fine mesenteric blood vessels, and intestinal villi, among others, were clearly seen.

  4. A 3D assessment tool for accurate volume measurement for monitoring the evolution of cutaneous leishmaniasis wounds.

    Science.gov (United States)

    Zvietcovich, Fernando; Castañeda, Benjamin; Valencia, Braulio; Llanos-Cuentas, Alejandro

    2012-01-01

    Clinical assessment and outcome metrics are serious weaknesses identified in the systematic reviews of cutaneous Leishmaniasis wounds. Methods with high accuracy and low variability are required to standardize study outcomes in clinical trials. This work presents a precise, complete and noncontact 3D assessment tool for monitoring the evolution of cutaneous Leishmaniasis (CL) wounds based on a 3D laser scanner and computer vision algorithms. A 3D mesh of the wound is obtained by a commercial 3D laser scanner. Then, a semi-automatic segmentation using active contours is performed to separate the ulcer from the healthy skin. Finally, metrics of volume, area, perimeter and depth are obtained from the mesh. Traditional manual measurements are obtained as a gold standard. Experiments applied to phantoms and real CL wounds suggest that the proposed 3D assessment tool provides higher accuracy than the traditional measurements and yields high-accuracy metrics which deserve more formal prospective study.
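
    The mesh-based metrics described above (area and volume computed from the scanned wound surface) can be sketched as follows for a closed, consistently oriented triangle mesh; this divergence-theorem formulation is an illustrative assumption, not the authors' implementation.

        import numpy as np

        def mesh_area_volume(vertices, faces):
            """vertices: (V, 3) float array; faces: (F, 3) integer vertex indices."""
            v0, v1, v2 = (vertices[faces[:, i]] for i in range(3))
            cross = np.cross(v1 - v0, v2 - v0)
            area = 0.5 * np.linalg.norm(cross, axis=1).sum()
            # Signed tetrahedron volumes against the origin sum to the enclosed volume.
            volume = abs(np.einsum("ij,ij->i", v0, np.cross(v1, v2)).sum()) / 6.0
            return area, volume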

  5. D3D augmented reality imaging system: proof of concept in mammography

    Directory of Open Access Journals (Sweden)

    Douglas DB

    2016-08-01

    Purpose: The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods: A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results: The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion: The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. Keywords: augmented reality, 3D medical imaging, radiology, depth perception

  6. Cell type-specific adaptation of cellular and nuclear volume in micro-engineered 3D environments.

    Science.gov (United States)

    Greiner, Alexandra M; Klein, Franziska; Gudzenko, Tetyana; Richter, Benjamin; Striebel, Thomas; Wundari, Bayu G; Autenrieth, Tatjana J; Wegener, Martin; Franz, Clemens M; Bastmeyer, Martin

    2015-11-01

    Bio-functionalized three-dimensional (3D) structures fabricated by direct laser writing (DLW) are structurally and mechanically well-defined and ideal for systematically investigating the influence of three-dimensionality and substrate stiffness on cell behavior. Here, we show that different fibroblast-like and epithelial cell lines maintain normal proliferation rates and form functional cell-matrix contacts in DLW-fabricated 3D scaffolds of different mechanics and geometry. Furthermore, the molecular composition of cell-matrix contacts forming in these 3D micro-environments and under conventional 2D culture conditions is identical, based on the analysis of several marker proteins (paxillin, phospho-paxillin, phospho-focal adhesion kinase, vinculin, β1-integrin). However, fibroblast-like and epithelial cells differ markedly in the way they adapt their total cell and nuclear volumes in 3D environments. While fibroblast-like cell lines display significantly increased cell and nuclear volumes in 3D substrates compared to 2D substrates, epithelial cells retain similar cell and nuclear volumes in 2D and 3D environments. Despite differential cell volume regulation between fibroblasts and epithelial cells in 3D environments, the nucleus-to-cell (N/C) volume ratios remain constant for all cell types and culture conditions. Thus, changes in cell and nuclear volume during the transition from 2D to 3D environments are strongly cell type-dependent, but independent of scaffold stiffness, while cells maintain the N/C ratio regardless of culture conditions.

  7. Software for browsing sectioned images of a dog body and generating a 3D model.

    Science.gov (United States)

    Park, Jin Seo; Jung, Yong Wook

    2016-01-01

    The goals of this study were (1) to provide accessible and instructive browsing software for sectioned images and a portable document format (PDF) file that includes three-dimensional (3D) models of an entire dog body and (2) to develop techniques for segmentation and 3D modeling that would enable an investigator to perform these tasks without the aid of a computer engineer. To achieve these goals, relatively important or large structures in the sectioned images were outlined to generate segmented images. The sectioned and segmented images were then packaged into browsing software. In this software, structures in the sectioned images are shown in detail and in real color. After 3D models were made from the segmented images, the 3D models were exported into a PDF file. In this format, the 3D models could be manipulated freely. The browsing software and PDF file are available for study by students, for lecture for teachers, and for training for clinicians. These files will be helpful for anatomical study by and clinical training of veterinary students and clinicians. Furthermore, these techniques will be useful for researchers who study two-dimensional images and 3D models.

  8. Traversing and labeling interconnected vascular tree structures from 3D medical images

    Science.gov (United States)

    O'Dell, Walter G.; Govindarajan, Sindhuja Tirumalai; Salgia, Ankit; Hegde, Satyanarayan; Prabhakaran, Sreekala; Finol, Ender A.; White, R. James

    2014-03-01

    Purpose: Detailed characterization of pulmonary vascular anatomy has important applications for the diagnosis and management of a variety of vascular diseases. Prior efforts have emphasized using vessel segmentation to gather information on the number of branches, number of bifurcations, and branch length and volume, but accurate traversal of the vessel tree to identify and repair erroneous interconnections between adjacent branches and neighboring tree structures has not been carefully considered. In this study, we endeavor to develop and implement a successful approach to distinguishing and characterizing individual vascular trees from among a complex intermingling of trees. Methods: We developed strategies and parameters with which the algorithm identifies and repairs false inter-tree and intra-tree branch connections to traverse complicated vessel trees. A series of two-dimensional (2D) virtual datasets with a variety of interconnections were constructed for development, testing, and validation. To demonstrate the approach, a series of real 3D computed tomography (CT) lung datasets were obtained, including that of an anthropomorphic chest phantom; an adult human chest CT; a pediatric patient chest CT; and a micro-CT of an excised rat lung preparation. Results: Our method was correct in all 2D virtual test datasets. For each real 3D CT dataset, the resulting simulated vessel tree structures faithfully depicted the vessel tree structures that were originally extracted from the corresponding lung CT scans. Conclusion: We have developed a comprehensive strategy for traversing and labeling interconnected vascular trees and successfully implemented its application to pulmonary vessels observed using 3D CT images of the chest.
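
    The traversal-and-repair idea described above can be illustrated with a breadth-first walk over a vessel centerline graph that flags edges closing a loop (candidate false inter-tree or intra-tree connections); the adjacency-list representation and function name are hypothetical, not the authors' implementation.

        from collections import deque

        def traverse_and_flag_loops(adjacency, root):
            """adjacency: dict mapping node id -> list of neighbouring node ids."""
            parent, flagged = {root: None}, set()
            queue = deque([root])
            while queue:
                node = queue.popleft()
                for nb in adjacency[node]:
                    if nb == parent[node]:
                        continue                      # do not walk straight back
                    if nb in parent:                  # already reached: edge closes a loop
                        flagged.add(frozenset((node, nb)))
                    else:
                        parent[nb] = node
                        queue.append(nb)
            return parent, flagged                    # tree structure + suspect connections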

  9. 3D Finite Volume Modeling of ENDE Using Electromagnetic T-Formulation

    Directory of Open Access Journals (Sweden)

    Yue Li

    2012-01-01

    An improved method which can analyze the eddy current density in conductor materials using the finite volume method is proposed on the basis of Maxwell's equations and the T-formulation. The algorithm is applied to solve 3D electromagnetic nondestructive evaluation (E'NDE) benchmark problems. The computing code is applied to study an Inconel 600 work piece with holes or cracks. The impedance change due to the presence of the crack is evaluated and compared with the experimental data of benchmark problems No. 1 and No. 2. The results show a good agreement between both calculated and measured data.

  10. An adaptive 3-D discrete cosine transform coder for medical image compression.

    Science.gov (United States)

    Tai, S C; Wu, Y G; Lin, C W

    2000-09-01

    In this communication, a new three-dimensional (3-D) discrete cosine transform (DCT) coder for medical images is presented. In the proposed method, a segmentation technique based on the local energy magnitude is used to segment subblocks of the image into different energy levels. Then, those subblocks with the same energy level are gathered to form a 3-D cuboid. Finally, 3-D DCT is employed to compress the 3-D cuboid individually. Simulation results show that the reconstructed images achieve a bit rate lower than 0.25 bit per pixel even when the compression ratios are higher than 35. As compared with the results by JPEG and other strategies, it is found that the proposed method achieves better qualities of decoded images than by JPEG and the other strategies.
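
    The cuboid transform step described above can be sketched with SciPy's separable 3-D DCT; the thresholding used here is only a crude stand-in for the coder's actual quantization and entropy coding, and all names are illustrative.

        import numpy as np
        from scipy.fft import dctn, idctn

        def compress_cuboid(cuboid, keep_fraction=0.1):
            """cuboid: 3-D array of gathered same-energy subblocks."""
            coeffs = dctn(cuboid, norm="ortho")
            threshold = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
            coeffs[np.abs(coeffs) < threshold] = 0.0   # keep only the largest coefficients
            return idctn(coeffs, norm="ortho")         # reconstruction from the sparse DCT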

  11. A Compact, Wide Area Surveillance 3D Imaging LIDAR Providing UAS Sense and Avoid Capabilities Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Eye safe 3D Imaging LIDARS when combined with advanced very high sensitivity, large format receivers can provide a robust wide area search capability in a very...

  12. Interpretation of a 3D Seismic-Reflection Volume in the Basin and Range, Hawthorne, Nevada

    Science.gov (United States)

    Louie, J. N.; Kell, A. M.; Pullammanappallil, S.; Oldow, J. S.; Sabin, A.; Lazaro, M.

    2009-12-01

    A collaborative effort by the Great Basin Center for Geothermal Energy at the University of Nevada, Reno, and Optim Inc. of Reno has interpreted a 3d seismic data set recorded by the U.S. Navy Geothermal Programs Office (GPO) at the Hawthorne Army Depot, Nevada. The 3d survey incorporated about 20 NNW-striking lines covering an area of approximately 3 by 10 km. The survey covered an alluvial area below the eastern flank of the Wassuk Range. In the reflection volume the most prominent events are interpreted to be the base of Quaternary alluvium, the Quaternary Wassuk Range-front normal fault zone, and sequences of intercalated Tertiary volcanic flows and sediments. Such a data set is rare in the Basin and Range. Our interpretation reveals structural and stratigraphic details that form a basis for rapid development of the geothermal-energy resources underlying the Depot. We interpret a map of the time-elevation of the Wassuk Range fault and its associated splays and basin-ward step faults. The range-front fault is the deepest, and its isochron map provides essentially a map of "economic basement" under the prospect area. There are three faults that are the most readily picked through vertical sections. The fault reflections show an uncertainty in the time-depth that we can interpret for them of 50 to 200 ms, due to the over-migrated appearance of the processing contractor’s prestack time-migrated data set. Proper assessment of velocities for mitigating the migration artifacts through prestack depth migration is not possible from this data set alone, as the offsets are not long enough for sufficiently deep velocity tomography. The three faults we interpreted appear as gradients in potential-field maps. In addition, the southern boundary of a major Tertiary graben may be seen within the volume as the northward termination of the strong reflections from older Tertiary volcanics. Using a transparent volume view across the survey gives a view of the volcanics in full

  13. 3D Modeling of Transformer Substation Based on Mapping and 2D Images

    Directory of Open Access Journals (Sweden)

    Lei Sun

    2016-01-01

    A new method for building 3D models of transformer substations based on mapping and 2D images is proposed in this paper. This method segments equipment objects in 2D images using a k-means algorithm that determines the cluster centers dynamically in order to segment different shapes, extracts feature parameters from the segmented objects using the FFT, retrieves similar objects from 3D databases, and then builds 3D models by computing the mapping data. The proposed method avoids the complex data collection and heavy workload of using a 3D laser scanner. The example analysis shows the method can efficiently build coarse 3D models which meet the requirements for hazardous area classification and construction representations of transformer substations.
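
    The two processing steps described above, k-means segmentation of equipment objects and FFT-based feature extraction for retrieval, can be sketched as follows; clustering on raw intensities and a Fourier boundary descriptor are illustrative choices, not necessarily the paper's exact features.

        import numpy as np
        from sklearn.cluster import KMeans

        def segment_and_describe(image_gray, boundary_xy, n_clusters=3, n_coeffs=16):
            # k-means on pixel intensities; the label image separates object regions.
            labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
                image_gray.reshape(-1, 1)).reshape(image_gray.shape)
            # Fourier descriptor of a closed object boundary (complex coordinates),
            # normalised by the first non-DC coefficient for scale invariance.
            z = boundary_xy[:, 0] + 1j * boundary_xy[:, 1]
            f = np.fft.fft(z)
            descriptor = np.abs(f[1:n_coeffs + 1]) / (np.abs(f[1]) + 1e-12)
            return labels, descriptor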

  14. High-throughput imaging: Focusing in on drug discovery in 3D.

    Science.gov (United States)

    Li, Linfeng; Zhou, Qiong; Voss, Ty C; Quick, Kevin L; LaBarbera, Daniel V

    2016-03-01

    3D organotypic culture models such as organoids and multicellular tumor spheroids (MCTS) are becoming more widely used for drug discovery and toxicology screening. As a result, 3D culture technologies adapted for high-throughput screening formats are prevalent. While a multitude of assays have been reported and validated for high-throughput imaging (HTI) and high-content screening (HCS) for novel drug discovery and toxicology, limited HTI/HCS with large compound libraries have been reported. Nonetheless, 3D HTI instrumentation technology is advancing and this technology is now on the verge of allowing for 3D HCS of thousands of samples. This review focuses on the state-of-the-art high-throughput imaging systems, including hardware and software, and recent literature examples of 3D organotypic culture models employing this technology for drug discovery and toxicology screening.

  15. Real-time cardiac synchronization with fixed volume frame rate for reducing physiological instabilities in 3D FMRI.

    Science.gov (United States)

    Tijssen, Rob H N; Okell, Thomas W; Miller, Karla L

    2011-08-15

    Although 2D echo-planar imaging (EPI) remains the dominant method for functional MRI (FMRI), 3D readouts are receiving more interest as these sequences have favorable signal-to-noise ratio (SNR) and enable imaging at a high isotropic resolution. Spoiled gradient-echo (SPGR) and balanced steady-state free-precession (bSSFP) are rapid sequences that are typically acquired with highly segmented 3D readouts, and thus less sensitive to image distortion and signal dropout. They therefore provide a powerful alternative for FMRI in areas with strong susceptibility offsets, such as deep gray matter structures and the brainstem. Unfortunately, the multi-shot nature of the readout makes these sequences highly sensitive to physiological fluctuations, and large signal instabilities are observed in the inferior regions of the brain. In this work a characterization of the source of these instabilities is given and a new method is presented to reduce the instabilities observed in 3D SPGR and bSSFP. Rapidly acquired single-slice data, which critically sampled the respiratory and cardiac waveforms, showed that cardiac pulsation is the dominant source of the instabilities. Simulations further showed that synchronizing the readout to the cardiac cycle minimizes the instabilities considerably. A real-time synchronization method was therefore developed, which utilizes parallel-imaging techniques to allow cardiac synchronization without alteration of the volume acquisition rate. The implemented method significantly improves the temporal stability in areas that are affected by cardiac-related signal fluctuations. In bSSFP data the tSNR in the brainstem increased by 45%, at the cost of a small reduction in tSNR in the cortical areas. In SPGR the temporal stability is improved by approximately 20% in the subcortical structures and as well as cortical gray matter when synchronization was performed.

  16. 3D-DXA: Assessing the Femoral Shape, the Trabecular Macrostructure and the Cortex in 3D from DXA images.

    Science.gov (United States)

    Humbert, Ludovic; Martelli, Yves; Fonolla, Roger; Steghofer, Martin; Di Gregorio, Silvana; Malouf, Jorge; Romera, Jordi; Barquero, Luis Miguel Del Rio

    2017-01-01

    The 3D distribution of the cortical and trabecular bone mass in the proximal femur is a critical component in determining fracture resistance that is not taken into account in clinical routine Dual-energy X-ray Absorptiometry (DXA) examination. In this paper, a statistical shape and appearance model together with a 3D-2D registration approach are used to model the femoral shape and bone density distribution in 3D from an anteroposterior DXA projection. A model-based algorithm is subsequently used to segment the cortex and build a 3D map of the cortical thickness and density. Measurements characterising the geometry and density distribution were computed for various regions of interest in both cortical and trabecular compartments. Models and measurements provided by the "3D-DXA" software algorithm were evaluated using a database of 157 study subjects, by comparing 3D-DXA analyses (using DXA scanners from three manufacturers) with measurements performed by Quantitative Computed Tomography (QCT). The mean point-to-surface distance between 3D-DXA and QCT femoral shapes was 0.93 mm. The mean absolute error between cortical thickness and density estimates measured by 3D-DXA and QCT was 0.33 mm and 72 mg/cm³. Correlation coefficients (R) between the 3D-DXA and QCT measurements were 0.86, 0.93, and 0.95 for the volumetric bone mineral density at the trabecular, cortical, and integral compartments respectively, and 0.91 for the mean cortical thickness. 3D-DXA provides a detailed analysis of the proximal femur, including a separate assessment of the cortical layer and trabecular macrostructure, which could potentially improve osteoporosis management while maintaining DXA as the standard routine modality.

  17. Wide area 2D/3D imaging development, analysis and applications

    CERN Document Server

    Langmann, Benjamin

    2014-01-01

    Imaging technology is an important research area and it is widely utilized in a growing number of disciplines ranging from gaming, robotics and automation to medicine. In the last decade 3D imaging became popular mainly driven by the introduction of novel 3D cameras and measuring devices. These cameras are usually limited to indoor scenes with relatively low distances. Benjamin Langmann introduces medium and long-range 2D/3D cameras to overcome these limitations. He reports measurement results for these devices and studies their characteristic behavior. In order to facilitate the application o

  18. A 2D and 3D electrical impedance tomography imaging using experimental data

    OpenAIRE

    Shulga, Dmitry

    2012-01-01

    In this paper, the model, method and results of 2D and 3D conductivity distribution imaging using experimental data are described. A 16-electrode prototype of a computer tomography system and dedicated Matlab and Java software were used to perform the imaging procedure. The developed system can be used for experimental conductivity distribution imaging and further research work.

  19. The application of camera calibration in range-gated 3D imaging technology

    Science.gov (United States)

    Liu, Xiao-quan; Wang, Xian-wei; Zhou, Yan

    2013-09-01

    Range-gated laser imaging technology was proposed in 1966 by L. F. Gillespie in the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser and an intensified charge-coupled device (ICCD) as light source and detector respectively, range-gated laser imaging technology can realize space-slice imaging while suppressing atmospheric backscatter, and in turn detect the target effectively, by controlling the delay between the laser pulse and the strobe. Owing to the constraints on the development of key components such as narrow-pulse lasers and gated imaging devices, the research progressed slowly over the following decades. Since the beginning of this century, as the hardware technology has continued to mature, this technology has developed rapidly in fields such as night vision, underwater imaging, biomedical imaging and three-dimensional imaging, especially in the range-gated three-dimensional (3-D) laser imaging field, with the purpose of gaining access to target spatial information. 3-D reconstruction is the process of restoring the visible surface geometric structure of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging technology can achieve gated imaging of a slice of space to form a slice image, and in turn provide the distance information corresponding to the slice image. But to invert the information of 3-D space, we need to obtain the imaging field of view of the system, that is, the focal length of the system. Then, based on the distance information of the space slice, the spatial information of each unit of space corresponding to each pixel can be inverted. Camera calibration is an indispensable step in 3-D reconstruction, including analysis of the camera's internal parameters and the external parameters. In order to meet the technical requirements of range-gated 3-D imaging, this paper studies the calibration of the zoom lens system. After summarizing camera calibration techniques comprehensively, a classic calibration method based on line is

  20. US-CT 3D dual imaging by mutual display of the same sections for depicting minor changes in hepatocellular carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Fukuda, Hiroyuki, E-mail: fukuhiro1962@hotmail.com [International HIFU Center, Sanmu Medical Center Hospital, Naruto 167, Sanbu-shi, Chiba 289-1326 (Japan); Ito, Ryu; Ohto, Masao; Sakamoto, Akio [International HIFU Center, Sanmu Medical Center Hospital, Naruto 167, Sanbu-shi, Chiba 289-1326 (Japan); Otsuka, Masayuki; Togawa, Akira; Miyazaki, Masaru [Department of General Surgery, Graduate School of Medicine, Chiba University, Inohana 1-8-1, Chuo-ku, Chiba-shi, Chiba 260-0856 (Japan); Yamagata, Hitoshi [Toshiba Medical Systems Corporation, Otawara 324-0036 (Japan)

    2012-09-15

    The purpose of this study was to evaluate the usefulness of ultrasound-computed tomography (US-CT) 3D dual imaging for the detection of small extranodular growths of hepatocellular carcinoma (HCC). The clinical and pathological profiles of 10 patients with single nodular type HCC with extranodular growth (extranodular growth) who underwent a hepatectomy were evaluated using two-dimensional (2D) ultrasonography (US), three-dimensional (3D) US, 3D computed tomography (CT) and 3D US-CT dual images. Raw 3D data was converted to DICOM (Digital Imaging and Communication in Medicine) data using Echo to CT (Toshiba Medical Systems Corp., Tokyo, Japan), and the 3D DICOM data was directly transferred to the image analysis system (ZioM900, ZIOSOFT Inc., Tokyo, Japan). By inputting the angle number (x, y, z) of the 3D CT volume data into the ZioM900, multiplanar reconstruction (MPR) images of the 3D CT data were displayed in a manner such that they resembled the conventional US images. Eleven extranodular growths were detected pathologically in 10 cases. 2D US was capable of depicting only 2 of the 11 extranodular growths. 3D CT was capable of depicting 4 of the 11 extranodular growths. On the other hand, 3D US was capable of depicting 10 of the 11 extranodular growths, and 3D US-CT dual images, which enable the dual analysis of the CT and US planes, revealed all 11 extranodular growths. In conclusion, US-CT 3D dual imaging may be useful for the detection of small extranodular growths.

  1. 3D reconstructions with pixel-based images are made possible by digitally clearing plant and animal tissue

    Science.gov (United States)

    Reconstruction of 3D images from a series of 2D images has been restricted by the limited capacity to decrease the opacity of surrounding tissue. Commercial software that allows color-keying and manipulation of 2D images in true 3D space allowed us to produce 3D reconstructions from pixel based imag...

  2. SOFTWARE FOR REGIONS OF INTEREST RETRIEVAL ON MEDICAL 3D IMAGES

    Directory of Open Access Journals (Sweden)

    G. G. Stromov

    2014-01-01

    Background. Implementation of software for retrieval of regions of interest (ROIs) in 3D medical images is described in this article. It has been tested against a large volume of model MRIs. Material and methods. We tested the software against normal and pathological (severe multiple sclerosis) model MRIs from the BrainWeb resource. The technological stack is based on open-source cross-platform solutions. We implemented the storage system on MariaDB (an open-source fork of MySQL with P/SQL extensions). Python 2.7 scripting was used for automation of extract-transform-load operations. The computational core is written in Java 7 with the Spring framework 3. MongoDB was used as a cache in the cluster of workstations. Maven 3 was chosen as the dependency manager and build system, and the project is hosted on Github. Results. As testing on SSMU's LAN has shown, the developed software quite efficiently retrieves ROIs that match the morphological substratum on pathological MRIs. Conclusion. Automation of the diagnostic process using medical imaging reduces the subjective component in decision making and increases the availability of high-tech medicine. The software shown in this article is a complete solution for ROI retrieval and segmentation on model medical images in fully automated mode. We would like to thank Robert Vincent for great help with consulting on the usage of the BrainWeb resource.

  3. Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality

    Science.gov (United States)

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, the panoramic images are processed into 720° panoramas, and these panoramas can be used directly as panorama guiding systems or for other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama. These parameters are the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the model, and the 3D point cloud helped determine the locations of building objects through a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing the guide map or floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The

  4. Correlation based 3-D segmentation of the left ventricle in pediatric echocardiographic images using radio-frequency data.

    Science.gov (United States)

    Nillesen, Maartje M; Lopata, Richard G P; Huisman, H J; Thijssen, Johan M; Kapusta, Livia; de Korte, Chris L

    2011-09-01

    Clinical diagnosis of heart disease might be substantially supported by automated segmentation of the endocardial surface in three-dimensional (3-D) echographic images. Because of the poor echogenicity contrast between blood and myocardial tissue in some regions and the inherent speckle noise, automated analysis of these images is challenging. A priori knowledge on the shape of the heart cannot always be relied on; in children with congenital heart disease, for example, segmentation should be based solely on the echo features. The objective of this study was to investigate the merit of using temporal cross-correlation of radio-frequency (RF) data for automated segmentation of 3-D echocardiographic images. Maximum temporal cross-correlation (MCC) values were determined locally from the RF-data using an iterative 3-D technique. MCC values as well as a combination of MCC values and adaptive filtered, demodulated RF-data were used as an additional, external force in a deformable model approach to segment the endocardial surface and were tested against manually segmented surfaces. Results on 3-D full volume images (Philips, iE33) of 10 healthy children demonstrate that MCC values derived from the RF signal yield a useful parameter to distinguish between blood and myocardium in regions with low echogenicity contrast, and incorporation of MCC improves the segmentation results significantly. Further investigation of the MCC over the whole cardiac cycle is required to exploit its full benefit for automated segmentation.
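
    As an illustration of the cross-correlation idea described above, the following Python sketch computes, for one pair of consecutive RF frames, the maximum normalized temporal cross-correlation per axial window. It is a simplified 2-D stand-in for the paper's iterative 3-D technique; the function name, window size and lag range are illustrative assumptions, not taken from the study.

        import numpy as np

        def max_temporal_cross_correlation(rf_t0, rf_t1, window=32, max_lag=8):
            # rf_t0, rf_t1: (n_samples, n_lines) RF frames from consecutive time points.
            # High MCC suggests coherently moving tissue; low MCC suggests blood.
            n_samples, n_lines = rf_t0.shape
            n_win = n_samples // window
            mcc = np.zeros((n_win, n_lines))
            for line in range(n_lines):
                for w in range(n_win):
                    seg0 = rf_t0[w * window:(w + 1) * window, line]
                    best = -1.0
                    for lag in range(-max_lag, max_lag + 1):
                        start = w * window + lag
                        if start < 0 or start + window > n_samples:
                            continue
                        seg1 = rf_t1[start:start + window, line]
                        denom = np.linalg.norm(seg0) * np.linalg.norm(seg1)
                        if denom > 0:
                            best = max(best, float(np.dot(seg0, seg1) / denom))
                    mcc[w, line] = best
            return mcc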

  5. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    Science.gov (United States)

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-03-01

    Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if it remains untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that used the retinal nerve fiber layer (RNFL) thickness measurements provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency in the change detection map, we propose the use of a Markov Random Field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection.

  6. Ash3d: A finite-volume, conservative numerical model for ash transport and tephra deposition

    Science.gov (United States)

    Schwaiger, Hans F.; Denlinger, Roger P.; Mastin, Larry G.

    2012-01-01

    We develop a transient, 3-D Eulerian model (Ash3d) to predict airborne volcanic ash concentration and tephra deposition during volcanic eruptions. This model simulates downwind advection, turbulent diffusion, and settling of ash injected into the atmosphere by a volcanic eruption column. Ash advection is calculated using time-varying pre-existing wind data and a robust, high-order, finite-volume method. Our routine is mass-conservative and uses the coordinate system of the wind data, either a Cartesian system local to the volcano or a global spherical system for the Earth. Volcanic ash is specified with an arbitrary number of grain sizes, which affects the fall velocity, distribution and duration of transport. Above the source volcano, the vertical mass distribution with elevation is calculated using a Suzuki distribution for a given plume height, eruptive volume, and eruption duration. Multiple eruptions separated in time may be included in a single simulation. We test the model using analytical solutions for transport. Comparisons of the predicted and observed ash distributions for the 18 August 1992 eruption of Mt. Spurr in Alaska demonstrate the efficacy and efficiency of the routine.
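
    To make the finite-volume idea concrete, here is a minimal Python sketch of one conservative, first-order upwind advection step on a periodic 1-D grid. It only illustrates the flux-differencing principle; Ash3d itself is a 3-D, high-order model with diffusion and settling, and none of the names below come from its source code.

        import numpy as np

        def upwind_advection_step(q, u, dx, dt):
            # q: cell-averaged ash concentration per cell; u: constant wind speed (m/s).
            # Stability requires the CFL condition |u| * dt / dx <= 1.
            if u >= 0:
                flux = u * q                  # flux through the right face of each cell
            else:
                flux = u * np.roll(q, -1)     # upwind value comes from the right neighbour
            # Conservative update: subtract the divergence of the face fluxes.
            return q - dt / dx * (flux - np.roll(flux, 1))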

  7. Building 3D aerial image in photoresist with reconstructed mask image acquired with optical microscope

    Science.gov (United States)

    Chou, C. S.; Tang, Y. P.; Chu, F. S.; Huang, W. C.; Liu, R. G.; Gau, T. S.

    2012-03-01

    Calibration of mask images on the wafer becomes more important as features shrink. Two major types of metrology have been commonly adopted. One is to measure the mask image with a scanning electron microscope (SEM) to obtain the contours on the mask and then simulate the wafer image with an optical simulator. The other is to use an optical imaging tool, the Aerial Image Measurement System (AIMS™), to emulate the image on the wafer. However, the SEM method is indirect. It just gathers planar contours on a mask with no consideration of optical characteristics such as 3D topography structures. Hence, the image on the wafer is not predicted precisely. Though the AIMS™ method can directly measure the intensity at the near field of a mask, the image measured this way is not quite the same as that on the wafer due to reflections and refractions in the films on the wafer. Here, a new approach is proposed to emulate the image on the wafer more precisely. The behavior of plane waves at different oblique angles is well known inside and between planar film stacks. In an optical microscope imaging system, plane waves can be extracted from the pupil plane with a coherent point source of illumination. Once plane waves with a specific coherent illumination are analyzed, the partially coherent component of the waves can be reconstructed with a proper transfer function, which includes lens aberration, polarization, and reflection and refraction in the films. With this new method, we can transfer the near light field of a mask into an image on the wafer without the disadvantages of indirect SEM measurement, such as neglecting the effects of mask topography and of reflections and refractions in the wafer film stacks. Furthermore, with this precise latent image, a separated resist model also becomes more achievable.

  8. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer

    Science.gov (United States)

    Douglas, David B.; Boone, John M.; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Objective To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. Methods A case of breast cancer imaged using contrast-enhanced breast CT (Computed Tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. Results The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence, the use of a 3D cursor, and joystick-enabled fly-through with visualization of the spiculations extending from the breast cancer. Conclusion The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice. PMID:27774517

  9. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    Science.gov (United States)

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

    In orthognathic surgery, the framing of 3D surgical planning that considers the balance between the front and back positions and the symmetry of the jawbone, as well as the dental occlusion of teeth, is essential. In this study, a support system for orthodontic surgery has been developed to visualize the changes in the mandible and the occlusal condition and to determine the optimum position in mandibular osteotomy. By integrating the operating portion, in which the optimum occlusal position is determined by manipulating an entity (physical) tooth model, with the 3D-CT skeletal images (the 3D image display portion) that are simultaneously displayed in real time, the mandibular position and posture can be determined in a way that accounts for the improvement of skeletal morphology and occlusal condition. The realistic operation of the entity model and the virtual 3D image display enabled the construction of a surgical simulation system that involves augmented reality.

  10. 3D reconstruction of SEM images by use of optical photogrammetry software.

    Science.gov (United States)

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

    Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science, and many biological questions require information about the true 3D structure. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a 3D stereoscopic reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close range photogrammetry. In optical close range photogrammetry a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special SEM requirements. Instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using the software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaptation. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction and suitable for various applications in research and teaching.

  11. Deformation analysis of 3D tagged cardiac images using an optical flow method

    Directory of Open Access Journals (Sweden)

    Gorman Robert C

    2010-03-01

    Background This study proposes and validates a method of measuring 3D strain in myocardium using a 3D Cardiovascular Magnetic Resonance (CMR) tissue-tagging sequence and a 3D optical flow method (OFM). Methods Initially, a 3D tag MR sequence was developed and the parameters of the sequence and 3D OFM were optimized using phantom images with simulated deformation. This method then was validated in-vivo and utilized to quantify normal sheep left ventricular functions. Results Optimizing imaging and OFM parameters in the phantom study produced sub-pixel root-mean-square errors (RMS) between the estimated and known displacements in the x (RMSx = 0.62 pixels, 0.43 mm), y (RMSy = 0.64 pixels, 0.45 mm) and z (RMSz = 0.68 pixels, 1 mm) directions, respectively. In-vivo validation demonstrated excellent correlation between the displacement measured by manually tracking tag intersections and that generated by 3D OFM (R ≥ 0.98). Technique performance was maintained even with 20% Gaussian noise added to the phantom images. Furthermore, 3D tracking of 3D cardiac motions resulted in a 51% decrease in in-plane tracking error as compared to 2D tracking. The in-vivo function studies showed that maximum wall thickening was greatest in the lateral wall, and increased from both apex and base towards the mid-ventricular region. Regional deformation patterns are in agreement with previous studies on LV function. Conclusion A novel method was developed to measure 3D LV wall deformation rapidly with high in-plane and through-plane resolution from one 3D cine acquisition.
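
    The RMS figures quoted above are straightforward to reproduce once estimated and known displacement fields are available; a minimal Python sketch (array names are illustrative, not from the study) is:

        import numpy as np

        def rms_error(estimated, known):
            # Root-mean-square difference between two displacement components,
            # e.g. the x-components of the OFM estimate and the simulated truth.
            return float(np.sqrt(np.mean((estimated - known) ** 2)))

        # Example usage with hypothetical (n_z, n_y, n_x) displacement arrays:
        # rms_x = rms_error(disp_estimated_x, disp_known_x)   # reported as RMSx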

  12. Automatic masking for robust 3D-2D image registration in image-guided spine surgery

    Science.gov (United States)

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-03-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.

  13. 3D soft tissue imaging with a mobile C-arm.

    Science.gov (United States)

    Ritter, Dieter; Orman, Jasmina; Schmidgunst, Christian; Graumann, Rainer

    2007-03-01

    We introduce a clinical prototype for 3D soft tissue imaging to support surgical or interventional procedures based on a mobile C-arm. An overview of required methods and materials is followed by first clinical images of animals and human patients including dosimetry. The mobility and flexibility of 3D C-arms gives free access to the patient and therefore avoids relocation of the patient between imaging and surgical intervention. Image fusion with diagnostic data (MRI, CT, PET) is demonstrated and promising applications for brachytherapy, RFTT and others are discussed.

  14. Development of goniophotometric imaging system for recording reflectance spectra of 3D objects

    Science.gov (United States)

    Tonsho, Kazutaka; Akao, Y.; Tsumura, Norimichi; Miyake, Yoichi

    2001-12-01

    In recent years, there has been a growing need to develop systems for 3D capture of archives in museums and galleries. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and an internet or virtual museum via the World Wide Web. To achieve this goal, we have developed a gonio-photometric imaging system using a highly accurate multi-spectral camera and a 3D digitizer. In this paper, a gonio-photometric imaging method is introduced for recording 3D objects. Five-band images of the object are taken under 7 different illumination angles. The 5-band image sequences are then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract the gonio-photometric properties of the object. Images of the 3D object under illuminants with arbitrary spectral radiant distributions, illumination angles, and viewpoints are rendered using OpenGL with the 3D shape and gonio-photometric properties.

  15. Intelligent multisensor concept for image-guided 3D object measurement with scanning laser radar

    Science.gov (United States)

    Weber, Juergen

    1995-08-01

    This paper presents an intelligent multisensor concept for measuring 3D objects using an image-guided laser radar scanner. The fields of application are all kinds of industrial inspection and surveillance tasks where it is necessary to detect, measure and recognize 3D objects at distances up to 10 m with high flexibility. Such applications might be the surveillance of security areas or container storage areas, as well as navigation and collision avoidance of autonomous guided vehicles. The multisensor system consists of a standard CCD matrix camera and a 1D laser radar ranger which is mounted on a 2D mirror scanner. With this sensor combination it is possible to acquire gray scale intensity data as well as absolute 3D information. To improve the system performance and flexibility, the intensity data of the scene captured by the camera can be used to focus the measurement of the 3D sensor on relevant areas. The camera guidance of the laser scanner is useful because the acquisition of spatial information is relatively slow compared to the image sensor's ability to snap an image frame in 40 ms. Relevant areas in a scene are located by detecting edges of objects utilizing various image processing algorithms. The complete sensor system is controlled by three microprocessors carrying out the 3D data acquisition, the image processing tasks and the multisensor integration. The paper deals with the details of the multisensor concept. It describes the process of sensor guidance and 3D measurement and presents some practical results of our research.

  16. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    Science.gov (United States)

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation and axon projection, it is necessary to adopt an optical imaging system that enables monitoring 3-D cellular activities and morphology through the thickness of the construct for an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform, which has been equipped with an environmental chamber optimized to enable capturing time-lapse sequences of live cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy equipped with 20x and 40x objectives, to provide a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them), to mimic features well exemplified in cellular activities of neuronal growth in a 3-D environment. This was followed by detailed investigations concerning axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells in response to chemoattractant and topographic cue within the scaffolds has produced encouraging results.

  17. 3D-MSCT imaging of bullet trajectory in 3D crime scene reconstruction: two case reports.

    Science.gov (United States)

    Colard, T; Delannoy, Y; Bresson, F; Marechal, C; Raul, J S; Hedouin, V

    2013-11-01

    Postmortem investigations are increasingly assisted by three-dimensional multi-slice computed tomography (3D-MSCT) and have become more available to forensic pathologists over the past 20 years. In cases of ballistic wounds, 3D-MSCT can provide an accurate description of the bullet location, bone fractures and, more interestingly, a clear visual of the intracorporeal trajectory (bullet track). These forensic medical examinations can be combined with tridimensional bullet trajectory reconstructions created by forensic ballistic experts. These case reports present the implementation of tridimensional methods and the results of 3D crime scene reconstruction in two cases. The authors highlight the value of collaborations between police forensic experts and forensic medicine institutes through the incorporation of 3D-MSCT data in a crime scene reconstruction, which is of great interest in forensic science as a clear visual communication tool between experts and the court.

  18. An Image Hiding Scheme Using 3D Sawtooth Map and Discrete Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Ruisong Ye

    2012-07-01

    An image encryption scheme based on the 3D sawtooth map is proposed in this paper. The 3D sawtooth map is utilized to generate chaotic orbits to permute the pixel positions and to generate pseudo-random gray value sequences to change the pixel gray values. The image encryption scheme is then applied to encrypt the secret image, which is then embedded in a host image. The encrypted secret image and the host image are transformed by the wavelet transform and then merged in the frequency domain. Experimental results show that the stego-image looks visually identical to the original host image and that the secret image can still be effectively extracted after image processing attacks, which demonstrates strong robustness against a variety of attacks.
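
    A minimal sketch of the permutation-plus-keystream idea, using a 1-D sawtooth (Bernoulli shift) map as a stand-in for the paper's 3D sawtooth map; all names and parameter values are illustrative assumptions, and the DWT embedding step is omitted.

        import numpy as np

        def sawtooth_orbit(x0, a, n):
            # Iterate the 1-D sawtooth map x -> frac(a * x); a > 1 gives chaotic orbits.
            xs = np.empty(n)
            x = x0
            for i in range(n):
                x = (a * x) % 1.0
                xs[i] = x
            return xs

        def scramble(image, x0=0.37, a=3.0):
            # image: 2-D uint8 array. Permute pixel positions with an orbit-derived
            # permutation and XOR gray values with an orbit-derived keystream.
            flat = image.reshape(-1)
            orbit = sawtooth_orbit(x0, a, flat.size)
            perm = np.argsort(orbit)                  # chaotic position permutation
            key = (orbit * 256).astype(np.uint8)      # pseudo-random gray-value sequence
            return (flat[perm] ^ key).reshape(image.shape), perm, key

        def unscramble(cipher, perm, key):
            flat = cipher.reshape(-1) ^ key
            out = np.empty_like(flat)
            out[perm] = flat                          # invert the permutation
            return out.reshape(cipher.shape)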

  19. 3D X-ray imaging methods in support catheter ablations of cardiac arrhythmias.

    Science.gov (United States)

    Stárek, Zdeněk; Lehar, František; Jež, Jiří; Wolf, Jiří; Novák, Miroslav

    2014-10-01

    Cardiac arrhythmias are a very frequent illness. Pharmacotherapy is not very effective in persistent arrhythmias and carries a number of risks. Catheter ablation has become an effective and curative treatment method over the past 20 years. To support complex arrhythmia ablations, 3D X-ray imaging of the cardiac cavities is used, most frequently the 3D reconstruction of CT images. 3D cardiac rotational angiography (3DRA) is a modern method that enables the creation of CT-like 3D images on a standard X-ray machine equipped with special software. Its advantages lie in the possibility of obtaining images during the procedure, a decreased radiation dose and a reduced amount of contrast agent. The left atrium model is the one most frequently used for complex atrial arrhythmia ablations, particularly for atrial fibrillation. CT data allow for the creation and segmentation of 3D models of all cardiac cavities. Recently, research has demonstrated the use of 3DRA to create 3D models of other cardiac (right ventricle, left ventricle, aorta) and non-cardiac structures (oesophagus). They can be used during catheter ablation of complex arrhythmias to improve orientation during the construction of 3D electroanatomic maps, directly fused with 3D electroanatomic systems and/or fused with fluoroscopy. Intensive development in the creation and use of 3D models has taken place over the past years, and they have become routinely used during catheter ablations of arrhythmias, mainly atrial fibrillation ablation procedures. Further development may be anticipated in the future in both the creation and use of these models.

  20. Wound Measurement Techniques: Comparing the Use of Ruler Method, 2D Imaging and 3D Scanner.

    Science.gov (United States)

    Shah, Aj; Wollak, C; Shah, J B

    2013-12-01

    The statistics on the growing number of non-healing wounds are alarming. In the United States, chronic wounds affect 6.5 million patients. An estimated US $25 billion is spent annually on treatment of chronic wounds and the burden is rapidly growing due to increasing health care costs, an aging population and a sharp rise in the incidence of diabetes and obesity worldwide.(1) Accurate wound measurement techniques will help health care personnel to monitor wounds, which will indirectly help improve care.(7,9) The clinical practice of measuring wounds has not improved even today.(2,3) A common method like the ruler method to measure wounds has poor interrater and intrarater reliability.(2,3) Measuring the greatest length by the greatest width perpendicular to the greatest length, the perpendicular method, is more valid and reliable than other ruler based methods.(2) Another common method like acetate tracing is more accurate than the ruler method but still has its disadvantages. These common measurement techniques are time consuming with variable inaccuracies. In this study, volumetric measurements taken with a non-contact 3-D scanner are benchmarked against the common ruler method, acetate grid tracing, and the 2-D image planimetry volumetric measurement technique. A liquid volumetric fill method is used as the control volume. Results support the hypothesis that the 3-D scanner consistently shows accurate volumetric measurements in comparison to standard volumetric measurements obtained by the waterfill technique (average difference of 11%). The 3-D scanner measurement technique was found more reliable and valid compared to the other three techniques, the ruler method (average difference of 75%), acetate grid tracing (average difference of 41%), and 2D planimetric measurements (average difference of 52%). Acetate tracing showed more accurate measurements compared to the ruler method (average difference of 41% (acetate tracing) compared to 75% (ruler method)). Improving
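
    For reference, the "perpendicular" ruler method mentioned above reduces to a single multiplication; the Python sketch below (with illustrative numbers, not from the study) shows why it tends to overestimate the area of an irregular wound compared with tracing or scanning.

        def perpendicular_method_area(length_cm, width_cm):
            # Greatest length times greatest width measured perpendicular to it.
            return length_cm * width_cm

        # Hypothetical example: a wound 4.0 cm long and 2.5 cm wide gives a ruler
        # estimate of 10.0 cm^2, whereas a traced (planimetric) outline of the same
        # irregular wound would typically enclose a smaller area.
        print(perpendicular_method_area(4.0, 2.5))   # 10.0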

  1. In vivo 3D PIXE-micron-CT imaging of Drosophila melanogaster using a contrast agent

    Science.gov (United States)

    Matsuyama, Shigeo; Hamada, Naoki; Ishii, Keizo; Nozawa, Yuichiro; Ohkura, Satoru; Terakawa, Atsuki; Hatori, Yoshinobu; Fujiki, Kota; Fujiwara, Mitsuhiro; Toyama, Sho

    2015-04-01

    In this study, we developed a three-dimensional (3D) computed tomography (CT) in vivo imaging system for imaging small insects with micrometer resolution. The 3D CT imaging system, referred to as 3D PIXE-micron-CT (PIXEμCT), uses characteristic X-rays produced by ion microbeam bombardment of a metal target. PIXEμCT was used to observe the body organs and internal structure of a living Drosophila melanogaster. Although the organs of the thorax were clearly imaged, the digestive organs in the abdominal cavity could not be clearly discerned initially, with the exception of the rectum and the Malpighian tubule. To enhance the abdominal images, a barium sulfate powder radiocontrast agent was added. For the first time, 3D images of the ventriculus of a living D. melanogaster were obtained. Our results showed that PIXEμCT can provide in vivo 3D-CT images that reflect correctly the structure of individual living organs, which is expected to be very useful in biological research.

  2. 3D city models completion by fusing lidar and image data

    Science.gov (United States)

    Grammatikopoulos, L.; Kalisperakis, I.; Petsa, E.; Stentoumis, C.

    2015-05-01

    A fundamental step in the generation of visually detailed 3D city models is the acquisition of high fidelity 3D data. Typical approaches employ DSM representations usually derived from Lidar (Light Detection and Ranging) airborne scanning or image based procedures. In this contribution, we focus on the fusion of data from both these methods in order to enhance or complete them. Particularly, we combine an existing Lidar and orthomosaic dataset (used as reference), with a new aerial image acquisition (including both vertical and oblique imagery) of higher resolution, which was carried out in the area of Kallithea, in Athens, Greece. In a preliminary step, a digital orthophoto and a DSM are generated from the aerial images in an arbitrary reference system, by employing a Structure from Motion and dense stereo matching framework. The image-to-Lidar registration is performed by 2D feature (SIFT and SURF) extraction and matching between the two orthophotos. The established point correspondences are assigned 3D coordinates through interpolation on the reference Lidar surface, are then backprojected onto the aerial images, and finally matched with 2D image features located in the vicinity of the backprojected 3D points. Consequently, these points serve as Ground Control Points with appropriate weights for final orientation and calibration of the images through a bundle adjustment solution. By these means, the aerial imagery which is optimally aligned to the reference dataset can be used for the generation of an enhanced and more accurately textured 3D city model.

  3. Gonio photometric imaging for recording of reflectance spectra of 3D objects

    Science.gov (United States)

    Miyake, Yoichi; Tsumura, Norimichi; Haneishi, Hideaki; Hayashi, Junichiro

    2002-06-01

    In recent years, there has been a growing need to develop systems for 3D capture of archives in museums and galleries. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and an Internet or virtual museum via the World Wide Web. To achieve this goal, we have developed multi-spectral imaging systems to record and estimate reflectance spectra of art paintings based on principal component analysis and the Wiener estimation method. In this paper, a gonio-photometric imaging method is introduced for recording 3D objects. Five-band images of the object are taken under seven different illumination angles. The set of five-band images is then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract gonio-photometric information about the object. Prediction of reproduced images of the object under several illuminants and illumination angles is demonstrated, and images synthesized with a 3D wire-frame model captured by a 3D digitizer are also presented.

  4. Image Sequence Fusion and Denoising Based on 3D Shearlet Transform

    Directory of Open Access Journals (Sweden)

    Liang Xu

    2014-01-01

    We propose a novel algorithm for simultaneous image sequence fusion and denoising in the 3D shearlet transform domain. In general, most existing image fusion methods only consider combining the important information of the source images and do not deal with artifacts. If the source images contain noise, the noise may also be transferred into the fused image together with useful pixels. In the 3D shearlet transform domain, we propose that a recursive filter is first applied to the high-pass subbands to obtain denoised high-pass coefficients. The high-pass subbands are then combined using a selecting-maximum fusion rule based on a 3D pulse-coupled neural network (PCNN), and the low-pass subband is fused using a weighted-sum rule. Experimental results demonstrate that the proposed algorithm yields encouraging results.
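
    The two fusion rules named above can be written in a few lines. The Python sketch below operates on already-transformed subband coefficient arrays and replaces the PCNN-driven choice with a plain maximum-absolute-value selection, so it is only an approximation of the paper's rule; all names are illustrative.

        import numpy as np

        def fuse_highpass(coeff_a, coeff_b):
            # Selecting-maximum rule: keep the coefficient with the larger magnitude.
            return np.where(np.abs(coeff_a) >= np.abs(coeff_b), coeff_a, coeff_b)

        def fuse_lowpass(coeff_a, coeff_b, weight=0.5):
            # Weighted-sum rule for the low-pass subband.
            return weight * coeff_a + (1.0 - weight) * coeff_b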

  5. Technical Note: Characterization of custom 3D printed multimodality imaging phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Bieniosek, Matthew F. [Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, California 94305 (United States); Lee, Brian J. [Department of Mechanical Engineering, Stanford University, 440 Escondido Mall, Stanford, California 94305 (United States); Levin, Craig S., E-mail: cslevin@stanford.edu [Departments of Radiology, Physics, Bioengineering and Electrical Engineering, Stanford University, 300 Pasteur Dr., Stanford, California 94305-5128 (United States)

    2015-10-15

    Purpose: Imaging phantoms are important tools for researchers and technicians, but they can be costly and difficult to customize. Three dimensional (3D) printing is a widely available rapid prototyping technique that enables the fabrication of objects with 3D computer generated geometries. It is ideal for quickly producing customized, low cost, multimodal, reusable imaging phantoms. This work validates the use of 3D printed phantoms by comparing CT and PET scans of a 3D printed phantom and a commercial “Micro Deluxe” phantom. This report also presents results from a customized 3D printed PET/MRI phantom, and a customized high resolution imaging phantom with sub-mm features. Methods: CT and PET scans of a 3D printed phantom and a commercial Micro Deluxe (Data Spectrum Corporation, USA) phantom with 1.2, 1.6, 2.4, 3.2, 4.0, and 4.8 mm diameter hot rods were acquired. The measured PET and CT rod sizes, activities, and attenuation coefficients were compared. A PET/MRI scan of a custom 3D printed phantom with hot and cold rods was performed, with photon attenuation and normalization measurements performed with a separate 3D printed normalization phantom. X-ray transmission scans of a customized two level high resolution 3D printed phantom with sub-mm features were also performed. Results: Results show very good agreement between commercial and 3D printed micro deluxe phantoms with less than 3% difference in CT measured rod diameter, less than 5% difference in PET measured rod diameter, and a maximum of 6.2% difference in average rod activity from a 10 min, 333 kBq/ml (9 μCi/ml) Siemens Inveon (Siemens Healthcare, Germany) PET scan. In all cases, these differences were within the measurement uncertainties of our setups. PET/MRI scans successfully identified 3D printed hot and cold rods on PET and MRI modalities. X-ray projection images of a 3D printed high resolution phantom identified features as small as 350 μm wide. Conclusions: This work shows that 3D printed

  6. Quantitative 3D imaging of whole, unstained cells by using X-ray diffraction microscopy.

    Science.gov (United States)

    Jiang, Huaidong; Song, Changyong; Chen, Chien-Chun; Xu, Rui; Raines, Kevin S; Fahimian, Benjamin P; Lu, Chien-Hung; Lee, Ting-Kuo; Nakashima, Akio; Urano, Jun; Ishikawa, Tetsuya; Tamanoi, Fuyuhiko; Miao, Jianwei

    2010-06-22

    Microscopy has greatly advanced our understanding of biology. Although significant progress has recently been made in optical microscopy to break the diffraction-limit barrier, reliance of such techniques on fluorescent labeling technologies prohibits quantitative 3D imaging of the entire contents of cells. Cryoelectron microscopy can image pleomorphic structures at a resolution of 3-5 nm, but is only applicable to thin or sectioned specimens. Here, we report quantitative 3D imaging of a whole, unstained cell at a resolution of 50-60 nm by X-ray diffraction microscopy. We identified the 3D morphology and structure of cellular organelles including cell wall, vacuole, endoplasmic reticulum, mitochondria, granules, nucleus, and nucleolus inside a yeast spore cell. Furthermore, we observed a 3D structure protruding from the reconstructed yeast spore, suggesting the spore germination process. Using cryogenic technologies, a 3D resolution of 5-10 nm should be achievable by X-ray diffraction microscopy. This work hence paves a way for quantitative 3D imaging of a wide range of biological specimens at nanometer-scale resolutions that are too thick for electron microscopy.

  7. 3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies

    Science.gov (United States)

    Periverzov, Frol; Ilieş, Horea T.

    2012-09-01

    Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

  8. Online 3D terrain visualisation using Unity 3D game engine: A comparison of different contour intervals terrain data draped with UAV images

    Science.gov (United States)

    Hafiz Mahayudin, Mohd; Che Mat, Ruzinoor

    2016-06-01

    The main objective of this paper is to discuss the effectiveness of visualising terrain draped with Unmanned Aerial Vehicle (UAV) images generated from different contour intervals using the Unity 3D game engine in an online environment. The study area tested in this project was an oil palm plantation at Sintok, Kedah. The contour data used for this study are divided into three different intervals: 1 m, 3 m and 5 m. ArcGIS software was used to clip the contour data and the UAV image data to a similar size for the overlaying process. The Unity 3D game engine was used as the main platform for developing the system due to its capability to be launched on different platforms. The clipped contour data and UAV image data were processed and exported into web format using Unity 3D. The process continued by publishing them to a web server for comparing the effectiveness of the different 3D terrain data (contour data) draped with UAV images. The effectiveness is compared based on data size, loading time (office and out-of-office hours), response time, visualisation quality, and frames per second (fps). The results suggest which contour interval is better for developing an effective online 3D terrain visualisation draped with UAV images using the Unity 3D game engine. This helps decision makers and planners in this field decide which contour interval is applicable for their task.

  9. Portable high-intensity focused ultrasound system with 3D electronic steering, real-time cavitation monitoring, and 3D image reconstruction algorithms: a preclinical study in pigs

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jin Woo; Lee, Jae Young; Hwang, Eui Jin; Hwang, In Pyeong; Woo, Sung Min; Lee, Chang Joo; Park, Eun Joo; Choi, Byung Ihn [Dept. of Radiology and Institute of Radiation Medicine, Seoul National University Hospital, Seoul (Korea, Republic of)

    2014-10-15

    The aim of this study was to evaluate the safety and accuracy of a new portable ultrasonography-guided high-intensity focused ultrasound (USg-HIFU) system with a 3-dimensional (3D) electronic steering transducer, a simultaneous ablation and imaging module, real-time cavitation monitoring, and 3D image reconstruction algorithms. To address the accuracy of the transducer, hydrophones in a water chamber were used to assess the generation of sonic fields. An animal study was also performed in five pigs by ablating in vivo thighs by single-point sonication (n=10) or volume sonication (n=10) and ex vivo kidneys by single-point sonication (n=10). Histological and statistical analyses were performed. In the hydrophone study, peak voltages were detected within 1.0 mm from the targets on the y- and z-axes and within 2.0-mm intervals along the x-axis (z-axis, direction of ultrasound propagation; y- and x-axes, perpendicular to the direction of ultrasound propagation). Twenty-nine of 30 HIFU sessions successfully created ablations at the target. The in vivo porcine thigh study showed only a small discrepancy (width, 0.5-1.1 mm; length, 3.0 mm) between the planning ultrasonograms and the pathological specimens. Inordinate thermal damage was not observed in the adjacent tissues or sonic pathways in the in vivo thigh and ex vivo kidney studies. Our study suggests that this new USg-HIFU system may be a safe and accurate technique for ablating soft tissues and encapsulated organs.

  10. Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.

    Science.gov (United States)

    Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias

    2017-01-23

    Multimodal medical image fusion combines information from one or more images in order to improve the diagnostic value. While previous applications mainly focus on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasonic and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. Hereby, we are able to relate the 2D coordinate system of the infrared camera with the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.

  11. 3D weighting in cone beam image reconstruction algorithms: ray-driven vs. pixel-driven.

    Science.gov (United States)

    Tang, Xiangyang; Nilsen, Roy A; Smolin, Alex; Lifland, Ilya; Samsonov, Dmitry; Taha, Basel

    2008-01-01

    A 3D weighting scheme has been proposed previously to reconstruct images for both helical and axial scans in state-of-the-art volumetric CT scanners for diagnostic imaging. Such 3D weighting can be implemented in either a ray-driven or a pixel-driven manner, depending on the available computational resources. An experimental study is conducted in this paper to evaluate the difference between the ray-driven and pixel-driven implementations of the 3D weighting from the perspective of image quality, while their computational complexity is analyzed theoretically. Computer-simulated data and several phantoms, such as the helical body phantom and humanoid chest phantom, are employed in the experimental study, showing that both the ray-driven and pixel-driven 3D weighting provide superior image quality for diagnostic imaging in clinical applications. With the availability of image reconstruction engines of increasing computational power, it is believed that pixel-driven 3D weighting will be predominantly employed in state-of-the-art volumetric CT scanners across clinical applications.

  12. Growth behavior of intermetallic compounds at Sn–Ag/Cu joint interfaces revealed by 3D imaging

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Q.K., E-mail: qkzhang@alum.imr.ac.cn [Shenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences, Shenyang 110016 (China); State Key Laboratory of Advanced Brazing Filler Metals & Technology, Zhengzhou Research Institute of Mechanical Engineering, Zhengzhou 450001 (China); Long, W.M. [State Key Laboratory of Advanced Brazing Filler Metals & Technology, Zhengzhou Research Institute of Mechanical Engineering, Zhengzhou 450001 (China); Zhang, Z.F., E-mail: zhfzhang@imr.ac.cn [Shenyang National Laboratory for Materials Science, Institute of Metal Research, Chinese Academy of Sciences, Shenyang 110016 (China)

    2015-10-15

    In this study, the morphologies of intermetallic compounds (IMCs) at as-soldered and thermally aged Sn–Ag/Cu joint interfaces were observed by SEM and measured using a laser confocal microscope, and their three-dimensional (3D) shapes were revealed using 3D imaging technology. The observations reveal that during the soldering process the Cu6Sn5 grains at the joint interface evolve from hemispheroids to bamboo shoot-shaped bodies with increasing liquid-state reaction time, and their grain size increases sharply. After thermal aging, the Cu6Sn5 grains change into equiaxed grains, while the tops of some prominent Cu6Sn5 grains change little. Due to the higher activity of the Sn atoms at the grain boundaries, the growth rate of IMC grains around the grain boundaries of the solder is higher during the aging process. From the evolution in morphology of the IMC layer, it is demonstrated that the IMC layer grows through grain boundary diffusion of the Cu and Sn atoms during the aging process, while volume diffusion contributes very little. The 3D imaging technology is used to reveal the shape and dimensions of the IMC grains. - Highlights: • Morphologies of IMCs at the Sn–Ag/Cu interface were revealed by 3D imaging. • Preferential growth of IMCs around the solder grain boundaries was observed. • Growth behaviors of IMCs during the reflowing and aging processes were investigated.

  13. Analysis, Modeling and Dynamic Optimization of 3D Time-of-Flight Imaging Systems

    OpenAIRE

    Schmidt, Mirko

    2011-01-01

    The present thesis is concerned with the optimization of 3D Time-of-Flight (ToF) imaging systems. These novel cameras determine range images by actively illuminating a scene and measuring the time until the backscattered light is detected. Depth maps are constructed from multiple raw images. Usually two of such raw images are acquired simultaneously using special correlating sensors. This thesis covers four main contributions: A physical sensor model is presented which enables the analysis a...

  14. Detection of Connective Tissue Disorders from 3D Aortic MR Images Using Independent Component Analysis

    DEFF Research Database (Denmark)

    Hansen, Michael Sass; Zhao, Fei; Zhang, Honghai

    2006-01-01

    A computer-aided diagnosis (CAD) method is reported that allows the objective identification of subjects with connective tissue disorders from 3D aortic MR images using segmentation and independent component analysis (ICA). The first step to extend the model to 4D (3D + time) has also been taken. ICA is an effective tool for connective tissue disease detection in the presence of sparse data, using prior knowledge to order the components, and the components can be inspected visually. 3D+time MR image data sets acquired from 31 normal and connective tissue disorder subjects at end-diastole (R-wave peak) and at 45% of the R-R interval were used to evaluate the performance of our method. The automated 3D segmentation result produced accurate aortic surfaces covering the aorta. The CAD method distinguished between normal and connective tissue disorder subjects with a classification

  15. Midsagittal plane extraction from brain images based on 3D SIFT.

    Science.gov (United States)

    Wu, Huisi; Wang, Defeng; Shi, Lin; Wen, Zhenkun; Ming, Zhong

    2014-03-21

    Midsagittal plane (MSP) extraction from 3D brain images is considered a promising technique for human brain symmetry analysis. In this paper, we present a fast and robust MSP extraction method based on the 3D scale-invariant feature transform (SIFT). Unlike the existing brain MSP extraction methods, which mainly rely on gray-level similarity, 3D edge registration or parameterized surface matching to determine the fissure plane, our proposed method is based on distinctive 3D SIFT features, in which the fissure plane is determined by parallel 3D SIFT matching and iterative least-median-of-squares plane regression. By considering the relative scales, orientations and flipped descriptors between two 3D SIFT features, we propose a novel metric to measure the symmetry magnitude of 3D SIFT features. By clustering and indexing the extracted SIFT features using a k-dimensional tree (KD-tree) implemented on graphics processing units, we can match multiple pairs of 3D SIFT features in parallel and solve for the optimal MSP on-the-fly. The proposed method is evaluated on synthetic and in vivo datasets of normal and pathological cases, and validated by comparisons with state-of-the-art methods. Experimental results demonstrated that our method achieved real-time performance with better accuracy, yielding an average yaw angle error below 0.91° and an average roll angle error of no more than 0.89°.
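
    A minimal Python sketch of the least-median-of-squares plane regression step, applied to the midpoints of matched symmetric feature pairs; the SIFT extraction, matching and GPU indexing are omitted, and all names and trial counts are illustrative assumptions.

        import numpy as np

        def lmeds_plane(points, n_trials=500, seed=None):
            # points: (N, 3) array, e.g. midpoints of matched symmetric 3D feature pairs.
            # Returns (normal, d) for the plane n.x + d = 0 minimizing the median
            # squared point-to-plane distance over random 3-point hypotheses.
            rng = np.random.default_rng(seed)
            best, best_med = None, np.inf
            for _ in range(n_trials):
                p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
                n = np.cross(p1 - p0, p2 - p0)
                norm = np.linalg.norm(n)
                if norm < 1e-12:
                    continue                      # degenerate (collinear) sample
                n = n / norm
                d = -np.dot(n, p0)
                med = np.median((points @ n + d) ** 2)
                if med < best_med:
                    best_med, best = med, (n, d)
            return best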

  16. High-speed 3D digital image correlation vibration measurement: Recent advancements and noted limitations

    Science.gov (United States)

    Beberniss, Timothy J.; Ehrhardt, David A.

    2017-03-01

    A review of the extensive studies on the feasibility and practicality of utilizing high-speed 3-dimensional digital image correlation (3D-DIC) for various random vibration measurement applications is presented. Demonstrated capabilities include finite element model updating utilizing full-field 3D-DIC static displacements; modal survey natural frequencies, damping, and mode shape results from 3D-DIC baselined against laser Doppler vibrometry (LDV); a comparison between foil strain gage and 3D-DIC strain; and finally a unique application to a high-speed wind tunnel fluid-structure interaction study. Results show good agreement between 3D-DIC and more traditional vibration measurement techniques. Unfortunately, 3D-DIC vibration measurement is not without its limitations, which are also identified and explored in this study. The out-of-plane sensitivity required for vibration measurement with 3D-DIC is orders of magnitude less than that of LDV, making higher-frequency displacements difficult to sense. Furthermore, the digital cameras used to capture the DIC images have no filter to eliminate temporal aliasing of the digitized signal. Ultimately, DIC is demonstrated as a valid alternative means to measure structural vibrations, while one unique application achieves success where more traditional methods would fail.

  17. Single-pixel 3D imaging with time-based depth resolution

    CERN Document Server

    Sun, Ming-Jie; Gibson, Graham M; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J

    2016-01-01

    Time-of-flight three-dimensional imaging is an important tool for many applications, such as object recognition and remote sensing. Unlike the conventional imaging approach using a pixelated detector array, single-pixel imaging based on projected patterns, such as Hadamard patterns, utilises an alternative strategy to acquire information with a sampling basis. Here we show a modified single-pixel camera using a pulsed illumination source and a high-speed photodiode, capable of reconstructing 128x128 pixel resolution 3D scenes to an accuracy of ~3 mm at a range of ~5 m. Furthermore, we demonstrate continuous real-time 3D video with a frame-rate up to 12 Hz. The simplicity of the system hardware could enable low-cost 3D imaging devices for precision ranging at wavelengths beyond the visible spectrum.
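
    The sampling-basis idea can be illustrated with a small Python simulation: project each Hadamard pattern, record one "photodiode" value per pattern, and invert the transform. This is a simplified sketch (±1 patterns, no differential measurement, no time-of-flight depth), with a small scene size for memory; nothing here is taken from the paper's implementation.

        import numpy as np
        from scipy.linalg import hadamard

        def single_pixel_reconstruct(scene):
            # scene: (n, n) array with n*n a power of two, e.g. 32 x 32.
            n = scene.shape[0]
            patterns = hadamard(n * n)                # one +/-1 pattern per row
            x = scene.reshape(-1)
            measurements = patterns @ x               # one single-pixel value per pattern
            recovered = patterns.T @ measurements / (n * n)   # rows are mutually orthogonal
            return recovered.reshape(n, n)

        # demo = single_pixel_reconstruct(np.random.rand(32, 32))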

  18. QWIP focal plane array theoretical model of 3-D imaging LADAR system

    OpenAIRE

    El Mashade, Mohamed Bakry; AbouElez, Ahmed Elsayed

    2016-01-01

    The aim of this research is to develop a model for a direct-detection three-dimensional (3-D) imaging LADAR system using a Quantum Well Infrared Photodetector (QWIP) Focal Plane Array (FPA). This model is employed to study how to add 3-D imaging capability to existing conventional thermal imaging systems of the same basic form, which are sensitive to the 3–5 μm (mid-wavelength infrared, MWIR) or 8–12 μm (long-wavelength infrared, LWIR) spectral bands. The integrated signal photoelectrons in cas...

  19. Correlative 3D imaging: CLSM and FIB-SEM tomography using high-pressure frozen, freeze-substituted biological samples.

    Science.gov (United States)

    Lucas, Miriam S; Guenthert, Maja; Gasser, Philippe; Lucas, Falk; Wepf, Roger

    2014-01-01

    Correlative light and electron microscopy aims at combining data from different imaging modalities, ideally from the same area of the one sample, in order to achieve a more holistic view of the hierarchical structural organization of cells and tissues. Modern 3D imaging techniques opened up new possibilities to expand morphological studies into the third dimension at the nanometer scale. Here we present an approach to correlate 3D light microscopy data with volume data from focused ion beam-scanning electron microscopy. An adapted sample preparation method based on high-pressure freezing for structure preservation, followed by freeze-substitution for multimodal en bloc imaging, is described. It is based on including fluorescent labeling during freeze-substitution, which enables histological context description of the structure of interest by confocal laser scanning microscopy prior to high-resolution electron microscopy. This information can be employed to relocate the respective structure in the electron microscope. This approach is most suitable for targeted small 3D volume correlation and has the potential to extract statistically relevant data of structural details for systems biology.

  20. Sample Preparation Strategies for Mass Spectrometry Imaging of 3D Cell Culture Models

    OpenAIRE

    Ahlf Wheatcraft, Dorothy R.; Liu, Xin; Hummon, Amanda B.

    2014-01-01

    Three-dimensional cell cultures are attractive models for biological research. They combine the flexibility and cost-effectiveness of cell culture with some of the spatial and molecular complexity of tissue. For example, many cell lines form 3D structures given appropriate in vitro conditions. Colon cancer cell lines form 3D cell culture spheroids, in vitro mimics of avascular tumor nodules. While immunohistochemistry and other classical imaging methods are popular for monitoring the distribu...

  1. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    Directory of Open Access Journals (Sweden)

    Saeed Seyyedi

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast for detecting breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV), are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.
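
    As a rough illustration of the iterative reconstruction family mentioned in this record, the sketch below performs ART (Kaczmarz-style) sweeps on a small linear system A x = b standing in for a projection matrix and measured projections; the tiny system and all names are hypothetical and unrelated to the simulator's actual C++ implementation.

      import numpy as np

      def art_sweep(A, b, x, relax=0.5):
          """One ART sweep: project x onto the hyperplane of each ray equation in turn."""
          for i in range(A.shape[0]):
              a = A[i]
              denom = a @ a
              if denom > 0:
                  x = x + relax * (b[i] - a @ x) / denom * a
          return x

      # Hypothetical toy system standing in for (projection matrix, projections).
      rng = np.random.default_rng(0)
      A = rng.random((40, 16))            # 40 rays, 16 voxels
      x_true = rng.random(16)
      b = A @ x_true

      x = np.zeros(16)
      for _ in range(200):
          x = art_sweep(A, b, x)
      print(np.linalg.norm(x - x_true))   # should shrink towards zero for this consistent system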

  2. 3D automatic quantification applied to optically sectioned images to improve microscopy analysis

    Directory of Open Access Journals (Sweden)

    JE Diaz-Zamboni

    2009-08-01

    New fluorescence microscopy techniques, such as confocal or digital deconvolution microscopy, make it easy to obtain three-dimensional (3D) information from specimens. However, there are few 3D quantification tools that allow information to be extracted from these volumes. Therefore, the amount of information acquired by these techniques is difficult to manipulate and analyze manually. The present study describes a model-based method which, for the first time, shows 3D visualization and quantification of fluorescent apoptotic body signals from optical serial sections of porcine hepatocyte spheroids, correlating them to their morphological structures. The method consists of an algorithm that counts apoptotic bodies in a spheroid structure and extracts information from them, such as their centroids in cartesian and radial coordinates, relative to the spheroid centre, and their integrated intensity. 3D visualization of the extracted information allowed us to quantify the distribution of apoptotic bodies in three different zones of the spheroid.
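
    A minimal sketch of the kind of per-body measurement described here (centroid, radial distance from the spheroid centre and integrated intensity) might look like the following Python fragment; the inputs (a binary 3D mask of detected bodies, the matching intensity volume and the spheroid centre) and all names are illustrative, not the authors' code.

      import numpy as np
      from scipy import ndimage

      def body_centroids_and_radii(mask, intensity, spheroid_centre):
          """Label connected apoptotic-body signals in a binary 3D mask and return
          their centroids, radial distances from the spheroid centre and integrated
          intensities (all in voxel units); assumes at least one body is present."""
          labels, n = ndimage.label(mask)
          index = range(1, n + 1)
          centroids = np.array(ndimage.center_of_mass(mask, labels, index))
          integrated = ndimage.sum(intensity, labels, index)
          radii = np.linalg.norm(centroids - np.asarray(spheroid_centre), axis=1)
          return centroids, radii, integrated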

  3. Finite volume method in 3-D curvilinear coordinates with multiblocking procedure for radiative transport problems

    Energy Technology Data Exchange (ETDEWEB)

    Talukdar, P.; Steven, M.; Issendorff, F.V.; Trimis, D. [Institute of Fluid Mechanics (LSTM), University of Erlangen-Nuremberg, Cauerstrasse 4, D 91058 Erlangen (Germany)

    2005-10-01

    The finite volume method for radiation is implemented for complex 3-D problems in order to use it for combined heat transfer problems in connection with CFD codes. The method is applied to a 3-D block-structured grid in a radiatively participating medium. The method is implemented in non-orthogonal curvilinear coordinates so that it can handle irregular structures with a body-fitted structured grid. The multiblocking is performed with overlapping blocks to exchange information between the blocks. Five test problems are considered in this work. In the first problem, the present work is validated against results from the literature. To check the accuracy of multiblocking, a single block is divided into four blocks and the results are validated against those of the single block simulated alone in the second problem. Complicated geometries are considered to show the applicability of the present procedure in the last three problems. Both radiative and non-radiative equilibrium situations are considered along with an absorbing, emitting and scattering medium. (author)

  4. Benchmarking of state-of-the-art needle detection algorithms in 3D ultrasound data volumes

    Science.gov (United States)

    Pourtaherian, Arash; Zinger, Svitlana; de With, Peter H. N.; Korsten, Hendrikus H. M.; Mihajlovic, Nenad

    2015-03-01

    Ultrasound-guided needle interventions are widely practiced in medical diagnostics and therapy, i.e. for biopsy guidance, regional anesthesia or for brachytherapy. Needle guidance using 2D ultrasound can be very challenging due to the poor needle visibility and the limited field of view. Since 3D ultrasound transducers are becoming more widely used, needle guidance can be improved and simplified with appropriate computer-aided analyses. In this paper, we compare two state-of-the-art 3D needle detection techniques: a technique based on line filtering from the literature and a system employing the Gabor transformation. Both algorithms utilize supervised classification to pre-select candidate needle voxels in the volume and then fit a model of the needle on the selected voxels. The major differences between the two approaches are in extracting the feature vectors for classification and selecting the criterion for fitting. We evaluate the performance of the two techniques using manually-annotated ground truth in several ex-vivo situations of different complexities, containing three different needle types with various insertion angles. This extensive evaluation provides a better understanding of the limitations and advantages of each technique under different acquisition conditions, leading to the development of improved techniques for more reliable and accurate localization. Benchmarking shows that the Gabor features are better capable of distinguishing the needle voxels in all datasets. Moreover, it is shown that the complete processing chain of the Gabor-based method outperforms the line filtering in accuracy and stability of the detection results.
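
    Both pipelines end with the same kind of step: fitting a straight needle model to the classifier's candidate voxels. The fragment below sketches only that stage, as a generic RANSAC line fit over candidate voxel coordinates (a stand-in for either paper's actual fitting criterion; all names are hypothetical).

      import numpy as np

      def ransac_line(points, n_iter=500, tol=1.5, seed=0):
          """Fit a 3D line to candidate needle voxels; return (point, direction, inlier mask)."""
          rng = np.random.default_rng(seed)
          best = (None, None, np.zeros(len(points), dtype=bool))
          for _ in range(n_iter):
              p, q = points[rng.choice(len(points), size=2, replace=False)]
              d = q - p
              norm = np.linalg.norm(d)
              if norm < 1e-9:
                  continue
              d = d / norm
              # Distance of every point to the candidate line through p with direction d.
              diff = points - p
              dist = np.linalg.norm(diff - np.outer(diff @ d, d), axis=1)
              inliers = dist < tol
              if inliers.sum() > best[2].sum():
                  best = (p, d, inliers)
          return best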

  5. Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery.

    Science.gov (United States)

    Wang, Junchen; Suenaga, Hideyuki; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro; Liao, Hongen

    2014-04-01

    Computer-assisted oral and maxillofacial surgery (OMS) has been rapidly evolving since the last decade. State-of-the-art surgical navigation in OMS still suffers from bulky tracking sensors, troublesome image registration procedures, patient movement, loss of depth perception in visual guidance, and low navigation accuracy. We present an augmented reality navigation system with automatic marker-free image registration using 3-D image overlay and stereo tracking for dental surgery. A customized stereo camera is designed to track both the patient and instrument. Image registration is performed by patient tracking and real-time 3-D contour matching, without requiring any fiducial and reference markers. Real-time autostereoscopic 3-D imaging is implemented with the help of a consumer-level graphics processing unit. The resulting 3-D image of the patient's anatomy is overlaid on the surgical site by a half-silvered mirror using image registration and IP-camera registration to guide the surgeon by exposing hidden critical structures. The 3-D image of the surgical instrument is also overlaid over the real one for an augmented display. The 3-D images present both stereo and motion parallax from which depth perception can be obtained. Experiments were performed to evaluate various aspects of the system; the overall image overlay error of the proposed system was 0.71 mm.

  6. 3D non-rigid registration using surface and local salient features for transrectal ultrasound image-guided prostate biopsy

    Science.gov (United States)

    Yang, Xiaofeng; Akbari, Hamed; Halig, Luma; Fei, Baowei

    2011-03-01

    We present a 3D non-rigid registration algorithm for potential use in combining PET/CT and transrectal ultrasound (TRUS) images for targeted prostate biopsy. Our registration is a hybrid approach that simultaneously optimizes the similarities from point-based registration and volume matching methods. The 3D registration is obtained by minimizing the distances of corresponding points at the surface and within the prostate and by maximizing the overlap ratio of the bladder neck on both images. The hybrid approach captures not only deformation at the prostate surface and internal landmarks but also deformation at the bladder neck region. The registration uses a soft assignment and deterministic annealing process. The correspondences are iteratively established in a fuzzy-to-deterministic approach. B-splines are used to generate a smooth non-rigid spatial transformation. In this study, we tested our registration with pre- and post-biopsy TRUS images of the same patients. Registration accuracy is evaluated using manually defined anatomic landmarks, i.e. calcifications. The root-mean-squared (RMS) value of the difference image between the reference and floating images was decreased by 62.6+/-9.1% after registration. The mean target registration error (TRE) was 0.88+/-0.16 mm, i.e. less than 3 voxels with a voxel size of 0.38×0.38×0.38 mm³ for all five patients. The experimental results demonstrate the robustness and accuracy of the 3D non-rigid registration algorithm.
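
    The two accuracy figures reported above are simple to compute once corresponding landmarks and the registered volumes are available; a hedged sketch (illustrative names only, not the authors' code) is shown below.

      import numpy as np

      def target_registration_error(landmarks_ref, landmarks_reg):
          """Mean Euclidean distance (in mm) between corresponding anatomic landmarks."""
          return np.linalg.norm(landmarks_ref - landmarks_reg, axis=1).mean()

      def rms_difference(image_a, image_b):
          """Root-mean-square of the voxel-wise difference between two volumes."""
          diff = image_a.astype(float) - image_b.astype(float)
          return np.sqrt(np.mean(diff ** 2))

      # The reported RMS reduction is a percentage of the pre-registration value:
      # reduction = 100 * (rms_before - rms_after) / rms_before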

  7. 3-D MRI/CT fusion imaging of the lumbar spine

    Energy Technology Data Exchange (ETDEWEB)

    Yamanaka, Yuki; Kamogawa, Junji; Misaki, Hiroshi; Kamada, Kazuo; Okuda, Shunsuke; Morino, Tadao; Ogata, Tadanori; Yamamoto, Haruyasu [Ehime University, Department of Bone and Joint Surgery, Toon-shi, Ehime (Japan); Katagi, Ryosuke; Kodama, Kazuaki [Katagi Neurological Surgery, Imabari-shi, Ehime (Japan)

    2010-03-15

    The objective was to demonstrate the feasibility of MRI/CT fusion for depicting lumbar nerve root compromise. We combined 3-dimensional (3-D) computed tomography (CT) imaging of bone with 3-D magnetic resonance imaging (MRI) of neural architecture (cauda equina and nerve roots) for two patients using VirtualPlace software. Although the pathological condition of nerve roots could not be assessed using MRI, myelography or CT myelography, 3-D MRI/CT fusion imaging enabled unambiguous, 3-D confirmation of the pathological state and courses of nerve roots, both inside and outside the foraminal arch, as well as thickening of the ligamentum flavum and the locations, forms and numbers of dorsal root ganglia. Positional relationships between intervertebral discs or bony spurs and nerve roots could also be depicted. Use of 3-D MRI/CT fusion imaging for the lumbar vertebral region successfully revealed the relationship between bone construction (bones, intervertebral joints, and intervertebral disks) and neural architecture (cauda equina and nerve roots) on a single film, three-dimensionally and in color. Such images may be useful in elucidating complex neurological conditions such as degenerative lumbar scoliosis (DLS), as well as in diagnosis and the planning of minimally invasive surgery. (orig.)

  8. Integration of Video Images and CAD Wireframes for 3d Object Localization

    Science.gov (United States)

    Persad, R. A.; Armenakis, C.; Sohn, G.

    2012-07-01

    The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and is considered to be at a state of maturity. This paper presents a model-driven solution for localizing moving objects detected from monocular, rotating and zooming video images in a 3D reference frame. To realize such a system, the recovery of 2D to 3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ) surveillance cameras where parameters change spontaneously upon camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching scheme uses a hypothesis-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions with lighting changes and occlusions. Reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT). Dependability of the retrieved parameters for 3D localization has also been assessed by comparing the difference between 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters and those computed using GT parameters.

  9. The 3D scanner prototype utilize object profile imaging using line laser and octave software

    Science.gov (United States)

    Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus

    2016-11-01

    A three-dimensional scanner, or 3D scanner, is a device for reconstructing a real object in digital form on a computer. 3D scanning is a technology still under development, especially in developed countries, where current 3D scanner devices are advanced but very expensive. This study is basically a simple prototype of a 3D scanner with a very low investment cost. The 3D scanner prototype consists of a webcam, a rotating desk system controlled by a stepper motor and an Arduino UNO, and a line laser. The research is limited to objects whose surface lies at the same radius from the central pivot point. Scanning is performed by imaging the object profile with the line laser, which is then captured by the camera and processed by a computer (image processing) using Octave software. For each image acquisition, the scanned object on the rotating desk is rotated by a certain angle, so for one full turn multiple images covering all sides of the object are obtained. The profiles of all images are then extracted in order to obtain the digital object dimensions. The digital dimensions are calibrated against a length standard, called a gage block. The overall dimensions are then digitally reconstructed into a three-dimensional object. Validation of the scanned object reconstruction against the original object dimensions is expressed as a percentage error. Based on the validation results, the horizontal dimension error is about 5% to 23% and the vertical dimension error is about +/- 3%.
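
    The reconstruction step outlined above, combining each extracted laser profile with the turntable angle at which it was captured, reduces to a cylindrical-to-Cartesian conversion. A rough Python sketch is given below, assuming the profiles have already been converted to calibrated radii per height sample; all names are illustrative, and the actual prototype was implemented in Octave.

      import numpy as np

      def profile_to_points(radii, heights, angle_deg):
          """Convert one laser-line profile (radius per height sample), captured at a
          given turntable angle, into 3D points around the rotation axis."""
          theta = np.deg2rad(angle_deg)
          x = radii * np.cos(theta)
          y = radii * np.sin(theta)
          return np.column_stack([x, y, heights])

      def reconstruct(profiles, heights, step_deg):
          """Stack the profiles captured over one full turn into a single point cloud."""
          return np.vstack([profile_to_points(r, heights, i * step_deg)
                            for i, r in enumerate(profiles)])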

  10. 3D FACE RECOGNITION FROM RANGE IMAGES BASED ON CURVATURE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Suranjan Ganguly

    2014-02-01

    In this paper, we present a novel approach for three-dimensional face recognition by extracting curvature maps from range images. There are four types of curvature maps: Gaussian, Mean, Maximum and Minimum curvature maps. These curvature maps are used as features for 3D face recognition. The dimension of these feature vectors is reduced using the Singular Value Decomposition (SVD) technique. From the three computed SVD components, the non-negative values of the 'S' part of the SVD are ranked and used as the feature vector. In this proposed method, two pair-wise curvature combinations are evaluated: the Mean and Maximum curvature pair, and the Gaussian and Mean curvature pair. These are used to compare the results for better recognition rates. This automated 3D face recognition system is evaluated in different scenarios, such as frontal pose with expression and illumination variation, frontal faces along with registered faces, only registered faces, and registered faces from different pose orientations across the X, Y and Z axes. The 3D face images used for this research work are taken from the FRAV3D database. Pose-varying 3D facial images are registered to the frontal pose by applying a one-to-all registration technique; curvature mapping is then applied to the registered face images along with the remaining frontal face images. For classification and recognition, a five-layer feed-forward back-propagation neural network classifier is used, and the corresponding results are discussed in section 4.
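
    The feature construction described above, reducing a curvature map with SVD and keeping the ranked non-negative singular values, can be sketched in a few lines of Python (an illustrative fragment; the pairing of two curvature maps used in the paper is omitted here).

      import numpy as np

      def svd_curvature_features(curvature_map, k=20):
          """Return the k largest singular values of a 2D curvature map as a compact
          feature vector; numpy returns them already sorted in descending order and
          they are non-negative by construction."""
          s = np.linalg.svd(np.asarray(curvature_map, dtype=float), compute_uv=False)
          return s[:k]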

  11. 3D terahertz synthetic aperture imaging of objects with arbitrary boundaries

    Science.gov (United States)

    Kniffin, G. P.; Zurk, L. M.; Schecklman, S.; Henry, S. C.

    2013-09-01

    Terahertz (THz) imaging has shown promise for nondestructive evaluation (NDE) of a wide variety of manufactured products including integrated circuits and pharmaceutical tablets. Its ability to penetrate many non-polar dielectrics allows tomographic imaging of an object's 3D structure. In NDE applications, the material properties of the target(s) and background media are often well-known a priori and the objective is to identify the presence and/or 3D location of structures or defects within. The authors' earlier work demonstrated the ability to produce accurate 3D images of conductive targets embedded within a high-density polyethylene (HDPE) background. That work assumed a priori knowledge of the refractive index of the HDPE as well as the physical location of the planar air-HDPE boundary. However, many objects of interest exhibit non-planar interfaces, such as varying degrees of curvature over the extent of the surface. Such irregular boundaries introduce refraction effects and other artifacts that distort 3D tomographic images. In this work, two reconstruction techniques are applied to THz synthetic aperture tomography; a holographic reconstruction method that accurately detects the 3D location of an object's irregular boundaries, and a split-step Fourier algorithm that corrects the artifacts introduced by the surface irregularities. The methods are demonstrated with measurements from a THz time-domain imaging system.

  12. Automatic Texture Reconstruction of 3d City Model from Oblique Images

    Science.gov (United States)

    Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang

    2016-06-01

    In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches are prone to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework to generate textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. Firstly, a mesh parameterization procedure comprising mesh segmentation and mesh unfolding is performed to reduce geometric distortion in the process of mapping 2D texture to the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images with exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a city dataset. The resulting mesh model can be textured with the created texture atlas without resampling. Experimental results show that our method can effectively mitigate the occurrence of texture fragmentation and demonstrate that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.

  13. Non-contrast enhanced MR venography using 3D fresh blood imaging (FBI). Initial experience

    Energy Technology Data Exchange (ETDEWEB)

    Yokoyama, Kenichi; Nitatori, Toshiaki; Inaoka, Sayuki; Takahara, Taro; Hachiya, Junichi [Kyorin Univ., Mitaka, Tokyo (Japan). School of Medicine

    2001-10-01

    This study examined the efficacy of 3D fresh blood imaging (FBI) in patients with venous disease from the iliac region to the lower extremity. Fourteen patients with venous disease were examined (8 with deep venous thrombosis (DVT) and 6 with varices) by 3D-FBI and 2D-TOF MRA. All FBI images and 2D-TOF images were evaluated in terms of visualization of the disease and compared with conventional X-ray venography (CV). The total scan time of 3D-FBI ranged from 3 min 24 sec to 4 min 52 sec. 3D-FBI was positive in all 23 anatomical levels in which DVT was diagnosed by CV (100% sensitivity), as was 2D-TOF. The delineation of collateral veins was superior or equal to that of 2D-TOF. 3D-FBI allowed depiction of varices in five of six cases; however, in one case, the evaluation was limited because the separation of arteries from veins was difficult. The 3D-FBI technique, which allows iliac to peripheral MR venography without contrast medium within a short acquisition time, is considered clinically useful. (author)

  14. NLT and extrapolated DLT:3-D cinematography alternatives for enlarging the volume of calibration.

    Science.gov (United States)

    Hinrichs, R N; McLean, S P

    1995-10-01

    This study investigated the accuracy of the direct linear transformation (DLT) and non-linear transformation (NLT) methods of 3-D cinematography/videography. A comparison of standard DLT, extrapolated DLT, and NLT calibrations showed the standard (non-extrapolated) DLT to be the most accurate, especially when a large number of control points (40-60) were used. The NLT was more accurate than the extrapolated DLT when the level of extrapolation exceeded 100%. The results indicated that when possible one should use the DLT with a control object, sufficiently large as to encompass the entire activity being studied. However, in situations where the activity volume exceeds the size of one's DLT control object, the NLT method should be considered.
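
    For orientation, the 11-parameter DLT referred to above relates the image coordinates (u, v) of a marker to its object-space coordinates (x, y, z) through a projective mapping; in its usual textbook form (not reproduced from this paper) it reads

      u = \frac{L_1 x + L_2 y + L_3 z + L_4}{L_9 x + L_{10} y + L_{11} z + 1}, \qquad
      v = \frac{L_5 x + L_6 y + L_7 z + L_8}{L_9 x + L_{10} y + L_{11} z + 1},

    where the eleven coefficients L_1, ..., L_{11} are estimated from known control points; the extrapolated DLT in this study applies this mapping to points outside the volume spanned by those control points.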

  15. Parametric modelling and segmentation of vertebral bodies in 3D CT and MR spine images

    Science.gov (United States)

    Štern, Darko; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2011-12-01

    Accurate and objective evaluation of vertebral deformations is of significant importance in clinical diagnostics and therapy of pathological conditions affecting the spine. Although modern clinical practice is focused on three-dimensional (3D) computed tomography (CT) and magnetic resonance (MR) imaging techniques, the established methods for evaluation of vertebral deformations are limited to measuring deformations in two-dimensional (2D) x-ray images. In this paper, we propose a method for quantitative description of vertebral body deformations by efficient modelling and segmentation of vertebral bodies in 3D. The deformations are evaluated from the parameters of a 3D superquadric model, which is initialized as an elliptical cylinder and then gradually deformed by introducing transformations that yield a more detailed representation of the vertebral body shape. After modelling the vertebral body shape with 25 clinically meaningful parameters and the vertebral body pose with six rigid body parameters, the 3D model is aligned to the observed vertebral body in the 3D image. The performance of the method was evaluated on 75 vertebrae from CT and 75 vertebrae from T2-weighted MR spine images, extracted from the thoracolumbar part of normal and pathological spines. The results show that the proposed method can be used for 3D segmentation of vertebral bodies in CT and MR images, as the proposed 3D model is able to describe both normal and pathological vertebral body deformations. The method may therefore be used for initialization of whole vertebra segmentation or for quantitative measurement of vertebral body deformations.
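
    For reference, the base superquadric mentioned above is usually written as the implicit inside-outside function below (generic textbook form; the paper's additional clinically meaningful deformations are applied on top of it):

      F(x, y, z) = \left( \left| \frac{x}{a_1} \right|^{2/\epsilon_2} + \left| \frac{y}{a_2} \right|^{2/\epsilon_2} \right)^{\epsilon_2/\epsilon_1} + \left| \frac{z}{a_3} \right|^{2/\epsilon_1} = 1,

    where a_1, a_2, a_3 are the semi-axis lengths and \epsilon_1, \epsilon_2 control the roundness of the shape; choosing \epsilon_2 = 1 and a small \epsilon_1 gives approximately the elliptical-cylinder initialization described above.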

  16. Optimal Image Stitching for Concrete Bridge Bottom Surfaces Aided by 3d Structure Lines

    Science.gov (United States)

    Liu, Yahui; Yao, Jian; Liu, Kang; Lu, Xiaohu; Xia, Menghan

    2016-06-01

    Crack detection for bridge bottom surfaces via remote sensing techniques has undergone a revolution in the last few years. For such applications, a large number of images, acquired with high-resolution industrial cameras close to the bottom surfaces from some mobile platform, are required to be stitched into a wide-view single composite image. The conventional idea of stitching a panorama with the affine model or the homographic model suffers from a series of serious problems due to poor texture and the out-of-focus blurring introduced by depth of field. In this paper, we present a novel method to seamlessly stitch these images aided by 3D structure lines of the bridge bottom surfaces, which are extracted from 3D camera data. First, we propose to initially align each image geometrically based on its rough position and orientation acquired with both a laser range finder (LRF) and a high-precision incremental encoder, and these images are divided into several groups with the rough position and orientation data. Secondly, the 3D structure lines of the bridge bottom surfaces are extracted from the 3D point clouds acquired with 3D cameras, which impose additional strong constraints on the geometrical alignment of structure lines in adjacent images to perform a position and orientation optimization in each group and increase the local consistency. Thirdly, a homographic refinement between groups is applied to increase the global consistency. Finally, we apply a multi-band blending algorithm to generate a large-view single composite image as seamlessly as possible, which greatly eliminates both the luminance differences and the color deviations between images and further conceals image parallax. Experimental results on a set of representative images acquired from real bridge bottom surfaces illustrate the superiority of our proposed approaches.

  17. An open source workflow for 3D printouts of scientific data volumes

    Science.gov (United States)

    Loewe, P.; Klump, J. F.; Wickert, J.; Ludwig, M.; Frigeri, A.

    2013-12-01

    As the amount of scientific data continues to grow, researchers need new tools to help them visualize complex data. Immersive data visualisations are helpful, yet fail to provide the tactile feedback and sensory feedback on spatial orientation that tangible objects provide. This gap in sensory feedback from virtual objects leads to the development of tangible representations of geospatial information to solve real-world problems. Examples are animated globes [1], interactive environments like tangible GIS [2], and on-demand 3D prints. The production of a tangible representation of a scientific data set is one step in a line of scientific thinking, leading from the physical world into scientific reasoning and back: the process starts with a physical observation, or with a data stream generated by an environmental sensor. This data stream is turned into a geo-referenced data set. This data set is turned into a volume representation which is converted into command sequences for the printing device, leading to the creation of a 3D printout. As a last, but crucial, step, this new object has to be documented, linked to the associated metadata, and curated in long-term repositories to preserve its scientific meaning and context. The workflow to produce tangible 3D data prints from science data at the German Research Centre for Geosciences (GFZ) was implemented as software based on the Free and Open Source geoinformatics tools GRASS GIS and Paraview. The workflow was successfully validated in various application scenarios at GFZ using a RapMan printer to create 3D specimens of elevation models, geological underground models, ice-penetrating radar soundings for planetology, and space-time stacks for Tsunami model quality assessment. While these first pilot applications have demonstrated the feasibility of the overall approach [3], current research focuses on the provision of the workflow as Software as a Service (SAAS), thematic generalisation of information content and
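
    The last technical step in such a workflow, converting a gridded data layer into triangles a printer toolchain can accept, can be sketched even without GRASS GIS or Paraview, for example as a minimal ASCII STL writer for a height field (a simplified illustration under stated assumptions, not the GFZ workflow code):

      import numpy as np

      def heightfield_to_stl(z, path, dx=1.0, dy=1.0):
          """Write a rectangular height field z[i, j] as an ASCII STL surface.
          Each grid cell is split into two triangles; facet normals are written as
          zero vectors, which most slicers recompute themselves."""
          rows, cols = z.shape
          with open(path, "w") as f:
              f.write("solid heightfield\n")
              for i in range(rows - 1):
                  for j in range(cols - 1):
                      p00 = (j * dx,       i * dy,       z[i, j])
                      p10 = ((j + 1) * dx, i * dy,       z[i, j + 1])
                      p01 = (j * dx,       (i + 1) * dy, z[i + 1, j])
                      p11 = ((j + 1) * dx, (i + 1) * dy, z[i + 1, j + 1])
                      for tri in ((p00, p10, p11), (p00, p11, p01)):
                          f.write("  facet normal 0 0 0\n    outer loop\n")
                          for v in tri:
                              f.write("      vertex %g %g %g\n" % v)
                          f.write("    endloop\n  endfacet\n")
              f.write("endsolid heightfield\n")

      # Example: a small synthetic elevation model (hypothetical data).
      y, x = np.mgrid[0:50, 0:50]
      heightfield_to_stl(5.0 * np.exp(-((x - 25) ** 2 + (y - 25) ** 2) / 200.0), "demo.stl")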

  18. Audiovisual biofeedback improves image quality and reduces scan time for respiratory-gated 3D MRI

    Science.gov (United States)

    Lee, D.; Greer, P. B.; Arm, J.; Keall, P.; Kim, T.

    2014-03-01

    The purpose of this study was to test the hypothesis that audiovisual (AV) biofeedback can improve image quality and reduce scan time for respiratory-gated 3D thoracic MRI. For five healthy human subjects, respiratory motion guidance during MR scans was provided using an AV biofeedback system utilizing real-time respiratory motion signals. To investigate the improvement of respiratory-gated 3D MR images between free breathing (FB) and AV biofeedback (AV), each subject underwent two imaging sessions. Respiratory-related motion artifacts and imaging time were qualitatively evaluated, in addition to the reproducibility of external (abdominal) motion. In the results, 3D MR images acquired with AV biofeedback showed more anatomic information, such as a clear distinction of the diaphragm and lung lobes and sharper organ boundaries. The scan time was reduced from 401±215 s in FB to 334±94 s in AV (p-value 0.36). The root mean square variation of the displacement and period of the abdominal motion was reduced from 0.4±0.22 cm and 2.8±2.5 s in FB to 0.1±0.15 cm and 0.9±1.3 s in AV, respectively. These results demonstrate that AV biofeedback improves image quality and reduces scan time for respiratory-gated 3D MRI, and suggest that AV biofeedback has the potential to be a useful motion management tool in medical imaging and radiation therapy procedures.

  19. Preliminary clinical application of contrast-enhanced MR angiography using 3D time-resolved imaging of contrast kinetics(3D-TRICKS)

    Institute of Scientific and Technical Information of China (English)

    YANG Chun-shan; LIU Shi-yuan; XIAO Xiang-sheng; FENG Yun; LI Hui-min; XIAO Shan; GONG Wan-qing

    2007-01-01

    Objective: To introduce a new and better contrast-enhanced MR angiographic method, named 3D time-resolved imaging of contrast kinetics (3D-TRICKS). Methods: TRICKS is a high temporal resolution (2-6 s) MR angiographic technique using a short TR (4 ms) and TE (1.5 ms) with partial echo sampling, in which the central part of k-space is updated more frequently than the peripheral part. Pre-contrast mask 3D images are scanned first; then, following bolus injection of Gd-DTPA, 15-20 sequential 3D images are acquired. The reconstructed 3D images, obtained by subtracting the mask images from the contrast-enhanced 3D images, are conceptually similar to a catheter-based intra-arterial digital subtraction angiography (DSA) series. Thirty patients underwent contrast-enhanced MR angiography using 3D-TRICKS. Results: In total, 12 vertebral arteries were well displayed on TRICKS, of which 7 were normal, 1 demonstrated bilateral vertebral artery stenosis, 4 had unilateral vertebral artery stenosis, and 1 was accompanied by stenosis of the carotid artery bifurcation on the same side. Four cases of bilateral renal arteries were normal; 1 transplanted kidney artery appeared normal and 1 transplanted kidney artery showed stenosis. Two cerebral examinations were normal, 1 had sagittal sinus thrombosis and 1 displayed an intracranial arteriovenous malformation. Three pulmonary arteries were normal, 1 showed pulmonary artery thrombosis and 1 revealed the abnormal feeding artery and draining vein of a pulmonary sequestration. One left lower limb fibrolipoma showed a feeding artery. One case displayed stenosis of a radial-ulnar artery artificial fistula. One revealed a left antebrachial hemangioma. Conclusion: TRICKS can clearly delineate most of the body's vascular system and reveal most vascular abnormalities. It is convenient and has a high success rate, making it the first choice for displaying most vascular abnormalities.

  20. Incremental Volume Rendering Algorithm for Interactive 3D Ultrasound Imaging

    Science.gov (United States)

    1991-02-01

    [Abstract garbled in the source record; the recoverable fragments mention hidden-surface removal, cutaway viewing, a cache of 16 samples organized as a 4-ary tree embedded in an array, and a citation to Stickels, K. R., and Wann, L. S. (1984), "An Analysis of Three-Dimensional Reconstructive Echocardiography," Ultrasound in Med. & Biol.]

  1. Robust extraction of the aorta and pulmonary artery from 3D MDCT image data

    Science.gov (United States)

    Taeprasartsit, Pinyo; Higgins, William E.

    2010-03-01

    Accurate definition of the aorta and pulmonary artery from three-dimensional (3D) multi-detector CT (MDCT) images is important for pulmonary applications. This work presents robust methods for defining the aorta and pulmonary artery in the central chest. The methods work on both contrast-enhanced and non-contrast 3D MDCT image data. The automatic methods use a common approach employing model fitting and selection and adaptive refinement. For the occasional case in which more precise vascular extraction is desired or the automatic method fails, we also provide an alternative semi-automatic fail-safe method. The semi-automatic method extracts the vasculature by extending the medial axes in a user-guided direction. A ground-truth study over a series of 40 human 3D MDCT images demonstrates the efficacy, accuracy, robustness, and efficiency of the methods.

  2. A web-based 3D medical image collaborative processing system with videoconference

    Science.gov (United States)

    Luo, Sanbi; Han, Jun; Huang, Yonggang

    2013-07-01

    Three-dimensional medical images play an irreplaceable role in medical treatment, teaching, and research. However, collaborative processing and visualization of 3D medical images over the Internet is still one of the biggest challenges in supporting these activities. Consequently, we present a new approach for web-based synchronized collaborative processing and visualization of 3D medical images. In addition, a web-based videoconference function is provided to enhance the performance of the whole system. All functions of the system are conveniently available through common Web browsers, without requiring any client-side installation. Finally, this paper evaluates the prototype system using 3D medical data sets, demonstrating the good performance of our system.

  3. A virtually imaged defocused array (VIDA) for high-speed 3D microscopy.

    Science.gov (United States)

    Schonbrun, Ethan; Di Caprio, Giuseppe

    2016-10-01

    We report a method to capture a multifocus image stack based on recording multiple reflections generated by imaging through a custom etalon. The focus stack is collected in a single camera exposure and consequently the information needed for 3D reconstruction is recorded in the camera integration time, which is only 100 µs. We have used the VIDA microscope to temporally resolve the multi-lobed 3D morphology of neutrophil nuclei as they rotate and deform through a microfluidic constriction. In addition, we have constructed a 3D imaging flow cytometer and quantified the nuclear morphology of nearly a thousand white blood cells flowing at a velocity of 3 mm per second. The VIDA microscope is compact and simple to construct, intrinsically achromatic, and the field-of-view and stack number can be easily reconfigured without redesigning diffraction gratings and prisms.

  4. 3-D imaging of particle tracks in solid state nuclear track detectors

    Directory of Open Access Journals (Sweden)

    D. Wertheim

    2010-05-01

    It has been suggested that 3 to 5% of total lung cancer deaths in the UK may be associated with elevated radon concentration. Radon gas levels can be assessed using CR-39 plastic detectors which are often assessed by 2-D image analysis of surface images. 3-D analysis has the potential to provide information relating to the angle at which alpha particles impinge on the detector. In this study we used a "LEXT" OLS3100 confocal laser scanning microscope (Olympus Corporation, Tokyo, Japan) to image tracks on five CR-39 detectors. We were able to identify several patterns of single and coalescing tracks from 3-D visualisation. Thus this method may provide a means of detailed 3-D analysis of Solid State Nuclear Track Detectors.

  5. 3-D imaging of particle tracks in solid state nuclear track detectors

    Science.gov (United States)

    Wertheim, D.; Gillmore, G.; Brown, L.; Petford, N.

    2010-05-01

    It has been suggested that 3 to 5% of total lung cancer deaths in the UK may be associated with elevated radon concentration. Radon gas levels can be assessed using CR-39 plastic detectors which are often assessed by 2-D image analysis of surface images. 3-D analysis has the potential to provide information relating to the angle at which alpha particles impinge on the detector. In this study we used a "LEXT" OLS3100 confocal laser scanning microscope (Olympus Corporation, Tokyo, Japan) to image tracks on five CR-39 detectors. We were able to identify several patterns of single and coalescing tracks from 3-D visualisation. Thus this method may provide a means of detailed 3-D analysis of Solid State Nuclear Track Detectors.

  6. Comparison of internal target volumes defined on three-dimensional CT, four-dimensional CT and cone-beam CT images of non-small-cell lung cancer

    Institute of Scientific and Technical Information of China (English)

    李奉祥; 李建彬; 马志芳; 张英杰; 邢军; 戚焕鹏; 尚东平; 余宁莎

    2014-01-01

    Objective: To compare positional and volumetric differences between internal target volumes defined on three-dimensional CT (3D-CT), four-dimensional CT (4D-CT) and cone-beam CT (CBCT) images of non-small-cell lung cancer. Methods: Thirty-one patients with NSCLC sequentially underwent 3D-CT and 4D-CT simulation scans of the thorax during free breathing. A 3D conformal treatment plan was created based on 3D-CT. The CBCT images were obtained in the first fraction and registered to the planning CT using bony anatomy registration. All target volumes were contoured with the same protocol by a radiation oncologist. GTVs were contoured based on 3D-CT, the maximum intensity projection (MIP) of 4D-CT, and CBCT. CTV3D, ITVMIP and ITVCBCT were defined with a margin of 7 mm accounting for microscopic disease. ITV10mm and ITV5mm were defined based on CTV3D: ITV10mm with a margin of 5 mm in the LR and AP directions and 10 mm in the CC direction, and ITV5mm with an isotropic internal margin (IM) of 5 mm. The differences in position, size, Dice's similarity coefficient (DSC) and inclusion relation of the different volumes were compared. Results: The median size ratios of ITV10mm, ITV5mm and ITVMIP to ITVCBCT were 2.33, 1.88 and 1.03, respectively, for tumors in the upper lobe and 2.13, 1.76 and 1.10, respectively, for tumors in the middle-lower lobe. The median DSC of ITVMIP and ITVCBCT (0.83) was greater than that of ITV10mm and ITVCBCT (0.60) and of ITV5mm and ITVCBCT (0.66) for all patients (Z = -4.86, -4.86, P < 0.05). The median percentages of ITVCBCT not included in ITV10mm, ITV5mm and ITVMIP were 0.10%, 1.63% and 15.21%, respectively, while the median percentages of ITV10mm, ITV5mm and ITVMIP not included in ITVCBCT were 57.08%, 48.89% and 20.04%, respectively. The median percentage of ITVCBCT not included in ITV5mm was 1.24% for tumors in the upper lobe and 5.8% for tumors in the middle-lower lobe. Conclusions: The individual ITV based on 4D-CT cannot effectively encompass the ITV based on CBCT. The use of the ITV derived from 4

  7. Hybrid wide-field and scanning microscopy for high-speed 3D imaging.

    Science.gov (United States)

    Duan, Yubo; Chen, Nanguang

    2015-11-15

    Wide-field optical microscopy is efficient and robust in biological imaging, but it lacks depth sectioning. In contrast, scanning microscopic techniques, such as confocal microscopy and multiphoton microscopy, have been successfully used for three-dimensional (3D) imaging with optical sectioning capability. However, these microscopic techniques are not very suitable for dynamic real-time imaging because they usually take a long time for temporal and spatial scanning. Here, a hybrid imaging technique combining wide-field microscopy and scanning microscopy is proposed to accelerate the image acquisition process while maintaining the 3D optical sectioning capability. The performance was demonstrated by proof-of-concept imaging experiments with fluorescent beads and zebrafish liver.

  8. 3D Image Reconstruction from X-Ray Measurements with Overlap

    CERN Document Server

    Klodt, Maria

    2016-01-01

    3D image reconstruction from a set of X-ray projections is an important image reconstruction problem, with applications in medical imaging, industrial inspection and airport security. The innovation of X-ray emitter arrays allows for a novel type of X-ray scanners with multiple simultaneously emitting sources. However, two or more sources emitting at the same time can yield measurements from overlapping rays, imposing a new type of image reconstruction problem based on nonlinear constraints. Using traditional linear reconstruction methods, respective scanner geometries have to be implemented such that no rays overlap, which severely restricts the scanner design. We derive a new type of 3D image reconstruction model with nonlinear constraints, based on measurements with overlapping X-rays. Further, we show that the arising optimization problem is partially convex, and present an algorithm to solve it. Experiments show highly improved image reconstruction results from both simulated and real-world measurements.
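
    The nonlinearity referred to in this record arises because a detector reading produced by two simultaneously emitting sources is a sum of attenuated contributions rather than a single line integral. Under a simple Beer-Lambert model (our illustration; the symbols are defined here and not taken from the paper), a measurement along two overlapping rays L_1 and L_2 is

      I = I_1 \exp\!\left(-\int_{L_1} \mu(\mathbf{r})\, \mathrm{d}\ell\right) + I_2 \exp\!\left(-\int_{L_2} \mu(\mathbf{r})\, \mathrm{d}\ell\right),

    so taking the logarithm no longer yields a constraint that is linear in the attenuation map \mu, unlike the familiar single-ray case.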

  9. Geomorphology of Late Quaternary Mass Movement Deposits using a Decimetre-Resolution 3D Seismic Volume: Case Studies from Windermere, UK, and Trondheimsfjorden, Norway

    Science.gov (United States)

    Vardy, M. E.; Dix, J. K.; Henstock, T.; Bull, J. M.; Pinson, L.; L'Heureux, J.; Longva, O.; Hansen, L.; Chand, S.; Gutowski, M.

    2009-12-01

    We present results from decimetre-resolution 3D seismic volumes acquired over Late Quaternary mass movement deposits in both Lake Windermere, UK, and the Trondheim Harbour area, central Norway. Both deposits were imaged using the 3D Chirp sub-bottom profiler, which combines the known, highly repeatable source waveform of Chirp profilers with the coherent processing and interpretation afforded by true 3D seismic volumes. Reflector morphology from these two volumes is used to identify and map structure on scales of tens of centimetres to hundreds of metres. This shows the applicability of the method for the interpretation of failure mechanism, flow morphology and depositional style in these two environments. In Windermere, Younger Dryas deposits have been substantially reworked by the episodic redistribution of sediment from the steep lakesides into the basin. Within the 100 x 400 m 3D seismic volume we identify two small debris flow deposits (1,500 m³ and 60,000 m³) and one large (500,000 m³) erosive mass flow deposit. These two depositional mechanisms are distinct. The debris flows have high-amplitude, chaotic internal reflections, with a high-amplitude reflector representing a lower erosional boundary and a discontinuous low-amplitude top reflector, and they thin out rapidly with distance from the lake margin. The thicker mass flow unit lacks internal structure and has high-amplitude top and base reflectors. In the Trondheim Harbour we image the down-slope extent of three large slide blocks (with a net volume > 1 x 10⁶ m³), mobilised by a landslide in 1990, in the 100 x 450 m 3D seismic volume. The morphology of these mass movement deposits is distinct again, demonstrating translational failure along a clear slip plane, leaving well-defined slide scars, and forming prominent compressional/extensional structures.

  10. Non-invasive single-shot 3D imaging through a scattering layer using speckle interferometry

    CERN Document Server

    Somkuwar, Atul S; R., Vinu; Park, Yongkeun; Singh, Rakesh Kumar

    2015-01-01

    Optical imaging through complex scattering media is one of the major technical challenges, with important applications in many research fields ranging from biomedical imaging and astronomical telescopy to spatially multiplexed optical communications. Although various approaches for imaging through a turbid layer have recently been proposed, they have been limited to two-dimensional imaging. Here we propose and experimentally demonstrate an approach for three-dimensional single-shot imaging of objects hidden behind an opaque scattering layer. We demonstrate that, under suitable conditions, it is possible to perform 3D imaging and reconstruct the complex amplitude of objects situated at different depths.

  11. 3D early embryogenesis image filtering by nonlinear partial differential equations.

    Science.gov (United States)

    Krivá, Z; Mikula, K; Peyriéras, N; Rizzi, B; Sarti, A; Stasová, O

    2010-08-01

    We present nonlinear diffusion equations, numerical schemes to solve them and their application to filtering 3D images obtained from laser scanning microscopy (LSM) of living zebrafish embryos, with the goal of identifying the optimal filtering method and its parameters. In large-scale applications dealing with the analysis of 3D+time embryogenesis images, an important objective is the correct detection of the number and position of cell nuclei yielding the spatio-temporal cell lineage tree of embryogenesis. Filtering is the first and necessary step of the image analysis chain and must lead to correct results, removing the noise, sharpening the nuclei edges and correcting the acquisition errors related to spuriously connected subregions. In this paper we study such properties for the regularized Perona-Malik model and for the generalized mean curvature flow equations in the level-set formulation. A comparison with other nonlinear diffusion filters, like tensor anisotropic diffusion and Beltrami flow, is also included. All numerical schemes are based on the same discretization principles, i.e. the finite volume method in space and a semi-implicit scheme in time, for solving nonlinear partial differential equations. These numerical schemes are unconditionally stable, fast and naturally parallelizable. The filtering results are evaluated and compared, first using the mean Hausdorff distance between a gold standard and different isosurfaces of the original and filtered data. Then, the number of isosurface connected components detected in a region of interest (ROI) in the original data and after filtering is compared with the corresponding correct number of nuclei in the gold standard. Such analysis proves the robustness and reliability of edge-preserving nonlinear diffusion filtering for this type of data and leads to finding the optimal filtering parameters for the studied models and numerical schemes. Further comparisons consider the ability to split very close objects, which
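
    For orientation, the regularized Perona-Malik model studied in this record belongs to the family of nonlinear diffusion equations of the general form below (standard formulation quoted for context, not copied from the paper):

      \partial_t u = \nabla \cdot \big( g\!\left( |\nabla G_\sigma * u| \right) \nabla u \big), \qquad g(s) = \frac{1}{1 + (s/K)^2},

    where G_\sigma is a Gaussian presmoothing kernel and K is an edge-sensitivity parameter; the semi-implicit finite volume schemes mentioned above discretize equations of exactly this type.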

  12. Accurate positioning for head and neck cancer patients using 2D and 3D image guidance

    Science.gov (United States)

    Kang, Hyejoo; Lovelock, Dale M.; Yorke, Ellen D.; Kriminiski, Sergey; Lee, Nancy; Amols, Howard I.

    2011-01-01

    Our goal is to determine an optimized image-guided setup by comparing setup errors determined by two-dimensional (2D) and three-dimensional (3D) image guidance for head and neck cancer (HNC) patients immobilized by customized thermoplastic masks. Nine patients received weekly imaging sessions, for a total of 54, throughout treatment. Patients were first set up by matching lasers to surface marks (initial) and then translationally corrected using manual registration of orthogonal kilovoltage (kV) radiographs with DRRs (2D-2D) on bony anatomy. A kV cone beam CT (kVCBCT) was acquired and manually registered to the simulation CT using only translations (3D-3D) on the same bony anatomy to determine further translational corrections. After treatment, a second set of kVCBCT was acquired to assess intrafractional motion. Averaged over all sessions, 2D-2D registration led to translational corrections from initial setup of 3.5 ± 2.2 (range 0–8) mm. The addition of 3D-3D registration resulted in only small incremental adjustment (0.8 ± 1.5 mm). We retrospectively calculated patient setup rotation errors using an automatic rigid-body algorithm with 6 degrees of freedom (DoF) on regions of interest (ROI) of in-field bony anatomy (mainly the C2 vertebral body). Small rotations were determined for most of the imaging sessions; however, occasionally rotations > 3° were observed. The calculated intrafractional motion with automatic registration was < 3.5 mm for eight patients, and < 2° for all patients. We conclude that daily manual 2D-2D registration on radiographs reduces positioning errors for mask-immobilized HNC patients in most cases, and is easily implemented. 3D-3D registration adds little improvement over 2D-2D registration without correcting rotational errors. We also conclude that thermoplastic masks are effective for patient immobilization. PMID:21330971

  13. FEMUR SHAPE RECOVERY FROM VOLUMETRIC IMAGES USING 3-D DEFORMABLE MODELS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    A new scheme for femur shape recovery from volumetric images using deformable models is proposed. First, prior 3-D deformable femur models are created as templates using point distribution model technology. Second, active contour models are employed to segment the magnetic resonance imaging (MRI) volumetric images of the tibial and femoral joints, and the deformable models are initialized based on the segmentation results. Finally, the objective function is minimized to give the optimal results while constraining the shape surface.

  14. Measurement of facial soft tissues thickness using 3D computed tomographic images

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Ho Gul; Kim, Kee Deog; Shin, Dong Won; Hu, Kyung Seok; Lee, Jae Bum; Park, Hyok; Park, Chang Seo [Yonsei Univ. Hospital, Seoul (Korea, Republic of); Han, Seung Ho [Catholic Univ. of Korea, Seoul (Korea, Republic of)

    2006-03-15

    To evaluate the accuracy and reliability of a program for measuring facial soft tissue thickness on 3D computed tomographic images by comparison with direct measurement. One cadaver was scanned with a helical CT at 3 mm slice thickness and 3 mm/sec table speed. The acquired data were reconstructed with a 1.5 mm reconstruction interval and the images were transferred to a personal computer. The facial soft tissue thicknesses were measured in the 3D images using a newly developed program. For direct measurement, the cadaver was cut with a bone cutter and a ruler was placed above the cut surface. The procedure was followed by taking pictures of the facial soft tissues with a high-resolution digital camera. The measurements were then made on the photographic images and repeated ten times. A repeated-measures analysis of variance was adopted to compare and analyze the measurements resulting from the two different methods. Comparisons by area were analyzed with the Mann-Whitney test. There were no statistically significant differences between the direct measurements and those using the 3D images (p>0.05). There were statistical differences in the measurements at 17 points, but all points except 2 showed a mean difference of 0.5 mm or less. The newly developed software program for measuring facial soft tissue thickness on 3D images was accurate enough to allow facial soft tissue thickness to be measured more easily in forensic science and anthropology.

  15. Fuzzy zoning for feature matching technique in 3D reconstruction of nasal endoscopic images.

    Science.gov (United States)

    Rattanalappaiboon, Surapong; Bhongmakapat, Thongchai; Ritthipravat, Panrasee

    2015-12-01

    3D reconstruction from nasal endoscopic images greatly supports an otolaryngologist in examining nasal passages, mucosa, polyps, sinuses, and the nasopharynx. In general, structure from motion is a popular technique. It consists of four main steps: (1) camera calibration, (2) feature extraction, (3) feature matching, and (4) 3D reconstruction. The Scale Invariant Feature Transform (SIFT) algorithm is normally used for both feature extraction and feature matching. However, the SIFT algorithm is relatively expensive computationally, particularly in the feature matching process, because each feature in an image of interest is compared with all features in the subsequent image in order to find the best-matched pair. A fuzzy zoning approach is developed for confining the feature matching area. Matching between two corresponding features from different images can then be performed efficiently, greatly reducing the matching time. The proposed technique is tested with endoscopic images created from phantoms and compared with the original SIFT technique in terms of matching time and the average errors of the reconstructed models. Finally, the original SIFT and the proposed fuzzy-based technique are applied to 3D model reconstruction of a real nasal cavity based on images taken with a rigid nasal endoscope. The results showed that the fuzzy-based approach was significantly faster than the traditional SIFT technique and provided 3D models of similar quality. It could be used for reconstructing a nasal cavity imaged with a rigid nasal endoscope.
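
    The speed gain described above comes from comparing each feature only against candidates inside a confined zone rather than against every feature of the next image. A generic sketch of zone-restricted descriptor matching is shown below (illustrative only; it uses a hard search radius instead of the paper's fuzzy zone membership, and all names are hypothetical).

      import numpy as np

      def match_in_zone(desc_a, pts_a, desc_b, pts_b, radius=40.0, ratio=0.8):
          """Match SIFT-like descriptors between two images, considering only
          candidates whose keypoint position lies within `radius` pixels of the
          query keypoint; keeps matches passing Lowe's ratio test."""
          matches = []
          for i, (d, p) in enumerate(zip(desc_a, pts_a)):
              near = np.where(np.linalg.norm(pts_b - p, axis=1) < radius)[0]
              if len(near) < 2:
                  continue
              dist = np.linalg.norm(desc_b[near] - d, axis=1)
              order = np.argsort(dist)
              best, second = dist[order[0]], dist[order[1]]
              if best < ratio * second:
                  matches.append((i, int(near[order[0]])))
          return matches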

  16. Evaluating 3D registration of CT-scan images using crest lines

    Science.gov (United States)

    Ayache, Nicholas; Gueziec, Andre P.; Thirion, Jean-Philippe; Gourdon, A.; Knoplioch, Jerome

    1993-06-01

    We consider the issue of matching 3D objects extracted from medical images. We show that crest lines computed on the object surfaces correspond to meaningful anatomical features, and that they are stable with respect to rigid transformations. We present the current chain of algorithmic modules which automatically extract the major crest lines in 3D CT-scan images, and then use differential invariants on these lines to register the 3D images together with high precision. The extraction of the crest lines is done by computing up to third-order derivatives of the image intensity function with appropriate 3D filtering of the volumetric images, and by the 'marching lines' algorithm. The recovered lines are then approximated by spline curves in order to compute a number of differential invariants at each point. Matching is finally performed by a new geometric hashing method. The whole chain is now completely automatic, and provides extremely robust and accurate results, even in the presence of severe occlusions. In this paper, we briefly describe the whole chain of processes already presented, and evaluate the accuracy of the approach on a pair of CT-scan images of a skull containing external markers.

  17. A Featureless Approach to 3D Polyhedral Building Modeling from Aerial Images

    Directory of Open Access Journals (Sweden)

    Karim Hammoudi

    2010-12-01

    This paper presents a model-based approach for reconstructing 3D polyhedral building models from aerial images. The proposed approach exploits some geometric and photometric properties resulting from the perspective projection of planar structures. Data are provided by calibrated aerial images. The novelty of the approach lies in its featurelessness and in its use of direct optimization based on raw image brightness. The proposed framework avoids feature extraction and matching. The 3D polyhedral model is directly estimated by optimizing an objective function that combines an image-based dissimilarity measure and a gradient score over several aerial images. The optimization process is carried out by the Differential Evolution algorithm. The proposed approach is intended to provide more accurate 3D reconstruction than feature-based approaches. Fast 3D model rectification and updating can take advantage of the proposed method. Several results and performance evaluations on real and synthetic images show the feasibility and robustness of the proposed approach.

  18. 3D nonrigid medical image registration using a new information theoretic measure

    Science.gov (United States)

    Li, Bicao; Yang, Guanyu; Coatrieux, Jean Louis; Li, Baosheng; Shu, Huazhong

    2015-11-01

    This work presents a novel method for the nonrigid registration of medical images based on the Arimoto entropy, a generalization of the Shannon entropy. The proposed method employs the Jensen-Arimoto divergence as a similarity metric measuring the statistical dependence between medical images. Free-form deformations are adopted as the transformation model, and Parzen window estimation is applied to compute the probability distributions. A penalty term is incorporated into the objective function to smooth the nonrigid transformation. Registration thus amounts to minimizing an objective function consisting of a dissimilarity term and a penalty term, which is minimal when the two images are perfectly aligned; the limited-memory BFGS method is used for the optimization, yielding the optimal geometric transformation. To validate the performance of the proposed method, experiments on both simulated 3D brain MR images and real 3D thoracic CT data sets were designed and performed within the open-source elastix package. For the simulated experiments, the registration errors of 3D brain MR images with various magnitudes of known deformations and different levels of noise were measured. For the real data tests, four 4D thoracic CT data sets from four patients were selected to assess the registration performance, each 4D CT comprising ten 3D CT images covering an entire respiration cycle. The results were compared with the normalized cross correlation and mutual information methods and show a slight but genuine improvement in registration accuracy.
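
    A minimal sketch of the divergence itself, assuming the standard Arimoto entropy of order alpha and a Jensen-Shannon-style construction on the mixture of the two distributions; the exact weighting, the alpha value, and the Parzen-window estimation used in the paper may differ.

```python
import numpy as np

def arimoto_entropy(p, alpha=2.0):
    """Arimoto entropy of order alpha (alpha > 0, alpha != 1) of a discrete
    distribution p:  H_a(p) = a / (1 - a) * ((sum_i p_i^a)^(1/a) - 1)."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return alpha / (1.0 - alpha) * (np.power(np.sum(p ** alpha), 1.0 / alpha) - 1.0)

def jensen_arimoto(p, q, alpha=2.0):
    """Jensen-type divergence built from the Arimoto entropy: entropy of the
    mixture minus the mean entropy of the two distributions."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    m = 0.5 * (p + q)
    return arimoto_entropy(m, alpha) - 0.5 * (arimoto_entropy(p, alpha) + arimoto_entropy(q, alpha))

# Intensity histograms (e.g. Parzen-window estimates) of a fixed and a moving
# image could be compared with jensen_arimoto(hist_fixed, hist_moving).
```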

  19. FIB/SEM tomography with TEM-like resolution for 3D imaging of high-pressure frozen cells.

    Science.gov (United States)

    Villinger, Clarissa; Gregorius, Heiko; Kranz, Christine; Höhn, Katharina; Münzberg, Christin; von Wichert, Götz; Mizaikoff, Boris; Wanner, Gerhard; Walther, Paul

    2012-10-01

    Focused ion beam/scanning electron microscopy (FIB/SEM) tomography is a novel, powerful approach for three-dimensional (3D) imaging of biological samples. In this approach, a sample is repeatedly milled with the focused ion beam (FIB) and each newly produced block face is imaged with the scanning electron microscope (SEM). This process can be repeated ad libitum in arbitrarily small increments, allowing 3D analysis of relatively large volumes such as eukaryotic cells. High-pressure freezing and freeze substitution, on the other hand, are the gold standards for electron microscopic preparation of whole cells. In this work, we combined these methods and substantially improved resolution by using the secondary electron signal for image formation. With this imaging mode, contrast is formed in a very small, well-defined area close to the newly produced surface. Using this approach, small features so far only visible in the transmission electron microscope (TEM), such as the two leaflets of the membrane bilayer, clathrin coats, and cytoskeletal elements, can be resolved directly in the FIB/SEM in the 3D context of whole cells.
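
    A small sketch, under stated assumptions, of how the serial block-face images could be assembled into an anisotropic 3D volume for downstream analysis; the directory layout, file format, and the pixel and milling-step sizes are hypothetical, and registration of consecutive slices is omitted.

```python
import numpy as np
from pathlib import Path
from imageio.v3 import imread

def stack_block_faces(slice_dir, pixel_size_nm=5.0, milling_step_nm=10.0):
    """Assemble sequential FIB/SEM block-face images into a 3D array.

    Assumes all images share the same shape and that filenames sort in
    acquisition order. The voxel spacing is anisotropic: in-plane resolution
    is set by the SEM pixel size, the third dimension by the FIB milling step.
    """
    files = sorted(Path(slice_dir).glob("*.tif"))
    volume = np.stack([imread(f) for f in files], axis=0)
    spacing = (milling_step_nm, pixel_size_nm, pixel_size_nm)  # (z, y, x) in nm
    return volume, spacing
```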

  20. Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images

    Directory of Open Access Journals (Sweden)

    Tsap Leonid V

    2006-01-01

    The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, behavior of the cell, and its state. Analysis of chromosome structure is significant in the detection of diseases, identification of chromosomal abnormalities, study of DNA structural conformation, in-depth study of chromosomal surface morphology, observation of in vivo behavior of the chromosomes over time, and in monitoring environmental gene mutations. The methodology incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.
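
    The sketch below is a simplified stand-in for the slice-wise segmentation and stacking idea: Otsu thresholding replaces the histogram analysis with polyline splitting described above, scikit-image contour tracing replaces the active contours, and the cross-slice correspondence and meshing steps are omitted.

```python
import numpy as np
from skimage import filters, measure

def chromosome_point_cloud(slices, z_spacing=1.0):
    """Threshold each 2D tomographic slice, trace object contours, and stack
    the contour points into a 3D point cloud (x, y, z)."""
    points = []
    for z, img in enumerate(slices):
        t = filters.threshold_otsu(img)              # histogram-based threshold
        mask = (img > t).astype(float)
        for contour in measure.find_contours(mask, 0.5):
            for row, col in contour:                  # (row, col) -> (y, x)
                points.append((col, row, z * z_spacing))
    return np.array(points)
```

    The resulting point cloud could then be meshed (e.g. with a surface reconstruction tool) to obtain the triangular surface mentioned in the abstract.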

  1. Recovery and Visualization of 3D Structure of Chromosomes from Tomographic Reconstruction Images

    Energy Technology Data Exchange (ETDEWEB)

    Babu, S; Liao, P; Shin, M C; Tsap, L V

    2004-04-28

    The objectives of this work include automatic recovery and visualization of a 3D chromosome structure from a sequence of 2D tomographic reconstruction images taken through the nucleus of a cell. Structure is very important for biologists as it affects chromosome functions, behavior of the cell and its state. Chromosome analysis is significant in the detection of diseases and in monitoring environmental gene mutations. The algorithm incorporates thresholding based on a histogram analysis with a polyline splitting algorithm, contour extraction via active contours, and detection of the 3D chromosome structure by establishing corresponding regions throughout the slices. Visualization using point cloud meshing generates a 3D surface. The 3D triangular mesh of the chromosomes provides surface detail and allows a user to interactively analyze chromosomes using visualization software.

  2. Structured light 3D tracking system for measuring motions in PET brain imaging

    DEFF Research Database (Denmark)

    Olesen, Oline Vinter; Jørgensen, Morten Rudkjær; Paulsen, Rasmus Reinhold

    2010-01-01

    Patient motion during scanning deteriorates image quality, especially for high-resolution PET scanners. A new 3D head tracking system for motion correction in high-resolution PET brain imaging is proposed and demonstrated. A prototype tracking system based on structured light...... with a DLP projector and a CCD camera is set up on a model of the High Resolution Research Tomograph (HRRT). Methods to reconstruct 3D point clouds of simple surfaces based on phase-shifting interferometry (PSI) are demonstrated. The projector and camera are calibrated using a simple stereo vision procedure...
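
    A minimal sketch of the phase-retrieval core of phase-shifting interferometry, assuming the standard four-step formula with shifts of 0, pi/2, pi, and 3*pi/2; the actual number of shifts, the phase unwrapping, and the projector-camera triangulation used in the cited system are not reproduced.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase map from four fringe images with phase shifts of
    0, pi/2, pi, and 3*pi/2: phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)

# After phase unwrapping and projector-camera triangulation (both omitted
# here), each unwrapped phase value maps to a 3D point on the head surface.
```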

  3. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    OpenAIRE

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is enabling the fast turn-around time often required for interactive or real-time response. This demands not only high computational power but also high memory bandwidth, owing to the massive amount of data that needs to be processed. In this work, we have developed a software platform designed to support high-performance 3D medical image processing for a wide range of applications using increasingl...

  4. Acute Bochdalek hernia in an adult: A case report of a 3D image

    Institute of Scientific and Technical Information of China (English)

    Rejeb Imen; Chakroun-Walha Olfa; Ksibi Hichem; Nasri Abdennour; Chtara Kamilia; Chaari Adel; Rekik Noureddine

    2016-01-01

    A 61-year-old male was found to have a bilateral Bochdalek hernia on routine CT during admission for acute respiratory failure. The chest X-ray showed a left paracardiac mass with a diameter of 6 cm, which was initially considered a mediastinal tumor. However, the CT scan showed a large bilateral defect of the posteromedial portion of the diaphragm and mesenteric fat. 3D imaging was also useful for the stereographic perception of the Bochdalek hernia. Although Bochdalek hernia is not rare, to our knowledge, this is the first case of a Bochdalek hernia containing the transverse colon observed by spiral CT 3D imaging.

  5. Quantification of gully volume using very high resolution DSM generated through 3