WorldWideScience

Sample records for integral 3d images

  1. 3D integration technologies for imaging applications

    International Nuclear Information System (INIS)

    Moor, Piet de

    2008-01-01

    The aim of this paper is to give an overview of the micro-electronic technologies under development today and how they will impact the radiation detection and imaging of tomorrow. After a short introduction, the different enabling technologies will be discussed. Finally, a few examples of ongoing developments at IMEC on advanced detector systems will be given.

  2. Integrated optical 3D digital imaging based on DSP scheme

    Science.gov (United States)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme of integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently without PC support. The scheme is built on a parallel hardware structure that uses a DSP and a field programmable gate array (FPGA) to realize 3-D imaging. In this integrated 3-D imaging scheme, phase measurement profilometry is adopted. To realize pipeline processing of fringe projection, image acquisition and fringe pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system). The RTOS provides a preemptive kernel and a powerful configuration tool, with which we achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and can implement fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.
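
    The abstract above adopts phase measurement profilometry for fringe analysis. As a rough illustration of the underlying computation (not the authors' DSP/FPGA implementation), the sketch below shows the standard four-step phase-shifting formula and a naive row-wise unwrapping in NumPy; the synthetic fringe frames and all parameter values are assumptions for demonstration only.

    ```python
    import numpy as np

    def wrapped_phase_4step(i0, i1, i2, i3):
        """Standard 4-step phase shifting: frames captured with phase shifts of
        0, pi/2, pi and 3*pi/2 give the wrapped phase via an arctangent."""
        return np.arctan2(i3 - i1, i0 - i2)

    # Synthetic fringe frames over a tilted plane (linear carrier phase).
    x = np.linspace(0.0, 8.0 * np.pi, 640)
    true_phase = np.tile(x, (480, 1))
    frames = [100.0 + 50.0 * np.cos(true_phase + s)
              for s in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]

    phi_wrapped = wrapped_phase_4step(*frames)
    # Naive 1-D unwrapping along each row; real systems use more robust
    # spatial or temporal unwrapping before converting phase to height.
    phi = np.unwrap(phi_wrapped, axis=1)
    ```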

  3. Integration of virtual and real scenes within an integral 3D imaging environment

    Science.gov (United States)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television that avoids adverse psychological effects. To create truly engaging three-dimensional television programs, a virtual studio that performs the task of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures a different elemental image of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The depth calculation method based on disparity, and the multiple baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD, and a further improvement in its precision, is proposed and verified.
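
    Since the abstract mentions depth extraction from disparity using a colour SSD cost and a multiple-baseline refinement, the sketch below illustrates plain block matching with a colour SSD cost between two elemental images. It is a generic, brute-force illustration under assumed window and disparity-range parameters, not the integrated-rendering or multiple-baseline pipeline described in the paper; in a multiple-baseline setting the per-pair costs would be accumulated over several image pairs before taking the minimum.

    ```python
    import numpy as np

    def colour_ssd_disparity(left, right, max_disp=16, win=3):
        """Brute-force block matching: for each pixel of `left`, choose the
        horizontal disparity whose colour SSD over a (2*win+1)^2 window is
        smallest. `left` and `right` are HxWx3 float arrays."""
        h, w, _ = left.shape
        disp = np.zeros((h, w), dtype=np.int32)
        for y in range(win, h - win):
            for x in range(win + max_disp, w - win):
                patch = left[y - win:y + win + 1, x - win:x + win + 1]
                costs = [np.sum((patch - right[y - win:y + win + 1,
                                               x - d - win:x - d + win + 1]) ** 2)
                         for d in range(max_disp + 1)]
                disp[y, x] = int(np.argmin(costs))  # SSD summed over RGB channels
        return disp
    ```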

  4. Holographic Image Plane Projection Integral 3D Display, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In response to NASA's need for a 3D virtual reality environment providing scientific data visualization without special user devices, Physical Optics Corporation...

  5. 3D noise-resistant segmentation and tracking of unknown and occluded objects using integral imaging

    Science.gov (United States)

    Aloni, Doron; Jung, Jae-Hyun; Yitzhaky, Yitzhak

    2017-10-01

    Three-dimensional (3D) object segmentation and tracking can be useful in various computer vision applications, such as object surveillance for security, robot navigation, etc. We present a method for 3D multiple-object tracking using computational integral imaging, based on accurate 3D object segmentation. The method does not employ object detection by motion analysis in a video, as conventionally performed (such as background subtraction or block matching). This means that the movement properties do not significantly affect the detection quality. The object detection is performed by analyzing static 3D image data obtained through computational integral imaging. Compared with previous works that used integral imaging data in such a scenario, the proposed method performs 3D tracking of objects without prior information about the objects in the scene, and it is found efficient under severe noise conditions.
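
    The detection step relies on computational integral imaging. A common way to obtain a reconstruction plane at a given depth is to back-project the elemental images by shifting each one in proportion to its lens index and averaging; the sketch below is a heavily simplified, integer-shift version of that idea with assumed pitch/gap parameters, not the segmentation and tracking method of the paper.

    ```python
    import numpy as np

    def ciir_plane(elemental, pitch_px, gap, z):
        """Simplified computational integral-imaging reconstruction (CIIR):
        `elemental` is a (K, L, h, w) array of elemental images; each image is
        shifted in proportion to its lens index and to gap/z, then averaged.
        Real implementations resample with sub-pixel accuracy and handle
        magnification properly."""
        K, L, h, w = elemental.shape
        recon = np.zeros((h, w), dtype=float)
        for k in range(K):
            for l in range(L):
                dy = int(round((k - K // 2) * pitch_px * gap / z))
                dx = int(round((l - L // 2) * pitch_px * gap / z))
                recon += np.roll(elemental[k, l], shift=(dy, dx), axis=(0, 1))
        return recon / (K * L)

    # Objects located at depth z appear sharp in the reconstructed plane while
    # objects at other depths are blurred, which is what a subsequent
    # segmentation or tracking stage can exploit.
    ```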

  6. 3D stereotaxis for epileptic foci through integrating MR imaging with neurological electrophysiology data

    International Nuclear Information System (INIS)

    Luo Min; Peng Chenglin; Wang Kang; Lei Wenyong; Luo Song; Wang Xiaolin; Wang Xuejian; Wu Ruoqiu; Wu Guofeng

    2005-01-01

    Objective: To improve the accuracy of epilepsy diagnoses by integrating MR images from PACS with data from neurological electrophysiology. The integration is also very important for transmitting diagnostic information to the 3D TPS used in radiotherapy. Methods: The electroencephalogram was redisplayed on an EEG workstation, while the MR images were reconstructed with the Brainvoyager software. A 3D model of the patient's brain was built by combining the reconstructed images with the electroencephalogram data in Base 2000. Thirty epileptic patients (18 males and 12 females), aged 12 to 54 years, were confirmed using the integrated MR images, the data from neurological electrophysiology, and their 3D stereotactic localization. Results: The corresponding data in the 3D model could show the real situation of the patient's brain and visually locate the precise position of the focus. The success rate of 3D-guided operations was greatly improved, and the number of epileptic onsets was markedly decreased. Seizures stopped for 6 months in 8 of the 30 patients. Conclusion: The integration of MR images and neurological electrophysiology information can improve the diagnostic level for epilepsy, and it is crucial for improving the success rate of operations and the epilepsy analysis. (authors)

  7. Flatbed-type 3D display systems using integral imaging method

    Science.gov (United States)

    Hirayama, Yuzo; Nagatani, Hiroyuki; Saishu, Tatsuo; Fukushima, Rieko; Taira, Kazuki

    2006-10-01

    We have developed prototypes of flatbed-type autostereoscopic display systems using a one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Our display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle and provides continuous motion parallax. We have applied our technology to 15.4-inch displays. We realized a horizontal resolution of 480 with 12 parallaxes through the adoption of a mosaic pixel arrangement on the display panel. This allows viewers to see high-quality autostereoscopic images. Viewing the display from an angle allows the viewer to experience 3-D images that stand out several centimeters from the surface of the display. Mixed reality of virtual 3-D objects and real objects is also realized on a flatbed display. In seeking to reproduce natural 3-D images on the flatbed display, we developed proprietary software. Fast playback of CG movie content and real-time interaction are realized with the aid of a graphics card. Ensuring that 3-D images are safe for viewers is very important. Therefore, we have measured the effects on visual function and evaluated the biological effects. For example, accommodation and convergence were measured at the same time. Various biological effects were also measured before and after the task of watching 3-D images. We have found that our displays show better results than a conventional stereoscopic display. The new technology opens up new areas of application for 3-D displays, including arcade games, e-learning, simulations of buildings and landscapes, and even 3-D menus in restaurants.

  8. An integrated 3-D image of cerebral blood vessels and CT view of tumor

    International Nuclear Information System (INIS)

    Suetens, P.; Baert, A.L.; Gybels, J.; Haegemans, S.; Jansen, P.; Oosterlinck, A.; Wilms, G.

    1984-01-01

    The authors developed a method that yields an integrated three-dimensional image of cerebral blood vessels and a CT view of the tumor. This method allows the neurosurgeon to choose any electrode trajectory that looks convenient to him, without imminent danger of causing a hemorrhage. Besides offering more safety in stereotactic interventions, this integrated 3-D image also has other applications. First, it gives a better characterization of most focal mass lesions seen by CT. Second, it allows high-dose focal irradiation to be delivered in such a way as to avoid arteries and veins. Third, it provides useful information for planning the strategy of open surgery.

  9. 3D integration technology for hybrid pixel detectors designed for particle physics and imaging experiments

    International Nuclear Information System (INIS)

    Henry, D.; Berthelot, A.; Cuchet, R.; Chantre, C.; Campbell, M.; Tick, T.

    2012-01-01

    Hybrid pixel detectors are now widely used in particle physics experiments and are becoming established at synchrotron light sources. They have also stimulated growing interest in other fields and, in particular, in medical imaging. Through the continuous pursuit of miniaturization in CMOS it has been possible to increase the functionality per pixel while maintaining or even shrinking pixel dimensions. The main constraints on the more extensive use of the technology in all fields are the cost of module building and the difficulty of covering large areas seamlessly. On the other hand, in the field of electronic component integration, a new approach called 3D integration has been developed in recent years. This concept, based on using the vertical axis for component integration, allows the global performance of complex systems to be improved. Thanks to this technology, the cost and form factor of components can be decreased and the performance of the overall system enhanced. In the field of radiation imaging detectors, the advantages of 3D integration come from reduced inter-chip dead area even on large surfaces and from improved detector construction yield resulting from the use of single-chip 4-side buttable tiles. For many years, numerous R and D centres and companies have put a lot of effort into developing 3D integration technologies, and today some mature technologies are ready for prototyping and production. The core technology of 3D integration is the TSV (Through Silicon Via), and for many years LETI has developed these technologies for various types of applications. In this paper we present how one of the TSV approaches developed by LETI, called TSV last, has been applied to a readout wafer containing readout chips intended for a hybrid pixel detector assembly. In the first part of this paper, the 3D design adapted to the read-out chip will be described. Then the complete process flow will be explained and, finally, the test strategy adopted and

  10. Integration of 3D anatomical data obtained by CT imaging and 3D optical scanning for computer aided implant surgery

    Directory of Open Access Journals (Sweden)

    Paoli Alessandro

    2011-02-01

    Background. A precise placement of dental implants is a crucial step in optimizing both prosthetic aspects and functional constraints. In this context, the use of virtual guiding systems has been recognized as a fundamental tool for controlling the ideal implant position. In particular, complex periodontal surgeries can be performed using preoperative planning based on CT data. The critical point of the procedure lies in the lack of accuracy in transferring the CT planning information to the surgical field through custom-made stereo-lithographic surgical guides. Methods. In this work, a novel methodology is proposed for monitoring the loss of accuracy in transferring CT dental information to the periodontal surgical field. The methodology is based on integrating 3D data of anatomical (impression and cast) and preoperative (radiographic template) models, obtained by both CT and optical scanning processes. Results. A clinical case, relating to a fully edentulous patient, has been used as a test case to assess the accuracy of the various steps involved in manufacturing surgical guides. In particular, a surgical guide has been designed to place implants in the bone structure of the patient. The analysis of the results has allowed the clinician to monitor all the errors occurring step by step in manufacturing the physical templates. Conclusions. The use of an optical scanner, which has a higher resolution and accuracy than CT scanning, has proved to be a valid support for controlling the precision of the various physical models adopted and for pointing out possible error sources. A case study regarding a fully edentulous patient has confirmed the feasibility of the proposed methodology.

  11. Integration of 3D anatomical data obtained by CT imaging and 3D optical scanning for computer aided implant surgery

    Science.gov (United States)

    2011-01-01

    Background. A precise placement of dental implants is a crucial step in optimizing both prosthetic aspects and functional constraints. In this context, the use of virtual guiding systems has been recognized as a fundamental tool for controlling the ideal implant position. In particular, complex periodontal surgeries can be performed using preoperative planning based on CT data. The critical point of the procedure lies in the lack of accuracy in transferring the CT planning information to the surgical field through custom-made stereo-lithographic surgical guides. Methods. In this work, a novel methodology is proposed for monitoring the loss of accuracy in transferring CT dental information to the periodontal surgical field. The methodology is based on integrating 3D data of anatomical (impression and cast) and preoperative (radiographic template) models, obtained by both CT and optical scanning processes. Results. A clinical case, relating to a fully edentulous patient, has been used as a test case to assess the accuracy of the various steps involved in manufacturing surgical guides. In particular, a surgical guide has been designed to place implants in the bone structure of the patient. The analysis of the results has allowed the clinician to monitor all the errors occurring step by step in manufacturing the physical templates. Conclusions. The use of an optical scanner, which has a higher resolution and accuracy than CT scanning, has proved to be a valid support for controlling the precision of the various physical models adopted and for pointing out possible error sources. A case study regarding a fully edentulous patient has confirmed the feasibility of the proposed methodology. PMID:21338504

  12. Design and implementation of Gm-APD array readout integrated circuit for infrared 3D imaging

    Science.gov (United States)

    Zheng, Li-xia; Yang, Jun-hao; Liu, Zhao; Dong, Huai-peng; Wu, Jin; Sun, Wei-feng

    2013-09-01

    A single-photon-detecting readout integrated circuit (ROIC) array capable of infrared 3D imaging by photon detection and time-of-flight measurement is presented in this paper. InGaAs avalanche photodiodes (APDs), dynamically biased in Geiger mode by gate-controlled active quenching circuits (AQCs), are used here. The time-of-flight is accurately measured by a high-accuracy time-to-digital converter (TDC) integrated in the ROIC. For 3D imaging, a frame-rate control technique is applied to the pixel detection, so the APD associated with each pixel is controlled by an individual AQC that senses and quenches the avalanche current, providing a digital CMOS-compatible voltage pulse. After each first detection, the detector is reset to wait for the next frame operation. We employ counters with a two-segment coarse-fine architecture, where the coarse conversion is achieved by a 10-bit pseudo-random linear feedback shift register (LFSR) in each pixel and a 3-bit fine conversion is realized by a ring delay line shared by all pixels. The reference clock driving the LFSR counter can be generated within the ring delay-line oscillator or provided by an external clock source. The circuit is designed and implemented in CSMC 0.5 μm standard CMOS technology, and the total chip area is around 2 mm × 2 mm for the 8×8-format ROIC with a 150 μm pixel pitch. The simulation results indicate that the relative time resolution of the proposed ROIC can reach less than 1 ns, and the preliminary test results show that the circuit functions correctly.
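
    The coarse counter of the TDC is described as a 10-bit pseudo-random LFSR. The sketch below models a 10-bit Fibonacci LFSR in Python to show how such a counter steps through 1023 pseudo-random states; the tap positions (stages 10 and 7, a common maximal-length pair) and the seed are assumptions for illustration and are not taken from the chip design.

    ```python
    def lfsr10_states(seed=0x001, steps=1023):
        """10-bit Fibonacci LFSR with feedback from stages 10 and 7 (a common
        maximal-length tap pair). The register must never be all zero; it then
        cycles through all 1023 non-zero states in a pseudo-random order, so a
        decoder (or lookup table) maps each state back to an elapsed count."""
        state = seed & 0x3FF
        states = []
        for _ in range(steps):
            states.append(state)
            feedback = ((state >> 9) ^ (state >> 6)) & 1   # stage 10 XOR stage 7
            state = ((state << 1) | feedback) & 0x3FF      # shift left, insert bit
        return states

    seq = lfsr10_states()
    assert len(set(seq)) == 1023   # maximal length: every non-zero state visited
    ```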

  13. Integration of knowledge to support automatic object reconstruction from images and 3D data

    International Nuclear Information System (INIS)

    Boochs, F.; Truong, H; Marbs, A.; Karmacharya, A.; Cruz, C.; Habed, A.; Nicolle, C.; Voisin, Y.

    2011-01-01

    Object reconstruction is an important task in many fields of application, as it allows digital representations of our physical world to be generated and used as a basis for analysis, planning, construction, visualization or other aims. A reconstruction itself is normally based on reliable data (images or 3D point clouds, for example) describing the object in its full extent. This data then has to be compiled and analyzed in order to extract all the necessary geometrical elements, which represent the object and form a digital copy of it. Traditional strategies are largely based on manual interaction and interpretation, because with increasing complexity of objects human understanding is indispensable for achieving acceptable and reliable results. But human interaction is time consuming and expensive, which is why much research has already been invested in integrating algorithmic support, allowing the process to be sped up and the manual workload to be reduced. Presently, most such algorithms are data-driven and concentrate on specific features of the objects that are accessible to numerical models. By means of these models, which normally represent geometrical (flatness, roughness, for example) or physical features (color, texture), the data is classified and analyzed. This is successful for objects of limited complexity, but reaches its limits as object complexity increases. Purely numerical strategies are then not able to model reality sufficiently. Therefore, the intention of our approach is to take human cognitive strategy as an example and to simulate extraction processes based on available knowledge about the objects of interest. Such processes introduce a semantic structure for the objects and guide the algorithms used to detect and recognize objects, which yields a higher effectiveness. Hence, our research proposes an approach that uses knowledge to guide the algorithms in 3D point cloud and image processing.

  14. Surgical Navigation Technology Based on Augmented Reality and Integrated 3D Intraoperative Imaging

    Science.gov (United States)

    Elmi-Terander, Adrian; Skulason, Halldor; Söderman, Michael; Racadio, John; Homan, Robert; Babic, Drazenko; van der Vaart, Nijs; Nachabe, Rami

    2016-01-01

    Study Design. A cadaveric laboratory study. Objective. The aim of this study was to assess the feasibility and accuracy of thoracic pedicle screw placement using augmented reality surgical navigation (ARSN). Summary of Background Data. Recent advances in spinal navigation have shown improved accuracy in lumbosacral pedicle screw placement but limited benefits in the thoracic spine. 3D intraoperative imaging and instrument navigation may allow improved accuracy in pedicle screw placement, without the use of x-ray fluoroscopy, and thus open the route to image-guided minimally invasive therapy in the thoracic spine. Methods. ARSN encompasses a surgical table, a motorized flat detector C-arm with intraoperative 2D/3D capabilities, integrated optical cameras for augmented reality navigation, and noninvasive patient motion tracking. Two neurosurgeons placed 94 pedicle screws in the thoracic spine of four cadavers using ARSN on one side of the spine (47 screws) and a free-hand technique on the contralateral side. X-ray fluoroscopy was not used for either technique. Four independent reviewers assessed the postoperative scans using the Gertzbein grading. Morphometric measurements of the pedicles' axial and sagittal widths and angles, as well as the vertebrae's axial and sagittal rotations, were performed to identify risk factors for breaches. Results. ARSN was feasible and superior to the free-hand technique with respect to overall accuracy (85% vs. 64%, P dimensions, except for vertebral body axial rotation, were risk factors for larger breaches when performed with the free-hand method. Conclusion. ARSN without fluoroscopy was feasible and demonstrated higher accuracy than the free-hand technique for thoracic pedicle screw placement. Level of Evidence: N/A PMID:27513166

  15. INTEGRATION OF VIDEO IMAGES AND CAD WIREFRAMES FOR 3D OBJECT LOCALIZATION

    Directory of Open Access Journals (Sweden)

    R. A. Persad

    2012-07-01

    The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and is considered to be at a state of maturity. This paper presents a model-driven solution for localizing moving objects, detected from monocular, rotating and zooming video images, in a 3D reference frame. To realize such a system, the recovery of the 2D-to-3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ) surveillance cameras, where the parameters change spontaneously upon camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching scheme uses a hypothesize-and-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions with lighting changes and occlusions. The reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT). The dependability of the retrieved parameters for 3D localization has also been assessed by comparing the 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters with those computed using the GT parameters.
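
    The matching step uses a hypothesize-and-verify framework (LR-RANSAC). The details of LR-RANSAC are not given here, so the sketch below shows only the generic RANSAC loop that such frameworks build on, applied to a robust 2-D line fit; the model, error function, thresholds and test data are illustrative assumptions.

    ```python
    import numpy as np

    def ransac(data, fit, error, n_min, n_iter=500, inlier_tol=1.0, rng=None):
        """Generic hypothesize-and-verify (RANSAC) loop: repeatedly fit a model
        to a minimal random sample and keep the model with the most inliers.
        `fit(sample)` returns a model, `error(model, data)` per-point residuals."""
        rng = np.random.default_rng() if rng is None else rng
        best_model, best_inliers = None, np.zeros(len(data), dtype=bool)
        for _ in range(n_iter):
            sample = data[rng.choice(len(data), n_min, replace=False)]
            model = fit(sample)
            inliers = error(model, data) < inlier_tol
            if inliers.sum() > best_inliers.sum():
                best_model, best_inliers = model, inliers
        return best_model, best_inliers

    # Example: robust 2-D line fit y = a*x + b with gross outliers mixed in.
    rng = np.random.default_rng(0)
    x = rng.uniform(0, 10, 200)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.1, 200)
    y[:60] += rng.uniform(-20, 20, 60)                   # 30% gross outliers
    pts = np.column_stack([x, y])
    line_fit = lambda s: np.polyfit(s[:, 0], s[:, 1], 1)  # returns (a, b)
    line_err = lambda m, d: np.abs(np.polyval(m, d[:, 0]) - d[:, 1])
    model, inliers = ransac(pts, line_fit, line_err, n_min=2)
    ```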

  16. 3D BUILDING RECONSTRUCTION BY MULTIVIEW IMAGES AND THE INTEGRATED APPLICATION WITH AUGMENTED REALITY

    Directory of Open Access Journals (Sweden)

    J.-T. Hwang

    2016-10-01

    This study presents an approach wherein photographs with a high degree of overlap are captured using a digital camera and used to generate three-dimensional (3D) point clouds via feature point extraction and matching. To reconstruct a building model, an unmanned aerial vehicle (UAV) is used to capture photographs from vertical shooting angles above the building. Multiview images are taken from the ground to eliminate the shielding effect on UAV images caused by trees. Point clouds from the UAV and multiview images are generated via Pix4Dmapper. By merging the two sets of point clouds via tie points, the complete building model is reconstructed. The 3D models are reconstructed using AutoCAD 2016 to generate vectors from the point clouds; SketchUp Make 2016 is used to rebuild a complete building model with textures. To apply 3D building models in urban planning and design, a modern approach is to rebuild the digital models; however, replacing the landscape design and building distribution in real time is difficult as the frequency of building replacement increases. One potential solution to these problems is augmented reality (AR). Using Unity3D and Vuforia to design and implement a smartphone application service, a markerless AR view of the building model can be built. This study is aimed at providing technical and design skills related to urban planning, urban design, and building information retrieval using AR.

  17. Advanced 3-D Ultrasound Imaging

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer

    The main purpose of the PhD project was to develop methods that increase the 3-D ultrasound imaging quality available to medical personnel in the clinic. Acquiring a 3-D volume gives the medical doctor the freedom to investigate the measured anatomy in any slice desired after the scan has...... been completed. This allows for precise measurements of organ dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinics as 2-D imaging. A limiting factor has traditionally been the low image quality achievable using...... a channel-limited 2-D transducer array and the conventional 3-D beamforming technique, Parallel Beamforming. The first part of the scientific contributions demonstrates that 3-D synthetic aperture imaging achieves a better image quality than the Parallel Beamforming technique. Data were obtained using both...

  18. Handbook of 3D integration

    CERN Document Server

    Garrou , Philip; Ramm , Peter

    2014-01-01

    Edited by key figures in 3D integration and written by top authors from high-tech companies and renowned research institutions, this book covers the intricate details of 3D process technology. As such, the main focus is on silicon via formation, bonding and debonding, thinning, via reveal and backside processing, both from a technological and a materials science perspective. The last part of the book is concerned with assessing and enhancing the reliability of the 3D integrated devices, which is a prerequisite for the large-scale implementation of this emerging technology. Invaluable reading fo

  19. 3D vector flow imaging

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes

    The main purpose of this PhD project is to develop an ultrasonic method for 3D vector flow imaging. The motivation is to advance the field of velocity estimation in ultrasound, which plays an important role in the clinic. The velocity of blood has components in all three spatial dimensions, yet...... are (vx, vy, vz) = (-0.03, 95, 1.0) ± (9, 6, 1) cm/s compared with the expected (0, 96, 0) cm/s. Afterwards, 3D vector flow images from a cross-sectional plane of the vessel are presented. The out of plane velocities exhibit the expected 2D circular-symmetric parabolic shape. The experimental results...... verify that the 3D TO method estimates the complete 3D velocity vectors, and that the method is suitable for 3D vector flow imaging....

  20. 3D integrated superconducting qubits

    Science.gov (United States)

    Rosenberg, D.; Kim, D.; Das, R.; Yost, D.; Gustavsson, S.; Hover, D.; Krantz, P.; Melville, A.; Racz, L.; Samach, G. O.; Weber, S. J.; Yan, F.; Yoder, J. L.; Kerman, A. J.; Oliver, W. D.

    2017-10-01

    As the field of quantum computing advances from the few-qubit stage to larger-scale processors, qubit addressability and extensibility will necessitate the use of 3D integration and packaging. While 3D integration is well-developed for commercial electronics, relatively little work has been performed to determine its compatibility with high-coherence solid-state qubits. Of particular concern, qubit coherence times can be suppressed by the requisite processing steps and close proximity of another chip. In this work, we use a flip-chip process to bond a chip with superconducting flux qubits to another chip containing structures for qubit readout and control. We demonstrate that high qubit coherence (T1, T2,echo > 20 μs) is maintained in a flip-chip geometry in the presence of galvanic, capacitive, and inductive coupling between the chips.

  1. Integration of multi-modality imaging for accurate 3D reconstruction of human coronary arteries in vivo

    International Nuclear Information System (INIS)

    Giannoglou, George D.; Chatzizisis, Yiannis S.; Sianos, George; Tsikaderis, Dimitrios; Matakos, Antonis; Koutkias, Vassilios; Diamantopoulos, Panagiotis; Maglaveras, Nicos; Parcharidis, George E.; Louridas, George E.

    2006-01-01

    In conventional intravascular ultrasound (IVUS)-based three-dimensional (3D) reconstruction of human coronary arteries, IVUS images are arranged linearly, generating a straight vessel volume. However, with this approach the real vessel curvature is neglected. To overcome this limitation, an imaging method was developed based on the integration of IVUS and biplane coronary angiography (BCA). In 17 coronary arteries from nine patients, IVUS and BCA were performed. From each angiographic projection, a single end-diastolic frame was selected, and in each frame the IVUS catheter was interactively detected for the extraction of the 3D catheter path. Ultrasound data were obtained with a sheath-based catheter and recorded on S-VHS videotape. The S-VHS data were digitized, and lumen and media-adventitia contours were semi-automatically detected in end-diastolic IVUS images. Each pair of contours was aligned perpendicularly to the catheter path and rotated in space by implementing an algorithm based on the Frenet-Serret rules. Lumen and media-adventitia contours were interpolated through the generation of intermediate contours, creating a real 3D lumen and vessel volume, respectively. The absolute orientation of the reconstructed lumen was determined by back-projecting it onto both angiographic planes and comparing the projected lumen with the actual angiographic lumen. In conclusion, our method is capable of performing rapid and accurate 3D reconstruction of human coronary arteries in vivo. This technique can be utilized for reliable plaque morphometric, geometrical and hemodynamic analyses.
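
    The contours are oriented in space using an algorithm based on the Frenet-Serret rules. As a rough illustration of that geometric step (not the authors' implementation), the sketch below computes approximate tangent/normal/binormal frames along a discretely sampled 3-D path with finite differences; the helical test path and the frame-placement comment are illustrative assumptions.

    ```python
    import numpy as np

    def frenet_frames(path):
        """Approximate Frenet-Serret frames (tangent T, normal N, binormal B)
        along a discretely sampled 3-D path of shape (n, 3) using finite
        differences. Degenerate (near-straight) segments are not handled."""
        d1 = np.gradient(path, axis=0)                        # first derivative
        T = d1 / np.linalg.norm(d1, axis=1, keepdims=True)
        dT = np.gradient(T, axis=0)
        N = dT / (np.linalg.norm(dT, axis=1, keepdims=True) + 1e-12)
        B = np.cross(T, N)
        return T, N, B

    # Example: a helix standing in for a gently curving catheter path.
    t = np.linspace(0, 4 * np.pi, 200)
    helix = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
    T, N, B = frenet_frames(helix)
    # A 2-D IVUS contour (u, v) in its local plane could then be placed at path
    # point i by mapping (u, v) -> helix[i] + u * N[i] + v * B[i].
    ```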

  2. A high-frequency transimpedance amplifier for CMOS integrated 2D CMUT array towards 3D ultrasound imaging.

    Science.gov (United States)

    Huang, Xiwei; Cheong, Jia Hao; Cha, Hyouk-Kyu; Yu, Hongbin; Je, Minkyu; Yu, Hao

    2013-01-01

    A transimpedance amplifier (TIA)-based CMOS analog front-end (AFE) receiver is integrated with capacitive micromachined ultrasound transducers (CMUTs) for high-frequency 3D ultrasound imaging. Considering the device specifications of the CMUTs, the TIA is designed to amplify received signals from 17.5 MHz to 52.5 MHz with a center frequency of 35 MHz, and is fabricated in a GlobalFoundries 0.18-µm 30-V high-voltage (HV) Bipolar/CMOS/DMOS (BCD) process. The measurement results show that the TIA, with a 6-V power supply, can reach a transimpedance gain of 61 dBΩ and an operating frequency range from 17.5 MHz to 100 MHz. The measured input-referred noise is 27.5 pA/√Hz. Acoustic pulse-echo testing was conducted to demonstrate the receiving functionality of the designed 3D ultrasound imaging system.
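
    For readers unfamiliar with the dBΩ unit, the reported 61 dBΩ transimpedance gain can be converted to ohms with the standard definition 20·log10(Z/1 Ω); the short check below is plain arithmetic, not taken from the paper beyond the quoted figure.

    ```python
    gain_dbohm = 61.0                     # reported transimpedance gain
    z_ohm = 10 ** (gain_dbohm / 20.0)     # dBOhm = 20 * log10(Z / 1 Ohm)
    print(f"{z_ohm:.0f} Ohm")             # ~1122 Ohm, i.e. roughly 1.1 kOhm
    ```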

  3. A 3D imaging system integrating photoacoustic and fluorescence orthogonal projections for anatomical, functional and molecular assessment of rodent models

    Science.gov (United States)

    Brecht, Hans P.; Ivanov, Vassili; Dumani, Diego S.; Emelianov, Stanislav Y.; Anastasio, Mark A.; Ermilov, Sergey A.

    2018-03-01

    We have developed a preclinical 3D imaging instrument integrating photoacoustic tomography and fluorescence (PAFT) addressing known deficiencies in sensitivity and spatial resolution of the individual imaging components. PAFT is designed for simultaneous acquisition of photoacoustic and fluorescence orthogonal projections at each rotational position of a biological object, enabling direct registration of the two imaging modalities. Orthogonal photoacoustic projections are utilized to reconstruct large (21 cm³) volumes showing vascularized anatomical structures and regions of induced optical contrast with spatial resolution exceeding 100 µm. The major advantage of orthogonal fluorescence projections is significant reduction of background noise associated with transmitted or backscattered photons. The fluorescence imaging component of PAFT is used to boost detection sensitivity by providing low-resolution spatial constraint for the fluorescent biomarkers. PAFT performance characteristics were assessed by imaging optical and fluorescent contrast agents in tissue mimicking phantoms and in vivo. The proposed PAFT technology will enable functional and molecular volumetric imaging using fluorescent biomarkers, nanoparticles, and other photosensitive constructs mapped with high fidelity over robust anatomical structures, such as skin, central and peripheral vasculature, and internal organs.

  4. Analysis of 3-D images

    Science.gov (United States)

    Wani, M. Arif; Batchelor, Bruce G.

    1992-03-01

    Deriving generalized representations of 3-D objects for analysis and recognition is a very difficult task. Three types of representation, chosen according to the type of object, are used in this paper. Objects that have well-defined geometrical shapes are segmented using a fast edge/region-based segmentation technique. The segmented image is represented by a plan and elevation of each part of the object if the object parts are symmetrical about their central axis. The plan-and-elevation concept enables such objects to be represented and analyzed quickly and efficiently. The second type of representation is used for objects having parts that are not symmetrical about their central axis. The segmented surface patches of such objects are represented by the 3-D boundary and the surface features of each segmented surface. Finally, the third type of representation is used for objects that do not have well-defined geometrical shapes (for example, a loaf of bread). These objects are represented and analyzed from their features, which are derived using a multiscale contour-based technique. An anisotropic Gaussian smoothing technique is introduced to segment the contours at various scales of smoothing. A new merging technique is used which yields the current best estimate of break points at each scale. This technique eliminates the loss of localization accuracy at coarser scales without using a scale-space tracking approach.

  5. 3-D Imaging Using Row-Column-Addressed Arrays With Integrated Apodization

    DEFF Research Database (Denmark)

    Christiansen, Thomas Lehrmann; Rasmussen, Morten Fischer; Bagge, Jan Peter

    2015-01-01

    Pa, and the sensitivity was 0.299 ± 0.090 V/Pa. The nearest neighbor crosstalk level was -23.9 ± 3.7 dB, while the transmit-to-receive-elements crosstalk level was -40.2 ± 3.5 dB. Imaging of a 0.3-mm-diameter steel wire using synthetic transmit focusing with 62 single-element emissions demonstrated axial and lateral...

  6. Integrated light-sheet imaging and flow-based enquiry (iLIFE) system for 3D in-vivo imaging of multicellular organism

    Science.gov (United States)

    Rasmi, Chelur K.; Padmanabhan, Sreedevi; Shirlekar, Kalyanee; Rajan, Kanhirodan; Manjithaya, Ravi; Singh, Varsha; Mondal, Partha Pratim

    2017-12-01

    We propose and demonstrate a light-sheet-based 3D interrogation system on a microfluidic platform for screening biological specimens during flow. To achieve this, a diffraction-limited light-sheet (with a large field-of-view) is employed to optically section the specimens flowing through the microfluidic channel. This necessitates optimization of the parameters of the illumination sub-system (illumination intensity, light-sheet width, and thickness), the microfluidic specimen platform (channel width and flow rate), and the detection sub-system (camera exposure time and frame rate). Once optimized, these parameters facilitate cross-sectional imaging and 3D reconstruction of biological specimens. The proposed integrated light-sheet imaging and flow-based enquiry (iLIFE) imaging technique enables single-shot sectional imaging of specimens of varying dimensions, from a single cell (HeLa) to a multicellular organism (C. elegans). 3D reconstruction of an entire C. elegans is achieved in real time with an exposure time of a few hundred microseconds. A maximum likelihood technique is developed and optimized for the iLIFE imaging system. We observed intracellular resolution for mitochondria-labeled HeLa cells, which demonstrates the dynamic resolution of the iLIFE system. The proposed technique is a step towards achieving flow-based 3D imaging. We expect potential applications in diverse fields such as structural biology and biophysics.

  7. Surgical Navigation Technology Based on Augmented Reality and Integrated 3D Intraoperative Imaging: A Spine Cadaveric Feasibility and Accuracy Study.

    Science.gov (United States)

    Elmi-Terander, Adrian; Skulason, Halldor; Söderman, Michael; Racadio, John; Homan, Robert; Babic, Drazenko; van der Vaart, Nijs; Nachabe, Rami

    2016-11-01

    A cadaveric laboratory study. The aim of this study was to assess the feasibility and accuracy of thoracic pedicle screw placement using augmented reality surgical navigation (ARSN). Recent advances in spinal navigation have shown improved accuracy in lumbosacral pedicle screw placement but limited benefits in the thoracic spine. 3D intraoperative imaging and instrument navigation may allow improved accuracy in pedicle screw placement, without the use of x-ray fluoroscopy, and thus open the route to image-guided minimally invasive therapy in the thoracic spine. ARSN encompasses a surgical table, a motorized flat detector C-arm with intraoperative 2D/3D capabilities, integrated optical cameras for augmented reality navigation, and noninvasive patient motion tracking. Two neurosurgeons placed 94 pedicle screws in the thoracic spine of four cadavers using ARSN on one side of the spine (47 screws) and a free-hand technique on the contralateral side. X-ray fluoroscopy was not used for either technique. Four independent reviewers assessed the postoperative scans using the Gertzbein grading. Morphometric measurements of the pedicles' axial and sagittal widths and angles, as well as the vertebrae's axial and sagittal rotations, were performed to identify risk factors for breaches. ARSN was feasible and superior to the free-hand technique with respect to overall accuracy (85% vs. 64%, P dimensions, except for vertebral body axial rotation, were risk factors for larger breaches when performed with the free-hand method. ARSN without fluoroscopy was feasible and demonstrated higher accuracy than the free-hand technique for thoracic pedicle screw placement. N/A.

  8. Gabor-domain optical coherence microscopy with integrated dual-axis MEMS scanner for fast 3D imaging and metrology

    Science.gov (United States)

    Canavesi, Cristina; Cogliati, Andrea; Hayes, Adam; Santhanam, Anand P.; Tankam, Patrice; Rolland, Jannick P.

    2015-10-01

    Fast, robust, nondestructive 3D imaging is needed for characterization of microscopic structures in industrial and clinical applications. A custom micro-electromechanical system (MEMS)-based 2D scanner system was developed to achieve 55 kHz A-scan acquisition in a Gabor-domain optical coherence microscopy (GD-OCM) instrument with a novel multilevel GPU architecture for high-speed imaging. GD-OCM yields high-definition volumetric imaging with dynamic depth of focusing through a bio-inspired liquid lens-based microscope design, which has no moving parts and is suitable for use in a manufacturing setting or in a medical environment. A dual-axis MEMS mirror was chosen to replace two single-axis galvanometer mirrors; as a result, the astigmatism caused by the mismatch between the optical pupil and the scanning location was eliminated and a 12x reduction in volume of the scanning system was achieved. Imaging at an invariant resolution of 2 μm was demonstrated throughout a volume of 1 × 1 × 0.6 mm³, acquired in less than 2 minutes. The MEMS-based scanner resulted in improved image quality, increased robustness and lighter weight of the system, all factors that are critical for on-field deployment. A custom integrated feedback system consisting of a laser diode and a position-sensing detector was developed to investigate the impact of the resonant frequency of the MEMS and the driving signal of the scanner on the movement of the mirror. Results on the metrology of manufactured materials and characterization of tissue samples with GD-OCM are presented.

  9. 3-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon

    , if this significant reduction in the element count can still provide precise and robust 3-D vector flow estimates in a plane. The study concludes that the RC array is capable of estimating precise 3-D vector flow both in a plane and in a volume, despite the low channel count. However, some inherent new challenges...... ultrasonic vector flow estimation and bring it a step closer to a clinical application. A method for high frame rate 3-D vector flow estimation in a plane using the transverse oscillation method combined with a 1024 channel 2-D matrix array is presented. The proposed method is validated both through phantom...... hampers the task of real-time processing. In a second study, some of the issue with the 2-D matrix array are solved by introducing a 2-D row-column (RC) addressing array with only 62 + 62 elements. It is investigated both through simulations and via experimental setups in various flow conditions...

  10. 3D Integration for Wireless Multimedia

    Science.gov (United States)

    Kimmich, Georg

    The convergence of mobile phone, internet, mapping, gaming and office automation tools with high-quality video and still-image capture capability is becoming a strong market trend for portable devices. High-density video encode and decode, 3D graphics for gaming, increased application-software complexity and ultra-high-bandwidth 4G modem technologies are driving the CPU performance and memory bandwidth requirements close to the PC segment. These portable multimedia devices are battery operated, which requires the deployment of new low-power-optimized silicon process technologies and ultra-low-power design techniques at the system, architecture and device levels. Mobile devices also need to comply with stringent silicon-area and package-volume constraints. As for all consumer devices, low production cost and fast time-to-volume production are key for success. This chapter shows how 3D architectures can bring a possible breakthrough to meet the conflicting power, performance and area constraints. Multiple 3D die-stacking partitioning strategies are described and analyzed for their potential to improve the overall system power, performance and cost for specific application scenarios. Requirements and maturity of the basic process-technology bricks, including through-silicon via (TSV) and die-to-die attachment techniques, are reviewed. Finally, we highlight new challenges which will arise with 3D stacking and give an outlook on how they may be addressed: higher power density will require thermal design considerations; new EDA tools will need to be developed to cope with the integration of heterogeneous technologies and to guarantee signal and power integrity across the die stack; the silicon/wafer test strategies have to be adapted to handle high-density IO arrays and ultra-thin wafers and to provide built-in self-test of attached memories; and new standards and business models have to be developed to allow cost-efficient assembly and testing of devices from different silicon and technology

  11. 3D images and expert system

    International Nuclear Information System (INIS)

    Hasegawa, Jun-ichi

    1998-01-01

    This paper presents an expert system called 3D-IMPRESS for supporting applications of three dimensional (3D) image processing. This system can automatically construct a 3D image processing procedure based on a pictorial example of the goal given by a user. In the paper, to evaluate the performance of the system, it was applied to construction of procedures for extracting specific component figures from practical chest X-ray CT images. (author)

  12. 3D molecular imaging SIMS

    Energy Technology Data Exchange (ETDEWEB)

    Gillen, Greg [Surface and Microanalysis Science Division, National Institute of Standards and Technology, Gaithersburg, MD 20899-8371 (United States)]. E-mail: Greg.gillen@nist.gov; Fahey, Albert [Surface and Microanalysis Science Division, National Institute of Standards and Technology, Gaithersburg, MD 20899-8371 (United States); Wagner, Matt [Surface and Microanalysis Science Division, National Institute of Standards and Technology, Gaithersburg, MD 20899-8371 (United States); Mahoney, Christine [Surface and Microanalysis Science Division, National Institute of Standards and Technology, Gaithersburg, MD 20899-8371 (United States)

    2006-07-30

    Thin monolayer and bilayer films of spin-cast poly(methyl methacrylate) (PMMA), poly(2-hydroxyethyl methacrylate) (PHEMA), poly(lactic) acid (PLA) and PLA doped with several pharmaceuticals have been analyzed by dynamic SIMS using SF₅⁺ polyatomic primary ion bombardment. Each of these systems exhibited minimal primary-beam-induced degradation under cluster ion bombardment, allowing molecular depth profiles to be obtained through the film. By combining secondary ion imaging with depth profiling, three-dimensional molecular image depth profiles have been obtained from these systems. In another approach, bevel cross-sections are cut in the samples with the SF₅⁺ primary ion beam to produce a laterally magnified cross-section of the sample that does not contain the beam-induced damage that would be induced by conventional focussed ion beam (FIB) cross-sectioning. The bevel surface can then be examined using cluster SIMS imaging or another appropriate microanalysis technique.

  13. 3D Backscatter Imaging System

    Science.gov (United States)

    Whitaker, Ross (Inventor); Turner, D. Clark (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises both a radiation source for irradiating an object that is rotationally movable about the object, and a detector for detecting backscattered radiation from the object that can be disposed on substantially the same side of the object as the source and which can be rotationally movable about the object. The detector can be separated into multiple detector segments with each segment having a single line of sight projection through the object and so detects radiation along that line of sight. Thus, each detector segment can isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three dimensional reconstruction of the object. Other embodiments are described.

  14. 3D composite image, 3D MRI, 3D SPECT, hydrocephalus

    International Nuclear Information System (INIS)

    Mito, T.; Shibata, I.; Sugo, N.; Takano, M.; Takahashi, H.

    2002-01-01

    The three-dimensional (3D) SPECT imaging technique we have studied and published on for the past several years is an analytical tool that permits visual expression of the cerebral circulation profile in various cerebral diseases. The greatest drawback of SPECT is that the limited precision of its spatial resolution makes intracranial localization impossible. In 3D SPECT imaging, the intracranial volume and morphology may vary with the threshold established. To solve this problem, we have produced complementarily combined SPECT and helical-CT 3D images by means of general-purpose visualization software for intracranial localization. In hydrocephalus, however, the key subject to be studied is the profile of cerebral circulation around the ventricles of the brain. This suggests that, for displaying the cerebral ventricles in three dimensions, CT is a difficult technique whereas MRI is more useful. For this reason, we attempted to establish the profile of cerebral circulation around the cerebral ventricles by producing combined 3D images of SPECT and MRI. In patients who had shunt surgery for hydrocephalus, the difference between pre- and postoperative cerebral circulation profiles was assessed by a voxel distribution curve, 3D SPECT images, and combined 3D SPECT and MRI images. As the shunt system in this study, an Orbis-Sigma valve of the automatic cerebrospinal fluid volume adjustment type was used in place of the variable-pressure-type Medos valve currently in use, because this device requires frequent changes in pressure and a change in pressure may be detected after an MRI procedure. The SPECT apparatus used was a PRISM3000 of the three-detector type, and 123I-IMP was used as the radionuclide at a dose of 222 MBq. MRI data were collected with a MAGNEXa+2 with a magnetic flux density of 0.5 tesla under the following conditions: field echo; TR, 50 msec; TE, 10 msec; flip angle, 30°; 1 NEX; FOV, 23 cm; 1-mm slices; and gapless. 3D images are produced on the workstation TITAN

  15. 3D Membrane Imaging and Porosity Visualization

    KAUST Repository

    Sundaramoorthi, Ganesh; Hadwiger, Markus; Ben Romdhane, Mohamed; Behzad, Ali Reza; Madhavan, Poornima; Nunes, Suzana Pereira

    2016-01-01

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore

  16. 3D Reconstruction of NMR Images

    Directory of Open Access Journals (Sweden)

    Peter Izak

    2007-01-01

    This paper introduces an experiment in the 3D reconstruction of NMR images scanned from a magnetic resonance device. Methods that can be used for 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, an approach based on the Vision Assistant program, which is a part of LabVIEW, was chosen.
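
    The abstract bases its reconstruction on the marching cubes algorithm (implemented there via LabVIEW's Vision Assistant). As a library-based illustration of the same idea, the sketch below extracts an iso-surface from a synthetic volume with scikit-image; the sphere phantom, iso-level and the use of skimage are assumptions, not the paper's toolchain.

    ```python
    import numpy as np
    from skimage import measure  # pip install scikit-image

    # Synthetic MR-like volume: a binary sphere of radius 20 voxels.
    z, y, x = np.mgrid[-32:32, -32:32, -32:32]
    volume = (np.sqrt(x**2 + y**2 + z**2) < 20).astype(np.float32)

    # Marching cubes extracts the triangulated iso-surface at level 0.5.
    # `verts` are in voxel coordinates and would be scaled by the slice
    # spacing of the actual NMR acquisition before rendering.
    verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
    print(verts.shape, faces.shape)
    ```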

  17. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
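
    Stereo vision, the first of the two methods mentioned, recovers depth from the disparity between two rectified camera views via Z = f·B/d. The sketch below is a minimal illustration of that relation with assumed focal length and baseline values, not the system described in the presentation.

    ```python
    import numpy as np

    def depth_from_disparity(disparity_px, focal_px, baseline_m):
        """Rectified stereo: depth Z = f * B / d, with disparity d in pixels,
        focal length f in pixels and baseline B in metres."""
        d = np.asarray(disparity_px, dtype=float)
        return np.where(d > 0, focal_px * baseline_m / np.maximum(d, 1e-6), np.inf)

    # Illustrative numbers: 1200 px focal length, 10 cm baseline.
    print(depth_from_disparity([60, 30, 6], focal_px=1200.0, baseline_m=0.10))
    # -> [ 2.  4. 20.] metres
    ```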

  18. 3D EIT image reconstruction with GREIT.

    Science.gov (United States)

    Grychtol, Bartłomiej; Müller, Beat; Adler, Andy

    2016-06-01

    Most applications of thoracic EIT use a single plane of electrodes on the chest from which a transverse image 'slice' is calculated. However, interpretation of EIT images is made difficult by the large region above and below the electrode plane to which EIT is sensitive. Volumetric EIT images using two (or more) electrode planes should help compensate, but are little used currently. The Graz consensus reconstruction algorithm for EIT (GREIT) has become popular in lung EIT. One shortcoming of the original formulation of GREIT is its restriction to reconstruction onto a 2D planar image. We present an extension of the GREIT algorithm to 3D and develop open-source tools to evaluate its performance as a function of the choice of stimulation and measurement pattern. Results show 3D GREIT using two electrode layers has significantly more uniform sensitivity profiles through the chest region. Overall, the advantages of 3D EIT are compelling.

  19. Copper Electrodeposition for 3D Integration

    OpenAIRE

    Beica , Rozalia; Sharbono , Charles; Ritzdorf , Tom

    2008-01-01

    Submitted on behalf of EDA Publishing Association (http://irevues.inist.fr/handle/2042/16838); International audience; Two-dimensional (2D) integration has been the traditional approach for IC integration. Increasing demands for electronic devices with superior performance and functionality in more efficient and compact packages have driven the semiconductor industry to develop more advanced packaging technologies. Three-dimensional (3D) approaches address both miniaturizatio...

  20. 3-D image reconstruction in radiology

    International Nuclear Information System (INIS)

    Grangeat, P.

    1999-01-01

    In this course, we present highlights of fully 3-D image reconstruction algorithms used in 3-D X-ray Computed Tomography (3-D-CT) and 3-D Rotational Radiography (3-D-RR). We first consider the case of spiral CT with a one-row detector. Starting from the 2-D fan-beam inversion formula for a circular trajectory, we introduce a spiral CT 3-D image reconstruction algorithm using axial interpolation for each transverse slice. In order to improve the X-ray detection efficiency and to speed up the acquisition process, the future is to use multi-row detectors associated with a small-angle cone-beam geometry. The generalization of the 2-D fan-beam image reconstruction algorithm to cone-beam geometry defines direct inversion formulas, referred to as Feldkamp's algorithm for a circular trajectory and Wang's algorithm for a spiral trajectory. However, large-area detectors do exist, such as radiological image intensifiers or, in the near future, solid-state detectors. To obtain a larger zoom effect, a cone-beam geometry associated with a large aperture angle is defined. For this case, we introduce an indirect image reconstruction algorithm based on plane re-binning in the Radon domain. We will present some results from a prototype MORPHOMETER device using the RADON reconstruction software. Lastly, we consider the special case of 3-D Rotational Digital Subtraction Angiography with a restricted number of views. We introduce constrained optimization algorithms using quadratic, entropic or half-quadratic constraints. The generalized ART (Algebraic Reconstruction Technique) iterative reconstruction algorithm can be derived from the Bregman algorithm. We present reconstructed vascular trees from a prototype MORPHOMETER device. (author)
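
    The last paragraph refers to generalized ART derived from the Bregman algorithm. The sketch below shows only the basic, unconstrained ART (Kaczmarz) row-action iteration for a linear system A x = b, i.e. the building block rather than the constrained variants discussed; the random test system and relaxation factor are illustrative assumptions.

    ```python
    import numpy as np

    def art(A, b, n_sweeps=10, relax=1.0, x0=None):
        """Basic ART (Kaczmarz) iteration: cycle over the rows of A and project
        the current estimate onto the hyperplane defined by each measurement."""
        m, n = A.shape
        x = np.zeros(n) if x0 is None else x0.astype(float).copy()
        row_norm2 = np.einsum('ij,ij->i', A, A)     # squared norm of each row
        for _ in range(n_sweeps):
            for i in range(m):
                if row_norm2[i] == 0.0:
                    continue
                residual = b[i] - A[i] @ x
                x += relax * residual / row_norm2[i] * A[i]
        return x

    # Small consistent test system (not a tomographic geometry).
    rng = np.random.default_rng(1)
    A = rng.normal(size=(40, 20))
    x_true = rng.normal(size=20)
    x_est = art(A, A @ x_true, n_sweeps=200)
    print(np.max(np.abs(x_est - x_true)))   # should be small for this system
    ```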

  1. 3D-LSI technology for image sensor

    International Nuclear Information System (INIS)

    Motoyoshi, Makoto; Koyanagi, Mitsumasa

    2009-01-01

    Recently, the development of three-dimensional large-scale integration (3D-LSI) technologies has accelerated and has advanced from the research level or the limited production level to the investigation level, which might lead to mass production. By separating 3D-LSI technology into elementary technologies such as (1) through silicon via (TSV) formation, (2) bump formation, (3) wafer thinning, (4) chip/wafer alignment, and (5) chip/wafer stacking and reconstructing the entire process and structure, many methods to realize 3D-LSI devices can be developed. However, by considering a specific application, the supply chain of base wafers, and the purpose of 3D integration, a few suitable combinations can be identified. In this paper, we focus on the application of 3D-LSI technologies to image sensors. We describe the process and structure of the chip size package (CSP), developed on the basis of current and advanced 3D-LSI technologies, to be used in CMOS image sensors. Using the current LSI technologies, CSPs for 1.3 M, 2 M, and 5 M pixel CMOS image sensors were successfully fabricated without any performance degradation. 3D-LSI devices can be potentially employed in high-performance focal-plane-array image sensors. We propose a high-speed image sensor with an optical fill factor of 100% to be developed using next-generation 3D-LSI technology and fabricated using micro(μ)-bumps and micro(μ)-TSVs.

  2. Feasibility study of P2P-type system architecture with 3D medical image data support for medical integrated network systems

    International Nuclear Information System (INIS)

    Noji, Tamotsu; Arino, Masashi; Suto, Yasuzo

    2010-01-01

    We are investigating an integrated medical network system with an electronic letter of introduction function and a 3D image support function operating in the Internet environment. However, the problems with current C/S (client/server)-type systems are inadequate security countermeasures and insufficient transmission availability. In this report, we propose a medical information cooperation system architecture that employs a P2P (peer-to-peer)-type communication method rather than a C/S-type method, which helps to prevent a reduction in processing speed when large amounts of data (such as 3D images) are transferred. In addition, a virtual clinic was created and a feasibility study was conducted to evaluate the P2P-type system. The results showed that efficiency was improved by about 77% in real-time transmission, suggesting that this system may be suitable for practical application. (author)

  3. Integration of 3D imaging data in the assessment of aortic stenosis: impact on classification of disease severity.

    Science.gov (United States)

    O'Brien, Bridget; Schoenhagen, Paul; Kapadia, Samir R; Svensson, Lars G; Rodriguez, Leonardo; Griffin, Brian P; Tuzcu, E Murat; Desai, Milind Y

    2011-09-01

    In patients with aortic stenosis (AS), precise assessment of severity is critical for treatment decisions. Estimation of aortic valve area (AVA) with the transthoracic echocardiographic (TTE) continuity equation (CE) assumes a circular left ventricular outflow tract (LVOT). We evaluated the incremental utility of 3D multidetector computed tomography (MDCT) over TTE assessment of AS severity. We included 51 patients (age, 81 ± 8 years; 61% men; mean gradient, 42 ± 12 mm Hg) with calcific AS who underwent evaluation for treatment options. TTE parameters included systolic LVOT diameter (D), continuous and pulsed wave (CW and PW) velocity-time integrals (VTI) through the LVOT, and mean transaortic gradient. MDCT parameters included systolic LVOT area, ratio of maximal to minimal LVOT diameter (eccentricity index), and aortic planimetry (AVAp). TTE-CE AVA [(D² × 0.786 × VTIpw)/VTIcw] and dimensionless index (DI) [VTIpw/VTIcw] were calculated. Corrected AVA was calculated by substituting the MDCT LVOT area into the CE. The majority (96%) of patients had an eccentric LVOT. LVOT area, measured on MDCT, was higher than on TTE (3.84 ± 0.8 cm² versus 3.03 ± 0.5 cm², P<0.01). TTE-AVA was smaller than AVAp and corrected AVA (0.67 ± 0.1 cm², 0.82 ± 0.3 cm², and 0.86 ± 0.3 cm², P<0.01). Using TTE measurements alone, 73% of patients had congruence for severe AS (DI ≤0.25 and CE AVA <0.8 cm²), which increased to 92% using the corrected CE. In patients with suspected severe AS, incorporation of the MDCT-LVOT area into the CE improves congruence for AS severity.
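
    The continuity-equation and corrected-AVA calculations above are simple enough to state in a few lines. The sketch below reproduces them with hypothetical input values (not data from the study); 0.786 is pi/4, the circular-LVOT area factor.

    ```python
    import math

    def continuity_ava(lvot_diameter_cm, vti_lvot_cm, vti_av_cm):
        """Continuity-equation AVA assuming a circular LVOT:
        AVA = (pi/4 * D^2) * VTI_LVOT / VTI_AV  (0.786 ~ pi/4)."""
        lvot_area = math.pi / 4.0 * lvot_diameter_cm ** 2
        return lvot_area * vti_lvot_cm / vti_av_cm

    def corrected_ava(lvot_area_mdct_cm2, vti_lvot_cm, vti_av_cm):
        """Corrected AVA: substitute the planimetered MDCT LVOT area
        for the circular-assumption area in the continuity equation."""
        return lvot_area_mdct_cm2 * vti_lvot_cm / vti_av_cm

    # Hypothetical values for illustration only (not patient data from the study).
    print(continuity_ava(2.0, 22.0, 110.0))   # ~0.63 cm^2
    print(corrected_ava(3.8, 22.0, 110.0))    # ~0.76 cm^2
    ```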

  4. 3D Printing Openable Imaging Phantom Design

    International Nuclear Information System (INIS)

    Kim, Myoung Keun; Won, Jun Hyeok; Lee, Seung Wook

    2017-01-01

    The purpose of this study is to design an openable phantom that can replace the internal measurement bar used for contrast comparison, in order to increase the efficiency of manufacturing imaging phantoms used in the medical industry and to improve convenience using a 3D printer. Phantom concept design, 3D printing, and image reconstruction were defined as the scope of the thesis. We also study metal artifact reduction with the openable phantom. We have designed an openable phantom using 3D printing and have investigated metal artifact reduction after inserting a metallic material inside the phantom. The openable phantom can be adjusted at any time to suit the user's experiment, and its parts can be easily replaced, which makes it practical to use.

  5. 3D Integration for Superconducting Qubits

    Science.gov (United States)

    Rosenberg, Danna; Kim, David; Yost, Donna-Ruth; Mallek, Justin; Yoder, Jonilyn; Das, Rabindra; Racz, Livia; Hover, David; Weber, Steven; Kerman, Andrew; Oliver, William

    Superconducting qubits are a prime candidate for constructing a large-scale quantum processor due to their lithographic scalability, speed, and relatively long coherence times. Moving beyond the few qubit level, however, requires the use of a three-dimensional approach for routing control and readout lines. 3D integration techniques can be used to construct a structure where the sensitive qubits are shielded from a potentially-lossy readout and interconnect chip by an intermediate chip with through-substrate vias, with indium bump bonds providing structural support and electrical conductivity. We will discuss our work developing 3D-integrated coupled qubits, focusing on the characterization of 3D integration components and the effects on qubit performance and design. This research was funded by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA) via MIT Lincoln Laboratory under Air Force Contract No. FA8721-05-C-0002. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of ODNI, IARPA, or the US Government.

  6. An integrated circuit with transmit beamforming flip-chip bonded to a 2-D CMUT array for 3-D ultrasound imaging.

    Science.gov (United States)

    Wygant, Ira O; Jamal, Nafis S; Lee, Hyunjoo J; Nikoozadeh, Amin; Oralkan, Omer; Karaman, Mustafa; Khuri-Yakub, Butrus T

    2009-10-01

    State-of-the-art 3-D medical ultrasound imaging requires transmitting and receiving ultrasound using a 2-D array of ultrasound transducers with hundreds or thousands of elements. A tight combination of the transducer array with integrated circuitry eliminates bulky cables connecting the elements of the transducer array to a separate system of electronics. Furthermore, preamplifiers located close to the array can lead to improved receive sensitivity. A combined IC and transducer array can lead to a portable, high-performance, and inexpensive 3-D ultrasound imaging system. This paper presents an IC flip-chip bonded to a 16 x 16-element capacitive micromachined ultrasonic transducer (CMUT) array for 3-D ultrasound imaging. The IC includes a transmit beamformer that generates 25-V unipolar pulses with programmable focusing delays to 224 of the 256 transducer elements. One-shot circuits allow adjustment of the pulse widths for different ultrasound transducer center frequencies. For receiving reflected ultrasound signals, the IC uses the 32 elements along the array diagonals. The IC provides each receiving element with a low-noise 25-MHz-bandwidth transimpedance amplifier. Operated by a field-programmable gate array (FPGA) clocked at 100 MHz, the IC generated properly timed transmit pulses with 5-ns accuracy. With the IC flip-chip bonded to a CMUT array, we show that the IC can produce steered and focused ultrasound beams. We present 2-D and 3-D images of a wire phantom and 2-D orthogonal cross-sectional images (B-scans) of a latex heart phantom.
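
    For orientation, transmit focusing of a 2-D array amounts to firing each element earlier by the extra travel time from that element to the focal point. The sketch below computes such per-element delays for a hypothetical 16 x 16 geometry; the element pitch, focal point, sound speed and the absence of delay quantization are all assumptions, not parameters of the published IC.

    ```python
    import numpy as np

    def transmit_delays(pitch_m, n_x, n_y, focus_m, c=1540.0):
        """Per-element transmit focusing delays for a 2-D array: elements fire
        earlier the farther they are from the focal point, so all wavefronts
        arrive at the focus simultaneously. focus_m = (x, y, z) in metres."""
        xs = (np.arange(n_x) - (n_x - 1) / 2.0) * pitch_m
        ys = (np.arange(n_y) - (n_y - 1) / 2.0) * pitch_m
        X, Y = np.meshgrid(xs, ys, indexing="ij")
        dist = np.sqrt((X - focus_m[0])**2 + (Y - focus_m[1])**2 + focus_m[2]**2)
        delays = (dist.max() - dist) / c     # seconds, all >= 0
        return delays

    # Hypothetical 250-um pitch, focus 20 mm straight ahead.
    delays = transmit_delays(250e-6, 16, 16, focus_m=(0.0, 0.0, 0.02))
    print(delays.max() * 1e9, "ns max delay")
    ```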

  7. 3D Membrane Imaging and Porosity Visualization

    KAUST Repository

    Sundaramoorthi, Ganesh

    2016-03-03

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore size in different layers orthogonal and parallel to the membrane surface. The 3D reconstruction additionally enabled the visualization of pore interconnectivity in different parts of the membrane. The method was demonstrated for a block copolymer porous membrane and can be extended to other membranes with applications in ultrafiltration, supports for forward osmosis, etc., offering a complete view of the transport paths in the membrane.

  8. 3D IMAGING USING COHERENT SYNCHROTRON RADIATION

    Directory of Open Access Journals (Sweden)

    Peter Cloetens

    2011-05-01

    Full Text Available Three-dimensional imaging is becoming a standard tool for medical, scientific and industrial applications. The use of modern synchrotron radiation sources for monochromatic beam micro-tomography provides several new features. Along with enhanced signal-to-noise ratio and improved spatial resolution, these include the possibility of quantitative measurements, the easy incorporation of special sample environment devices for in-situ experiments, and a simple implementation of phase imaging. These 3D approaches overcome some of the limitations of 2D measurements. They require new tools for image analysis.

  9. Metrological characterization of 3D imaging devices

    Science.gov (United States)

    Guidi, G.

    2013-04-01

    Manufacturers often express the performance of a 3D imaging device in various non-uniform ways, owing to the lack of internationally recognized standard requirements for the metrological parameters that identify the capability of capturing a real scene. For this reason, several national and international organizations have, over the last ten years, been developing protocols for verifying such performance. Ranging from VDI/VDE 2634, published by the Association of German Engineers and oriented to the world of mechanical 3D measurements (triangulation-based devices), to the ASTM technical committee E57, working also on laser systems based on direct range detection (TOF, Phase Shift, FM-CW, flash LADAR), this paper shows the state of the art of the characterization of active range devices, with special emphasis on measurement uncertainty, accuracy and resolution. Most of these protocols are based on special objects whose shape and size are certified with a known level of accuracy. By capturing the 3D shape of such objects with a range device, a comparison between the measured points and the theoretical shape they should represent is possible. The actual deviations can be directly analyzed, or some derived parameters can be obtained (e.g. angles between planes, distances between barycenters of rigidly connected spheres, frequency domain parameters, etc.). This paper shows theoretical aspects and experimental results of some novel characterization methods applied to different categories of active 3D imaging devices based on both principles of triangulation and direct range detection.

  10. Designing TSVs for 3D Integrated Circuits

    CERN Document Server

    Khan, Nauman

    2013-01-01

    This book explores the challenges and presents best strategies for designing Through-Silicon Vias (TSVs) for 3D integrated circuits. It describes a novel technique to mitigate TSV-induced noise, the GND Plug, which is superior to others adapted from 2-D planar technologies, such as a backside ground plane and traditional substrate contacts. The book also investigates, in the form of a comparative study, the impact of TSV size and granularity, spacing of C4 connectors, off-chip power delivery network, shared and dedicated TSVs, and coaxial TSVs on the quality of power delivery in 3-D ICs. The authors provide detailed best design practices for designing 3-D power delivery networks. Since TSVs occupy silicon real-estate and impact device density, this book provides four iterative algorithms to minimize the number of TSVs in a power delivery network. Unlike other existing methods, these algorithms can be applied in early design stages when only functional block-level behaviors and a floorplan are available....

  11. 3D Seismic Imaging using Marchenko Methods

    Science.gov (United States)

    Lomas, A.; Curtis, A.

    2017-12-01

    Marchenko methods are novel, data driven techniques that allow seismic wavefields from sources and receivers on the Earth's surface to be redatumed to construct wavefields with sources in the subsurface - including complex multiply-reflected waves, and without the need for a complex reference model. In turn, this allows subsurface images to be constructed at any such subsurface redatuming points (image or virtual receiver points). Such images are then free of artefacts from multiply-scattered waves that usually contaminate migrated seismic images. Marchenko algorithms require as input the same information as standard migration methods: the full reflection response from sources and receivers at the Earth's surface, and an estimate of the first arriving wave between the chosen image point and the surface. The latter can be calculated using a smooth velocity model estimated using standard methods. The algorithm iteratively calculates a signal that focuses at the image point to create a virtual source at that point, and this can be used to retrieve the signal between the virtual source and the surface. A feature of these methods is that the retrieved signals are naturally decomposed into up- and down-going components. That is, we obtain both the signal that initially propagated upwards from the virtual source and arrived at the surface, separated from the signal that initially propagated downwards. Figure (a) shows a 3D subsurface model with a variable density but a constant velocity (3000m/s). Along the surface of this model (z=0) in both the x and y directions are co-located sources and receivers at 20-meter intervals. The redatumed signal in figure (b) has been calculated using Marchenko methods from a virtual source (1200m, 500m and 400m) to the surface. For comparison the true solution is given in figure (c), and shows a good match when compared to figure (b). While these 2D redatuming and imaging methods are still in their infancy having first been developed in

  12. Using Integrated 2D and 3D Resistivity Imaging Methods for Illustrating the Mud-Fluid Conduits of the Wushanting Mud Volcanoes in Southwestern Taiwan

    Directory of Open Access Journals (Sweden)

    Ping-Yu Chang

    2011-01-01

    Full Text Available We conducted 2D and 3D looped resistivity surveys in the Wushanting Natural Landscape Preservation Area (WNLPA) in order to understand the relationships of the mud-fluid conduits in the mud volcano system. 2D resistivity surveys were conducted along seven networked lines. Two separate C-shape looped electrode arrays surrounding the volcano craters were used in the study. First, the two 3D looped measurements were inverted separately. Yet the inverted 3D images of the mud-volcano system were inconsistent with the landscape features, suggesting that artifacts had appeared in the images. The 3D looped data were then combined with the 2D data to create a global resistivity model of the WNLPA. The resulting 3D image is consistent with the observed landscape features. With the resistivity model of the WNLPA, we further tried to estimate the distribution of water content. The results suggest that the 3D resistivity image has the potential to resolve the dual-porosity structures in the mudstone area. Lastly, we used a simplified WNLPA model for forward simulation in order to verify the field measurement results. We have concluded that the artifacts in the 3D looped images are in fact shadow effects from conductive objects outside the electrode loops, and that inverted images of combined 2D and 3D data provide detailed regional conductive structures at the WNLPA site.

  13. Imaging chemical reactions - 3D velocity mapping

    Science.gov (United States)

    Chichinin, A. I.; Gericke, K.-H.; Kauczok, S.; Maul, C.

    Visualising a collision between an atom or a molecule, or a photodissociation (half-collision) of a molecule, on a single-particle and single-quantum level is like watching the collision of billiard balls on a pool table: Molecular beams or monoenergetic photodissociation products provide the colliding reactants at controlled velocity before the reaction product velocity is imaged directly with an elaborate camera system, where one should keep in mind that velocity is, in general, a three-dimensional (3D) vectorial property which combines scattering angles and speed. If the processes under study have no cylindrical symmetry, then only this 3D product velocity vector contains the full information of the elementary process under study.

  14. 3D imaging, 3D printing and 3D virtual planning in endodontics.

    Science.gov (United States)

    Shah, Pratik; Chong, B S

    2018-03-01

    The adoption and adaptation of recent advances in digital technology, such as three-dimensional (3D) printed objects and haptic simulators, in dentistry have influenced teaching and/or management of cases involving implant, craniofacial, maxillofacial, orthognathic and periodontal treatments. 3D printed models and guides may help operators plan and tackle complicated non-surgical and surgical endodontic treatment and may aid skill acquisition. Haptic simulators may assist in the development of competency in endodontic procedures through the acquisition of psycho-motor skills. This review explores and discusses the potential applications of 3D printed models and guides, and haptic simulators in the teaching and management of endodontic procedures. An understanding of the pertinent technology related to the production of 3D printed objects and the operation of haptic simulators are also presented.

  15. Optical 3D watermark based digital image watermarking for telemedicine

    Science.gov (United States)

    Li, Xiao Wei; Kim, Seok Tae

    2013-12-01

    Region of interest (ROI) of a medical image is an area including important diagnostic information and must be stored without any distortion. This paper presents an algorithm that applies a watermarking technique to the non-ROI of a medical image while preserving the ROI, i.e., a 3D watermark based medical image watermarking scheme. In this paper, a 3D watermark object is first decomposed into a 2D elemental image array (EIA) by a lenslet array, and then the 2D elemental image array data is embedded into the host image. The watermark extraction process is the inverse of the embedding process. From the extracted EIA, the 3D watermark can be reconstructed through the computational integral imaging reconstruction (CIIR) technique. Because the EIA is composed of a number of elemental images, each possessing its own perspective of the 3D watermark object, the 3D virtual watermark can be successfully reconstructed even when the embedded watermark data is badly damaged. Furthermore, using CAT with various rule number parameters, it is possible to obtain many channels for embedding, so our method overcomes the weak point of traditional watermarking methods, which have only one transform plane. The effectiveness of the proposed watermarking scheme is demonstrated with the aid of experimental results.

  16. 3-D Imaging Using Row–Column-Addressed Arrays With Integrated Apodization. Part I: Apodization Design and Line Element Beamforming

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer; Christiansen, Thomas Lehrmann; Thomsen, Erik Vilain

    2015-01-01

    -integrated apodization increased the apparent diameter of the vessel from 2.0 mm to 2.4 mm, corresponding to an increase from 67% to 80% of the true vessel diameter. The line element beamforming approach is shown to be essential for achieving correct time-of-flight calculations, and hence avoid geometrical distortions...

  17. Thermal Management in Fine-Grained 3-D Integrated Circuits

    OpenAIRE

    Iqbal, Md Arif; Macha, Naveen Kumar; Danesh, Wafi; Hossain, Sehtab; Rahman, Mostafizur

    2018-01-01

    For logic beyond 2-D CMOS, various 3-D integration approaches, especially transistor-based 3-D integrations such as monolithic 3-D [1], Skybridge [2], and SN3D [3], hold the most promise. However, such 3D architectures within a small form factor increase hotspots and demand careful consideration of thermal management at all levels of integration [4], as stacked transistors are detached from the substrate (i.e., the heat sink). Traditional system-level approaches such as liquid cooling [5], heat spreaders [6], ...

  18. Biomaterials for integration with 3-D bioprinting.

    Science.gov (United States)

    Skardal, Aleksander; Atala, Anthony

    2015-03-01

    Bioprinting has emerged in recent years as an attractive method for creating 3-D tissues and organs in the laboratory, and therefore is a promising technology in a number of regenerative medicine applications. It has the potential to (i) create fully functional replacements for damaged tissues in patients, and (ii) rapidly fabricate small-sized human-based tissue models, or organoids, for diagnostics, pathology modeling, and drug development. A number of bioprinting modalities have been explored, including cellular inkjet printing, extrusion-based technologies, soft lithography, and laser-induced forward transfer. Despite the innovation of each of these technologies, successful implementation of bioprinting relies heavily on integration with compatible biomaterials that are responsible for supporting the cellular components during and after biofabrication, and that are compatible with the bioprinting device requirements. In this review, we will evaluate a variety of biomaterials, such as curable synthetic polymers, synthetic gels, and naturally derived hydrogels. Specifically we will describe how they are integrated with the bioprinting technologies above to generate bioprinted constructs with practical application in medicine.

  19. Ray-based approach to integrated 3D visual communication

    Science.gov (United States)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media, instead of existing 2D image media. In order to comprehensively deal with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. Then, the following discussions are concentrated on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward solution to the problem of how to represent 3D space, which is an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including some efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of virtual object surface for the compression of the tremendous amount of data, and a light ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.

  20. Advanced 3-D analysis, client-server systems, and cloud computing-Integration of cardiovascular imaging data into clinical workflows of transcatheter aortic valve replacement.

    Science.gov (United States)

    Schoenhagen, Paul; Zimmermann, Mathis; Falkner, Juergen

    2013-06-01

    Degenerative aortic stenosis is highly prevalent in the aging populations of industrialized countries and is associated with poor prognosis. Surgical valve replacement has been the only established treatment with documented improvement of long-term outcome. However, many of the older patients with aortic stenosis (AS) are high-risk or ineligible for surgery. For these patients, transcatheter aortic valve replacement (TAVR) has emerged as a treatment alternative. The TAVR procedure is characterized by a lack of visualization of the operative field. Therefore, pre- and intra-procedural imaging is critical for patient selection, pre-procedural planning, and intra-operative decision-making. Incremental to conventional angiography and 2-D echocardiography, multidetector computed tomography (CT) has assumed an important role before TAVR. The analysis of 3-D CT data requires extensive post-processing during direct interaction with the dataset, using advanced analysis software. Organization and storage of the data according to complex clinical workflows and sharing of image information have become a critical part of these novel treatment approaches. Optimally, the data are integrated into a comprehensive image data file accessible to multiple groups of practitioners across the hospital. This creates new challenges for data management requiring a complex IT infrastructure, spanning across multiple locations, but is increasingly achieved with client-server solutions and private cloud technology. This article describes the challenges and opportunities created by the increased amount of patient-specific imaging data in the context of TAVR.

  1. Design of 3D integrated circuits and systems

    CERN Document Server

    Sharma, Rohit

    2014-01-01

    Three-dimensional (3D) integration of microsystems and subsystems has become essential to the future of semiconductor technology development. 3D integration requires a greater understanding of several interconnected systems stacked over each other. While this vertical growth profoundly increases the system functionality, it also exponentially increases the design complexity. Design of 3D Integrated Circuits and Systems tackles all aspects of 3D integration, including 3D circuit and system design, new processes and simulation techniques, alternative communication schemes for 3D circuits and sys

  2. Photogrammetric 3D reconstruction using mobile imaging

    Science.gov (United States)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Due to the still limited computing resources on mobile devices, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for the pose estimation of all photos by Structure-from-Motion and, thereafter, uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  3. Tomographic spectral imaging: microanalysis in 3D

    International Nuclear Information System (INIS)

    Kotula, P.G.; Keenan, M.R.; Michael, J.R.

    2003-01-01

    Full text: Spectral imaging, where a series of complete x-ray spectra are typically collected from a 2D area, holds great promise for comprehensive near-surface microanalysis. There are however numerous microanalysis problems where 3D chemical information is needed as well. In the SEM, some sort of sectioning (either mechanical or with a focused ion beam (FIB) tool) followed by x-ray mapping has, in the past, been utilized in an attempt to perform 3D microanalysis. Reliance on simple mapping has the potential to miss important chemical features as well as misidentify others. In this paper we will describe the acquisition of serial-section tomographic spectral images (TSI) with a dual-beam FIB/SEM equipped with an EDS system. We will also describe the application of a modified version of our multivariate statistical analysis algorithms to TSIs. Serial sectioning was performed with a FEI DB-235 FIB/SEM. Firstly, the specimen normal was tilted to the optic axis of the FIB column and a trench was milled into the surface of the specimen. A second trench was then milled perpendicular to the first to provide visibility of the entire analysis surface to the x-ray detector. In addition, several fiducial markers were milled into the surface to allow for alignment from slice to slice. The electron column is at an angle of 52 deg to the ion column so the electron beam can 'see' the analysis surface milled by the FIB with no additional specimen tilting or rotation. Likewise the x-ray detector is at a radial angle of 45 deg to the plane of the electron and ion columns (about the electron column) and a take-off-angle of 35 deg with respect to an untilted specimen so it can 'see' the analysis surface as well with no additional sample tilting or rotation. Spectral images were acquired from regions 40 μm wide and 20μm deep for each slice. Approximately 1μm/slice was milled and 10-12 total slices were cut. Spectral images were acquired with a Thermo NORAN Vantage (Digital imaging

  4. 3-D computer graphics based on integral photography.

    Science.gov (United States)

    Naemura, T; Yoshida, T; Harashima, H

    2001-02-12

    Integral photography (IP), which is one of the ideal 3-D photographic technologies, can be regarded as a method of capturing and displaying light rays passing through a plane. The NHK Science and Technical Research Laboratories have developed a real-time IP system using an HDTV camera and an optical fiber array. In this paper, the authors propose a method of synthesizing arbitrary views from IP images captured by the HDTV camera. This is a kind of image-based rendering system, founded on the 4-D data-space representation of light rays. Experimental results show the potential to improve the quality of images rendered by computer graphics techniques.
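
    The view-synthesis idea rests on the fact that, under a pinhole model of the lens array, pixels at the same offset within every elemental image share a common ray direction. The sketch below extracts such an orthographic sub-view from an elemental image array; the array size, block size and data are hypothetical, and the authors' actual rendering operates on the full 4-D ray space rather than this simple sub-sampling.

    ```python
    import numpy as np

    def orthographic_view(eia, block, u, v):
        """Extract an orthographic sub-view from an elemental image array (EIA).

        eia    : 2-D array whose pixels are grouped into block x block
                 elemental images, one per (pinhole-modelled) lenslet.
        block  : number of pixels under each lenslet.
        (u, v) : pixel offset inside each elemental image; each offset selects
                 rays of one common direction, i.e. one viewing angle."""
        h, w = eia.shape
        ny, nx = h // block, w // block
        view = np.empty((ny, nx), dtype=eia.dtype)
        for j in range(ny):
            for i in range(nx):
                view[j, i] = eia[j * block + v, i * block + u]
        return view

    # Hypothetical 64x64 EIA with 8x8-pixel elemental images.
    rng = np.random.default_rng(0)
    eia = rng.random((64, 64))
    left_view = orthographic_view(eia, 8, u=1, v=4)
    right_view = orthographic_view(eia, 8, u=6, v=4)
    print(left_view.shape)   # (8, 8): one pixel per lenslet
    ```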

  5. 3D object-oriented image analysis in 3D geophysical modelling

    DEFF Research Database (Denmark)

    Fadel, I.; van der Meijde, M.; Kerle, N.

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects......) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA......) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...

  6. Development of 3D integrated circuits for HEP

    International Nuclear Information System (INIS)

    Yarema, R.; Fermilab

    2006-01-01

    Three dimensional integrated circuits are well suited to improving circuit bandwidth and increasing effective circuit density. Recent advances in industry have made 3D integrated circuits an option for HEP. The 3D technology is discussed in this paper and several examples are shown. Design of a 3D demonstrator chip for the ILC is presented

  7. Handbook of 3D machine vision optical metrology and imaging

    CERN Document Server

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  8. Novel System for Real-Time Integration of 3-D Echocardiography and Fluoroscopy for Image-Guided Cardiac Interventions: Preclinical Validation and Clinical Feasibility Evaluation

    Science.gov (United States)

    Housden, R. James; Ma, Yingliang; Rajani, Ronak; Gao, Gang; Nijhof, Niels; Cathier, Pascal; Bullens, Roland; Gijsbers, Geert; Parish, Victoria; Kapetanakis, Stamatis; Hancock, Jane; Rinaldi, C. Aldo; Cooklin, Michael; Gill, Jaswinder; Thomas, Martyn; O'neill, Mark D.; Razavi, Reza; Rhode, Kawal S.

    2014-01-01

    Real-time imaging is required to guide minimally invasive catheter-based cardiac interventions. While transesophageal echocardiography allows for high-quality visualization of cardiac anatomy, X-ray fluoroscopy provides excellent visualization of devices. We have developed a novel image fusion system that allows real-time integration of 3-D echocardiography and the X-ray fluoroscopy. The system was validated in the following two stages: 1) preclinical to determine function and validate accuracy; and 2) in the clinical setting to assess clinical workflow feasibility and determine overall system accuracy. In the preclinical phase, the system was assessed using both phantom and porcine experimental studies. Median 2-D projection errors of 4.5 and 3.3 mm were found for the phantom and porcine studies, respectively. The clinical phase focused on extending the use of the system to interventions in patients undergoing either atrial fibrillation catheter ablation (CA) or transcatheter aortic valve implantation (TAVI). Eleven patients were studied with nine in the CA group and two in the TAVI group. Successful real-time view synchronization was achieved in all cases with a calculated median distance error of 2.2 mm in the CA group and 3.4 mm in the TAVI group. A standard clinical workflow was established using the image fusion system. These pilot data confirm the technical feasibility of accurate real-time echo-fluoroscopic image overlay in clinical practice, which may be a useful adjunct for real-time guidance during interventional cardiac procedures. PMID:27170872

  9. Integrated Biogeomorphological Modeling Using Delft3D

    Science.gov (United States)

    Ye, Q.; Jagers, B.

    2011-12-01

    The skill of numerical morphological models has improved significantly from the early 2D uniform, total load sediment models (with steady state or infrequent wave updates) to recent 3D hydrodynamic models with multiple suspended and bed load sediment fractions and bed stratigraphy (online coupled with waves). Although there remain many open questions within this combined field of hydro- and morphodynamics, we observe an increasing need to include biological processes in the overall dynamics. In riverine and inter-tidal environments, there is often an important influence by riparian vegetation and macrobenthos. Over the past decade more and more researchers have started to extend the simulation environment with wrapper scripts and other quick code hacks to estimate their influence on morphological development in coastal, estuarine and riverine environments. Although one can in this way quickly analyze different approaches, these research tools have generally not been designed with reuse, performance and portability in mind. We have now implemented a reusable, flexible, and efficient two-way link between the Delft3D open source framework for hydrodynamics, waves and morphology, and the water quality and ecology modules. The same link will be used for 1D, 2D and 3D modeling on networks and both structured and unstructured grids. We will describe the concepts of the overall system, and illustrate it with some first results.

  10. An automated 3D reconstruction method of UAV images

    Science.gov (United States)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper, a novel, fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle (UAV) system is presented, which does not require prior camera calibration or any other external prior knowledge. Dense 3D point clouds are generated by integrating feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms in an orderly fashion, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images. A high-resolution digital surface model of the study area is produced, based on the UAV point clouds, by constructing a triangulated irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
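
    As a rough illustration of the SfM stage of such a pipeline, the sketch below performs a single two-view step with OpenCV: feature extraction, matching, essential-matrix estimation, relative pose recovery and triangulation of a sparse point cloud. The camera matrix K and the input images are assumed to be supplied by the caller; a real UAV pipeline would add many views, bundle adjustment and MVS densification, none of which is shown here.

    ```python
    import cv2
    import numpy as np

    def two_view_points(img1, img2, K):
        """Minimal two-view structure-from-motion step (hypothetical parameters):
        ORB features -> matching -> essential matrix -> relative pose ->
        triangulated sparse 3-D points (N x 3)."""
        orb = cv2.ORB_create(4000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(d1, d2)
        p1 = np.float64([k1[m.queryIdx].pt for m in matches])
        p2 = np.float64([k2[m.trainIdx].pt for m in matches])
        E, _ = cv2.findEssentialMat(p1, p2, K, cv2.RANSAC, 0.999, 1.0)
        _, R, t, _ = cv2.recoverPose(E, p1, p2, K)
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        pts4d = cv2.triangulatePoints(P1, P2, p1.T, p2.T)
        return (pts4d[:3] / pts4d[3]).T

    # Usage (hypothetical file names and intrinsics):
    # pts = two_view_points(cv2.imread("a.jpg", 0), cv2.imread("b.jpg", 0), K)
    ```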

  11. Integrating 3D modeling, photogrammetry and design

    CERN Document Server

    Foster, Shaun

    2014-01-01

    This book looks at the convergent nature of technology and its relationship to the field of photogrammetry and 3D design. This is a facet of a broader discussion of the nature of technology itself and the relationship of technology to art, as well as an examination of the educational process. In the field of technology-influenced design-based education it is natural to push for advanced technology, yet within a larger institution the constraints of budget and adherence to tradition must be accepted. These opposing forces create a natural balance; in some cases constraints lead to greater creat

  12. Minimally invasive vascular imaging using 3D-CTA and 3D-MRA. Update

    International Nuclear Information System (INIS)

    Hayashi, Hiromitsu; Kawamata, Hiroshi; Takagi, Ryo; Amano, Yasuo; Wakabayashi, Hiroyuki; Ichikawa, Kazuo; Kumazaki, Tatsuo

    1998-01-01

    Conventional angiography is considered the standard of reference for diagnostic imaging of vascular diseases with respect to its temporal and spatial resolution. This procedure, however, is invasive, repeated studies are difficult, and arterial complications are occasionally associated with catheter-based conventional angiography. Recent advances in diagnostic imaging have facilitated three-dimensional CT angiography (3D-CTA), using the volumetric acquisition capabilities inherent in spiral CT, and three-dimensional MR angiography (3D-MRA), using the 3D gradient-echo sequence with a bolus injection of Gd-DTPA. These techniques can provide vascular images exceedingly similar to conventional angiograms within a short acquisition time. 3D-CTA and 3D-MRA are considered to be promising, minimally invasive methods for obtaining images of the vasculature, and alternatives to catheter angiography. This study reviews the current status of 3D-CTA and 3D-MRA, with emphasis on the clinical usefulness of three-dimensional diagnostic imaging for the evaluation of diverse vascular pathologies. (author)

  13. 3D widefield light microscope image reconstruction without dyes

    Science.gov (United States)

    Larkin, S.; Larson, J.; Holmes, C.; Vaicik, M.; Turturro, M.; Jurkevich, A.; Sinha, S.; Ezashi, T.; Papavasiliou, G.; Brey, E.; Holmes, T.

    2015-03-01

    3D image reconstruction using light microscope modalities without exogenous contrast agents is proposed and investigated as an approach to produce 3D images of biological samples for live imaging applications. Multimodality and multispectral imaging, used in concert with this 3D optical sectioning approach, is also proposed as a way to further produce contrast that could be specific to components in the sample. The methods avoid the use of contrast agents. Contrast agents, such as fluorescent or absorbing dyes, can be toxic to cells or alter cell behavior. Current modes of producing 3D image sets from a light microscope, such as 3D deconvolution algorithms and confocal microscopy, generally require contrast agents. Zernike phase contrast (ZPC), transmitted light brightfield (TLB), darkfield microscopy and others can produce contrast without dyes. Some of these modalities have not previously benefitted from 3D image reconstruction algorithms, however. The 3D image reconstruction algorithm is based on an underlying physical model of scattering potential, expressed as the sample's 3D absorption and phase quantities. The algorithm is based upon optimizing an objective function - the I-divergence - while solving for the 3D absorption and phase quantities. Unlike typical deconvolution algorithms, each microscope modality, such as ZPC or TLB, produces two output image sets instead of one. Contrast in the displayed image and 3D renderings is further enabled by treating the multispectral/multimodal data as a feature set in a mathematical formulation that uses the principal component method of statistics.
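
    The paper's reconstruction optimizes the I-divergence over 3-D absorption and phase; as a much-simplified single-channel stand-in, the sketch below applies the multiplicative Richardson-Lucy update, which minimizes the same Csiszar I-divergence between the observed image and a blurred estimate for a known PSF. The Gaussian PSF and the synthetic bead image are assumptions for the demo only.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(observed, psf, n_iters=30, eps=1e-12):
        """Multiplicative Richardson-Lucy update: minimizes the I-divergence
        between 'observed' and the current estimate blurred by 'psf'."""
        est = np.full_like(observed, observed.mean())
        psf_mirror = psf[::-1, ::-1]
        for _ in range(n_iters):
            blurred = fftconvolve(est, psf, mode="same")
            ratio = observed / (blurred + eps)
            est *= fftconvolve(ratio, psf_mirror, mode="same")
        return est

    # Toy demo: blur a synthetic "bead" image and restore it.
    x = np.zeros((64, 64)); x[32, 32] = 1.0; x[20, 40] = 0.5
    yy, xx = np.mgrid[-7:8, -7:8]
    psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)); psf /= psf.sum()
    blurred = fftconvolve(x, psf, mode="same")
    restored = richardson_lucy(blurred, psf)
    print(float(restored.max()))
    ```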

  14. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Morimoto, A.K.; Bow, W.J.; Strong, D.S. [and others

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  15. Measurable realistic image-based 3D mapping

    Science.gov (United States)

    Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.

    2011-12-01

    Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data is obtained for the construction of 3D visualized models. The 3D map not only provides the capabilities of 3D measurement and knowledge mining, but also provides the virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high-quality 3D models is time consuming, and requires robust hardware and powerful software to handle the enormous amount of data. This is especially true for the automatic implementation of 3D models and the representation of complicated surfaces, which still need improvements in the visualisation techniques. The shortcoming of 3D model-based maps is the limitation of detailed coverage, since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a 3D map concept that is realistic and image-based, and that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information about the real world than 3D model-based maps. The image-based 3D maps use geo-referenced stereo images or panoramic images. The geometric relationships between objects in the images can be resolved from the geometric model of stereo images. The panoramic function makes 3D maps more interactive with users and also creates an interesting immersive experience. Actually, unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in terms of photos. The topographic and terrain attributes, such as shapes and heights, though, are omitted. This paper also discusses the potential for using a low-cost land Mobile Mapping System (MMS) to implement realistic image-based 3D mapping, and evaluates the positioning accuracy that a measureable
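
    The geometric relationship the abstract refers to for stereo images reduces, in the rectified case, to the familiar depth-from-disparity relation Z = f * B / d. A minimal sketch with hypothetical numbers:

    ```python
    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        """Standard rectified-stereo relation Z = f * B / d: depth equals the
        focal length (pixels) times the baseline (metres) over the disparity
        (pixels). All numbers below are hypothetical, for illustration only."""
        if disparity_px <= 0:
            raise ValueError("disparity must be positive for a finite depth")
        return focal_px * baseline_m / disparity_px

    # A feature seen 24 px apart in two geo-referenced images taken 0.5 m apart
    # with a 1200 px focal length lies about 25 m from the camera baseline.
    print(depth_from_disparity(1200.0, 0.5, 24.0))  # 25.0
    ```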

  16. Integration of real-time 3D capture, reconstruction, and light-field display

    Science.gov (United States)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention to synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we will present our system architecture and component designs, hardware/software implementations, and experimental results. We will elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  17. Micromachined Ultrasonic Transducers for 3-D Imaging

    DEFF Research Database (Denmark)

    Christiansen, Thomas Lehrmann

    of state-of-the-art 3-D ultrasound systems. The focus is on row-column addressed transducer arrays. This previously sparsely investigated addressing scheme offers a highly reduced number of transducer elements, resulting in reduced transducer manufacturing costs and data processing. To produce...... such transducer arrays, capacitive micromachined ultrasonic transducer (CMUT) technology is chosen for this project. Properties such as high bandwidth and high design flexibility makes this an attractive transducer technology, which is under continuous development in the research community. A theoretical...... treatment of CMUTs is presented, including investigations of the anisotropic plate behaviour and modal radiation patterns of such devices. Several new CMUT fabrication approaches are developed and investigated in terms of oxide quality and surface protrusions, culminating in a simple four-mask process...

  18. 3D Imaging with Structured Illumination for Advanced Security Applications

    Energy Technology Data Exchange (ETDEWEB)

    Birch, Gabriel Carisle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dagel, Amber Lynn [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kast, Brian A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Collin S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fail to provide target distance and the three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and possible software modifications to maximize information-gathering capability are discussed.

  19. On Alternative Approaches to 3D Image Perception: Monoscopic 3D Techniques

    Science.gov (United States)

    Blundell, Barry G.

    2015-06-01

    In the eighteenth century, techniques that enabled a strong sense of 3D perception to be experienced without recourse to binocular disparities (arising from the spatial separation of the eyes) underpinned the first significant commercial sales of 3D viewing devices and associated content. However following the advent of stereoscopic techniques in the nineteenth century, 3D image depiction has become inextricably linked to binocular parallax and outside the vision science and arts communities relatively little attention has been directed towards earlier approaches. Here we introduce relevant concepts and terminology and consider a number of techniques and optical devices that enable 3D perception to be experienced on the basis of planar images rendered from a single vantage point. Subsequently we allude to possible mechanisms for non-binocular parallax based 3D perception. Particular attention is given to reviewing areas likely to be thought-provoking to those involved in 3D display development, spatial visualization, HCI, and other related areas of interdisciplinary research.

  20. Developing 3D Imaging Programmes-Workflow and Quality Control

    OpenAIRE

    Hess, M.; Robson, S.; Serpico, M.; Amati, G.; Pridden, I.; Nelson, T.

    2016-01-01

    This article reports on a successful project for 3D imaging research, digital applications, and use of new technologies in the museum. The article will focus on the development and implementation of a viable workflow for the production of high-quality 3D models of museum objects, based on the 3D laser scanning and photogrammetry of selected ancient Egyptian artefacts. The development of a robust protocol for the complete process chain for imaging cultural heritage artefacts, from the acquisit...

  1. [EYECUBE as 3D multimedia imaging in macular diagnostics].

    Science.gov (United States)

    Hassenstein, Andrea; Scholz, F; Richard, G

    2011-11-01

    In the new generation of EYECUBE devices, the angiography image and the OCT are included in a 3D illustration as an integration. Other diagnostic procedures such as autofluorescence and ICG can also be correlated to the OCT. The aim was to precisely classify various two-dimensional findings in relation to each other. The new generation of OCT devices enables imaging with a low incidence of motion artefacts with very good fundus image quality - and with that, permits a largely automatic classification. The feature enabling the integration of the EYECUBE was further developed with new software, so that not only the topographic image (red-free, autofluorescence) can be correlated to the Cirrus OCT, but also all other findings gathered within the same time frame can be correlated to each other. These were brightened and projected onto the cube surface in a defined interval. The imaging procedures can be selected in a menu toolbar. Topographic volumetry OCT images can be overlayed. The practical application of the new method was tested on patients with macular disorders. By lightening up the results from various diagnostic procedures, it is possible of late to directly compare pathologies to each other and to the OCT results. In all patients (n = 45 eyes) with good single-image quality, the automated integration into the EYECUBE was possible (to a great extent). The application is not dependent on a certain type of device used in the procedures performed. The increasing level of precision in imaging procedures and the handling of large data volumes has led to the possibility of examining each macular diagnostics procedure from the comparative perspective: imaging (photo) with perfusion (FLA, ICG) and morphology (OCT). The exclusion of motion artefacts and the reliable scan position in the course of the imaging process increases the informative value of OCT. © Georg Thieme Verlag KG Stuttgart · New York.

  2. Integrating visible light 3D scanning into the everyday world

    Science.gov (United States)

    Straub, Jeremy

    2015-05-01

    Visible light 3D scanning offers the potential to non-invasively and nearly non-perceptibly incorporate 3D imaging into the everyday world. This paper considers the various possible uses of visible light 3D scanning technology. It discusses multiple possible usage scenarios including in hospitals, security perimeter settings and retail environments. The paper presents a framework for assessing the efficacy of visible light 3D scanning for a given application (and compares this to other scanning approaches such as those using blue light or lasers). It also discusses ethical and legal considerations relevant to real-world use and concludes by presenting a decision making framework.

  3. Development of 3-D Medical Image VIsualization System

    African Journals Online (AJOL)

    User

    uses standard 2-D medical imaging inputs and generates medical images of human body parts ... light wave from points on the 3-D object(s) in ... tools, and communication bandwidth cannot .... locations along the track that correspond with.

  4. Denoising imaging polarimetry by adapted BM3D method.

    Science.gov (United States)

    Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R

    2018-04-01

    In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show that denoising polarization images using PBM3D allows the degree of polarization to be more accurately calculated by comparing it with spectral polarimetry measurements.
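
    To give a concrete sense of where denoising enters the polarimetric pipeline, the sketch below computes Stokes parameters and the degree of linear polarization from four polarizer-oriented channels, with a pluggable per-channel denoiser. A Gaussian filter is used purely as a stand-in; the paper's PBM3D is an adapted BM3D, which is not reproduced here, and the synthetic channel weights are hypothetical.

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def degree_of_linear_polarization(i0, i45, i90, i135, denoise=None):
        """Stokes parameters and degree of linear polarization (DoLP) from four
        polarizer-oriented intensity images; 'denoise' is applied per channel."""
        if denoise is not None:
            i0, i45, i90, i135 = (denoise(c) for c in (i0, i45, i90, i135))
        s0 = 0.5 * (i0 + i45 + i90 + i135)
        s1 = i0 - i90
        s2 = i45 - i135
        return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)

    # Noisy synthetic channels (hypothetical data, DoLP about 0.6).
    rng = np.random.default_rng(1)
    base = np.ones((128, 128))
    channels = [base * w + 0.05 * rng.standard_normal(base.shape)
                for w in (0.8, 0.5, 0.2, 0.5)]
    dolp = degree_of_linear_polarization(*channels,
                                         denoise=lambda c: gaussian_filter(c, 2))
    print(float(dolp.mean()))
    ```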

  5. 3D Reconstruction of NMR Images by LabVIEW

    Directory of Open Access Journals (Sweden)

    Peter IZAK

    2007-01-01

    Full Text Available This paper introduces an experiment in 3D reconstruction of NMR images via virtual instrumentation - LabVIEW. The main idea is based on the marching cubes algorithm and image processing implemented with the Vision Assistant module. The two-dimensional images acquired by the magnetic resonance device provide information about the surface properties of the human body. An algorithm is implemented which can be used for 3D reconstruction of magnetic resonance images in biomedical applications.
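
    For readers who want to experiment outside LabVIEW, the sketch below runs the same family of algorithm (marching cubes) on a synthetic volume using scikit-image; the sphere volume and iso-level are assumptions standing in for a real stack of NMR slices, and this is not the paper's Vision Assistant implementation.

    ```python
    import numpy as np
    from skimage import measure

    # Build a synthetic scalar volume (a sphere) standing in for a stack of
    # NMR slices; real use would load the 2-D images into 'volume' instead.
    zz, yy, xx = np.mgrid[-32:32, -32:32, -32:32]
    volume = (xx**2 + yy**2 + zz**2).astype(float)

    # Extract the iso-surface at radius 20 with marching cubes.
    verts, faces, normals, values = measure.marching_cubes(volume, level=20.0**2)
    print(verts.shape, faces.shape)   # triangle mesh: vertices and face indices
    ```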

  6. 3D Interpolation Method for CT Images of the Lung

    Directory of Open Access Journals (Sweden)

    Noriaki Asada

    2003-06-01

    Full Text Available A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross section images, which are collected during pulsation of the heart. Thus the motion of the heart is a special factor that must be taken into consideration during reconstruction. The lung exhibits a repeating transformation synchronized to the beating of the heart as an elastic body. There are discontinuities among neighboring CT images due to the beating of the heart, if no special techniques are used in taking CT images. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung are taken. Although the outline shape of the reconstructed 3-D heart is quite unnatural, the envelope of the 3-D unnatural heart is fit to the shape of the standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best fitting standard heart are located at the same positions of the CT images. Thus the CT images are geometrically transformed to the optimal CT images fitting best to the standard heart. Since correct transformation of images is required, an Area oriented interpolation method proposed by us is used for interpolation of transformed images. An attempt to reconstruct a 3-D lung image by a series of such operations without discontinuity is shown. Additionally, the same geometrical transformation method to the original projection images is proposed as a more advanced method.

  7. A 3D printed helical antenna with integrated lens

    KAUST Repository

    Farooqui, Muhammad Fahad; Shamim, Atif

    2015-01-01

    A novel antenna configuration comprising a helical antenna with an integrated lens is demonstrated in this work. The antenna is manufactured by a unique combination of 3D printing of plastic material (ABS) and inkjet printing of silver nano-particle based metallic ink.

  8. Acoustic 3D imaging of dental structures

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, D.K. [Lawrence Livermore National Lab., CA (United States); Hume, W.R. [California Univ., Los Angeles, CA (United States); Douglass, G.D. [California Univ., San Francisco, CA (United States)

    1997-02-01

    Our goal for the first year of this three-dimensional acoustic imaging project was to determine how to combine flexible, individually addressable acoustic arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array material to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  9. A 3D Hybrid Integration Methodology for Terabit Transceivers

    DEFF Research Database (Denmark)

    Dong, Yunfeng; Johansen, Tom Keinicke; Zhurbenko, Vitaliy

    2015-01-01

    This paper presents a three-dimensional (3D) hybrid integration methodology for terabit transceivers. The simulation methodology for multi-conductor structures is explained. The effect of ground vias on the RF circuitry and the preferred interposer substrate material for large bandwidth 3D hybrid integration are described. An equivalent circuit model of the via-throughs connecting the RF circuitry to the modulator is proposed and its lumped element parameters are extracted. Wire bonding transitions between the driving and RF circuitry were designed and simulated. An optimized 3D interposer design...

  10. The 3D Lagrangian Integral Method

    DEFF Research Database (Denmark)

    Rasmussen, Henrik Koblitz

    2003-01-01

    ...These are processes such as thermo-forming, gas-assisted injection moulding and all kinds of simultaneous multi-component polymer processing operations. In all polymer processing operations, though, free surfaces (or interfaces) are present and the dynamics of these surfaces are of interest. In the "3D Lagrangian Integral Method" to simulate viscoelastic flow, the governing equations are solved for the particle positions (Lagrangian kinematics). Therefore, the transient motion of surfaces can be followed in a particularly simple fashion even in 3D viscoelastic flow. The "3D Lagrangian Integral Method" is described...

  11. 3D surface reconstruction using optical flow for medical imaging

    International Nuclear Information System (INIS)

    Weng, Nan; Yang, Yee-Hong; Pierson, R.

    1996-01-01

    The recovery of a 3D model from a sequence of 2D images is very useful in medical image analysis. Image sequences obtained from the relative motion between the object and the camera or the scanner contain more 3D information than a single image. Methods to visualize the computed tomograms can be divided into two approaches: the surface rendering approach and the volume rendering approach. A new surface rendering method using optical flow is proposed. Optical flow is the apparent motion in the image plane produced by the projection of the real 3D motion onto the 2D image plane. In this paper, the object remains stationary while the scanner undergoes translational motion. The 3D motion of an object can be recovered from the optical flow field using additional constraints. By extracting the surface information from the 3D motion, it is possible to get an accurate 3D model of the object. Both synthetic and real image sequences have been used to illustrate the feasibility of the proposed method. The experimental results suggest that the proposed method is suitable for the reconstruction of 3D models from ultrasound medical images as well as other computed tomograms
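
    A minimal illustration of the optical-flow step is sketched below with OpenCV's Farneback dense flow; the synthetic shifted frame stands in for a real image sequence, and the additional constraints needed to recover 3D motion (as described in the paper) are not included.

```python
# Sketch: dense optical flow between two consecutive frames using OpenCV's
# Farneback method. The second frame is a shifted copy of the first, so the
# estimated flow should be close to the applied shift.
import cv2
import numpy as np

prev = (np.random.rand(240, 320) * 255).astype(np.uint8)   # synthetic textured frame
curr = np.roll(prev, shift=(2, 3), axis=(0, 1))            # shifted by 2 rows, 3 columns

# Positional arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
u, v = flow[..., 0], flow[..., 1]                          # per-pixel displacement (x, y)
print("mean flow (x, y):", u.mean(), v.mean())             # roughly 3 and 2
```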

  12. Diffractive optical element for creating visual 3D images.

    Science.gov (United States)

    Goncharsky, Alexander; Goncharsky, Anton; Durlevich, Svyatoslav

    2016-05-02

    A method is proposed to compute and synthesize the microrelief of a diffractive optical element to produce a new visual security feature - the vertical 3D/3D switch effect. The security feature consists in the alternation of two 3D color images when the diffractive element is tilted up/down. Optical security elements that produce the new security feature are synthesized using electron-beam technology. Sample optical security elements are manufactured that produce 3D to 3D visual switch effect when illuminated by white light. Photos and video records of the vertical 3D/3D switch effect of real optical elements are presented. The optical elements developed can be replicated using standard equipment employed for manufacturing security holograms. The new optical security feature is easy to control visually, safely protected against counterfeit, and designed to protect banknotes, documents, ID cards, etc.

  13. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    Energy Technology Data Exchange (ETDEWEB)

    Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van [Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Punt, Mark; Aben, Jean-Paul [Pie Medical Imaging, 6227 AJ Maastricht (Netherlands); Schultz, Carl [Department of Cardiology, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Niessen, Wiro [Quantitative Imaging Group, Faculty of Applied Sciences, Delft University of Technology, 2628 CJ Delft, The Netherlands and Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands)

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regard to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
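
    The sketch below illustrates the general flavour of GMM-based rigid point-set registration (soft Gaussian correspondences followed by a weighted Procrustes update); it omits the orientation terms and bifurcation weighting described in the paper, and the toy data and parameters are assumptions.

```python
# Sketch: rigid alignment of two 3-D point sets with Gaussian (GMM-style) soft
# correspondences and a Kabsch/Procrustes update, annealed toward hard
# assignments. Orientation modelling and bifurcation weighting are not included.
import numpy as np

def rigid_register(source, target, sigma=5.0, iters=50, anneal=0.9):
    """Return rotation R and translation t that map `source` onto `target`."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = source @ R.T + t
        # Soft assignment: Gaussian weight of each target point for each source point.
        d2 = ((moved[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * sigma ** 2))
        w /= w.sum(axis=1, keepdims=True) + 1e-12
        virtual = w @ target                       # soft correspondence per source point
        # Weighted Kabsch / Procrustes solution.
        mu_s, mu_v = source.mean(0), virtual.mean(0)
        H = (source - mu_s).T @ (virtual - mu_v)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_v - R @ mu_s
        sigma = max(sigma * anneal, 0.5)           # anneal toward hard assignments
    return R, t

# Toy check: recover a known rotation and translation.
rng = np.random.default_rng(1)
src = rng.normal(size=(200, 3)) * 10
ang = np.deg2rad(10)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0, 0, 1]])
t_true = np.array([1.5, -2.0, 0.5])
tgt = src @ R_true.T + t_true
R_est, t_est = rigid_register(src, tgt)
rot_err = np.degrees(np.arccos(np.clip((np.trace(R_est @ R_true.T) - 1) / 2, -1, 1)))
print("rotation error (deg):", rot_err, "translation error:", np.linalg.norm(t_est - t_true))
```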

  14. 3-D Reconstruction From Satellite Images

    DEFF Research Database (Denmark)

    Denver, Troelz

    1999-01-01

    ...of planetary surfaces, but other purposes are considered as well. The system performance is measured with respect to the precision and the time consumption. The reconstruction process is divided into four major areas: acquisition, calibration, matching/reconstruction and presentation. Each of these areas is treated individually. A detailed treatment of various lens distortions is required in order to correct for these problems. This subject is included in the acquisition part. In the calibration part, the perspective distortion is removed from the images. Most attention has been paid to the matching problem...

  15. Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization

    Science.gov (United States)

    2014-05-01

    David N. Ford, 2014. Research context: learning curve savings forecasted in the SHIPMAIN maintenance initiative have not materialized.

  16. 3D reconstruction based on light field images

    Science.gov (United States)

    Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei

    2018-04-01

    This paper proposes a method of reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum. The work is carried out by first extracting the sub-aperture images from the light field images and using the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. A structure from motion (SFM) algorithm is then applied to the registered sub-aperture images to reconstruct the three-dimensional scene, and a sparse 3D point cloud is obtained. The method shows that 3D reconstruction can be implemented with only two light field camera captures, rather than the dozen or more captures required by traditional cameras. This can effectively address the time-consuming and laborious aspects of 3D reconstruction based on traditional digital cameras, achieving a more rapid, convenient and accurate reconstruction.

  17. Deep learning for objective quality assessment of 3D images

    NARCIS (Netherlands)

    Mocanu, D.C.; Exarchakos, G.; Liotta, A.

    2014-01-01

    Improving the users' Quality of Experience (QoE) in modern 3D Multimedia Systems is a challenging proposition, mainly due to our limited knowledge of 3D image Quality Assessment algorithms. While subjective QoE methods would better reflect the nature of human perception, these are not suitable in

  18. 3D quantitative phase imaging of neural networks using WDT

    Science.gov (United States)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  19. MULTI SENSOR DATA INTEGRATION FOR AN ACCURATE 3D MODEL GENERATION

    Directory of Open Access Journals (Sweden)

    S. Chhatkuli

    2015-05-01

    Full Text Available The aim of this paper is to introduce a novel technique for integrating two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create 3D city models. Aerial imagery produces overall decent 3D city models and is generally suited to generating 3D models of building roofs and some non-complex terrain. However, the 3D model automatically generated from aerial imagery generally suffers from a lack of accuracy when deriving the 3D model of roads under bridges, details under tree canopy, isolated trees, etc. Moreover, the automatically generated 3D model from aerial imagery also suffers in many cases from undulated road surfaces, non-conforming building shapes, loss of minute details like street furniture, etc. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each data set's weaknesses and helps to create a very detailed 3D model with better accuracy. Moreover, additional details like isolated trees, street furniture, etc., which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two data sets were acquired in different time periods, the integrated data set, or the final 3D model, was generally noise free and without unnecessary details.

  20. Preliminary examples of 3D vector flow imaging

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes; Stuart, Matthias Bo; Tomov, Borislav Gueorguiev

    2013-01-01

    This paper presents 3D vector flow images obtained using the 3D Transverse Oscillation (TO) method. The method employs a 2D transducer and estimates the three velocity components simultaneously, which is important for visualizing complex flow patterns. Data are acquired using the experimental ultrasound scanner SARUS on a flow-rig system with steady flow. The vessel of the flow rig is centered at a depth of 30 mm, and the flow has an expected 2D circular-symmetric parabolic profile with a peak velocity of 1 m/s. Ten frames of 3D vector flow images are acquired in a cross-sectional plane orthogonal ... as opposed to magnetic resonance imaging (MRI). The results demonstrate that the 3D TO method is capable of performing 3D vector flow imaging.

  1. Software for 3D diagnostic image reconstruction and analysis

    International Nuclear Information System (INIS)

    Taton, G.; Rokita, E.; Sierzega, M.; Klek, S.; Kulig, J.; Urbanik, A.

    2005-01-01

    Recent advances in computer technologies have opened new frontiers in medical diagnostics. Interesting possibilities are the use of three-dimensional (3D) imaging and the combination of images from different modalities. Software prepared in our laboratories devoted to 3D image reconstruction and analysis from computed tomography and ultrasonography is presented. In developing our software it was assumed that it should be applicable in standard medical practice, i.e. it should work effectively with a PC. An additional feature is the possibility of combining 3D images from different modalities. The reconstruction and data processing can be conducted using a standard PC, so low investment costs result in the introduction of advanced and useful diagnostic possibilities. The program was tested on a PC using DICOM data from computed tomography and TIFF files obtained from a 3D ultrasound system. The results of the anthropomorphic phantom and patient data were taken into consideration. A new approach was used to achieve spatial correlation of two independently obtained 3D images. The method relies on the use of four pairs of markers within the regions under consideration. The user selects the markers manually and the computer calculates the transformations necessary for coupling the images. The main software feature is the possibility of 3D image reconstruction from a series of two-dimensional (2D) images. The reconstructed 3D image can be: (1) viewed with the most popular methods of 3D image viewing, (2) filtered and processed to improve image quality, (3) analyzed quantitatively (geometrical measurements), and (4) coupled with another, independently acquired 3D image. The reconstructed and processed 3D image can be stored at every stage of image processing. The overall software performance was good considering the relatively low costs of the hardware used and the huge data sets processed. The program can be freely used and tested (source code and program available at
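
    The marker-based coupling of two independently acquired 3D images can be illustrated with a least-squares rigid fit on the four user-selected marker pairs, as sketched below; whether the software also estimates scaling is not stated above, so a pure rotation and translation is assumed, and the marker coordinates are invented for illustration.

```python
# Sketch: compute the rigid transform that couples two 3-D images from four
# pairs of manually selected markers (Kabsch / least-squares fit on paired
# landmarks). A pure rotation + translation model is assumed here.
import numpy as np

def rigid_from_landmarks(p, q):
    """Least-squares R, t with q_i ~= R @ p_i + t, for paired landmarks p, q (N x 3)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mu_p, mu_q = p.mean(0), q.mean(0)
    H = (p - mu_p).T @ (q - mu_q)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_q - R @ mu_p
    return R, t

# Four hypothetical marker pairs (CT coordinates -> ultrasound coordinates), in mm.
ct_markers = np.array([[0, 0, 0], [50, 0, 0], [0, 40, 0], [0, 0, 30]], float)
us_markers = np.array([[10, 5, 2], [59.8, 8.5, 2], [6.5, 44.8, 2], [10, 5, 32]], float)

R, t = rigid_from_landmarks(ct_markers, us_markers)
print("per-marker residual (mm):", np.linalg.norm(ct_markers @ R.T + t - us_markers, axis=1))
```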

  2. Image based 3D city modeling : Comparative study

    Directory of Open Access Journals (Sweden)

    S. P. Singh

    2014-06-01

    Full Text Available A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and some man-made features belonging to the urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image-based approaches are used for virtual 3D city model generation. In the first approach, researchers use sketch-based modeling; the second method is procedural-grammar-based modeling; the third approach is close-range-photogrammetry-based modeling; and the fourth approach is mainly based on computer vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively. These packages have different approaches and methods suitable for image-based 3D city modeling. A literature study shows that, to date, no such comprehensive comparative study is available for creating a complete 3D city model from images. This paper gives a comparative assessment of these four image-based 3D modeling approaches. The comparative study is mainly based on data acquisition methods, data processing techniques and the output 3D model products. For this research work, the study area is the campus of the Civil Engineering Department, Indian Institute of Technology, Roorkee (India). This 3D campus acts as a prototype for a city. The study also explains various governing parameters, factors and work experiences. This research work also gives a brief introduction to, and the strengths and weaknesses of, these four image-based techniques. Some personal comments are also given on what can and cannot be done with each package. Finally, the study concludes that each piece of software has some advantages and limitations, and the choice of software depends on the user requirements of the 3D project. For normal visualization projects, SketchUp is a good option. For 3D documentation records, Photomodeler gives good

  3. EISCAT Aperture Synthesis Imaging (EASI_3D) for the EISCAT_3D Project

    Science.gov (United States)

    La Hoz, Cesar; Belyey, Vasyl

    2012-07-01

    Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3-dimensions that includes sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Natural Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. The underlying physico-mathematical principles of the technique are the same as the technique employed in radioastronomy to image stellar objects; both require sophisticated inversion techniques to obtain reliable images.

  4. From medical imaging data to 3D printed anatomical models.

    Directory of Open Access Journals (Sweden)

    Thore M Bücking

    Full Text Available Anatomical models are important training and teaching tools in the clinical environment and are routinely used in medical imaging research. Advances in segmentation algorithms and the increased availability of three-dimensional (3D) printers have made it possible to create cost-efficient patient-specific models without expert knowledge. We introduce a general workflow that can be used to convert volumetric medical imaging data, as generated by computed tomography (CT), to 3D printed physical models. This process is broken up into three steps: image segmentation, mesh refinement and 3D printing. To lower the barrier to entry and provide the best options when aiming to 3D print an anatomical model from medical images, we provide an overview of relevant free and open-source image segmentation tools as well as 3D printing technologies. We demonstrate the utility of this streamlined workflow by creating models of ribs, liver, and lung using a Fused Deposition Modelling 3D printer.
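
    A condensed sketch of the three-step workflow (segmentation, meshing, STL export) is given below; the 300 HU bone threshold, the synthetic volume and the plain ASCII STL writer are illustrative assumptions, and the mesh refinement step the authors describe is omitted.

```python
# Sketch of the segment -> mesh -> print workflow: threshold a CT volume,
# extract a surface with marching cubes, and write an ASCII STL file for a
# 3-D printer. Real data would also need the mesh refinement step.
import numpy as np
from skimage import measure

def volume_to_stl(volume_hu, spacing_mm, threshold_hu, path):
    mask = volume_hu > threshold_hu
    verts, faces, _, _ = measure.marching_cubes(mask.astype(float), level=0.5,
                                                spacing=spacing_mm)
    with open(path, "w") as f:
        f.write("solid segmented\n")
        for tri in verts[faces]:                       # tri is a 3x3 array of vertices
            n = np.cross(tri[1] - tri[0], tri[2] - tri[0])
            n = n / (np.linalg.norm(n) + 1e-12)
            f.write(f"  facet normal {n[0]:.6e} {n[1]:.6e} {n[2]:.6e}\n")
            f.write("    outer loop\n")
            for v in tri:
                f.write(f"      vertex {v[0]:.6e} {v[1]:.6e} {v[2]:.6e}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid segmented\n")

# Toy CT volume: a hollow cylinder of bone-like density in soft tissue.
z, y, x = np.mgrid[0:60, 0:120, 0:120]
r = np.sqrt((x - 60) ** 2 + (y - 60) ** 2)
ct = np.where((r > 35) & (r < 42), 700, 40).astype(float)   # Hounsfield units
volume_to_stl(ct, spacing_mm=(1.0, 0.5, 0.5), threshold_hu=300, path="bone.stl")
```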

  5. Arbitrary modeling of TSVs for 3D integrated circuits

    CERN Document Server

    Salah, Khaled; El-Rouby, Alaa

    2014-01-01

    This book presents a wide-band and technology independent, SPICE-compatible RLC model for through-silicon vias (TSVs) in 3D integrated circuits. This model accounts for a variety of effects, including skin effect, depletion capacitance and nearby contact effects. Readers will benefit from in-depth coverage of concepts and technology such as 3D integration, Macro modeling, dimensional analysis and compact modeling, as well as closed form equations for the through silicon via parasitics. Concepts covered are demonstrated by using TSVs in applications such as a spiral inductor and inductive-based

  6. 3D circuit integration for Vertex and other detectors

    Energy Technology Data Exchange (ETDEWEB)

    Yarema, Ray; /Fermilab

    2007-09-01

    High Energy Physics continues to push the technical boundaries for electronics. There is no area where this is truer than for vertex detectors. Lower mass and power along with higher resolution and radiation tolerance are driving forces. New technologies such as SOI CMOS detectors and three dimensional (3D) integrated circuits offer new opportunities to meet these challenges. The fundamentals for SOI CMOS detectors and 3D integrated circuits are discussed. Examples of each approach for physics applications are presented. Cost issues and ways to reduce development costs are discussed.

  7. 3D confocal imaging in CUBIC-cleared mouse heart

    Energy Technology Data Exchange (ETDEWEB)

    Nehrhoff, I.; Bocancea, D.; Vaquero, J.; Vaquero, J.J.; Lorrio, M.T.; Ripoll, J.; Desco, M.; Gomez-Gaviro, M.V.

    2016-07-01

    Acquiring high-resolution 3D images of the heart makes it possible to study heart diseases in more detail. Here, the CUBIC (clear, unobstructed brain imaging cocktails and computational analysis) clearing protocol was adapted for thick mouse heart sections to increase the penetration depth of the confocal microscope lasers into the tissue. The adapted CUBIC clearing of the heart lets the antibody penetrate deeper into the tissue by a factor of five. The protocol shown here enables deep, high-resolution 3D image acquisition in the heart. This allows a much more accurate assessment of the cellular and structural changes that underlie heart diseases. (Author)

  8. 3D confocal imaging in CUBIC-cleared mouse heart

    International Nuclear Information System (INIS)

    Nehrhoff, I.; Bocancea, D.; Vaquero, J.; Vaquero, J.J.; Lorrio, M.T.; Ripoll, J.; Desco, M.; Gomez-Gaviro, M.V.

    2016-01-01

    Acquiring high-resolution 3D images of the heart makes it possible to study heart diseases in more detail. Here, the CUBIC (clear, unobstructed brain imaging cocktails and computational analysis) clearing protocol was adapted for thick mouse heart sections to increase the penetration depth of the confocal microscope lasers into the tissue. The adapted CUBIC clearing of the heart lets the antibody penetrate deeper into the tissue by a factor of five. The protocol shown here enables deep, high-resolution 3D image acquisition in the heart. This allows a much more accurate assessment of the cellular and structural changes that underlie heart diseases. (Author)

  9. 3D Image Display Courses for Information Media Students.

    Science.gov (United States)

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators.

  10. Surface Explorations : 3D Moving Images as Cartographies of Time

    NARCIS (Netherlands)

    Verhoeff, N.

    2016-01-01

    Moving images of travel and exploration have a long history. In this essay I will examine how the trope of navigation in 3D moving images can work towards an intimate and haptic encounter with other times and other places – elsewhen and elsewhere. The particular navigational construction of space in

  11. Military efforts in nanosensors, 3D printing, and imaging detection

    Science.gov (United States)

    Edwards, Eugene; Booth, Janice C.; Roberts, J. Keith; Brantley, Christina L.; Crutcher, Sihon H.; Whitley, Michael; Kranz, Michael; Seif, Mohamed; Ruffin, Paul

    2017-04-01

    A team of researchers and support organizations, affiliated with the Army Aviation and Missile Research, Development, and Engineering Center (AMRDEC), has initiated multidiscipline efforts to develop nano-based structures and components for advanced weaponry, aviation, and autonomous air/ground systems applications. The main objective of this research is to exploit unique phenomena for the development of novel technology to enhance warfighter capabilities and produce precision weaponry. The key technology areas that the authors are exploring include nano-based sensors, analysis of 3D printing constituents, and nano-based components for imaging detection. By integrating nano-based devices, structures, and materials into weaponry, the Army can revolutionize existing (and future) weaponry systems by significantly reducing the size, weight, and cost. The major research thrust areas include the development of carbon nanotube sensors to detect rocket motor off-gassing; the application of current methodologies to assess materials used for 3D printing; and the assessment of components to improve imaging seekers. The status of current activities, associated with these key areas and their implementation into AMRDEC's research, is outlined in this paper. Section #2 outlines output data, graphs, and overall evaluations of carbon nanotube sensors placed on a 16 element chip and exposed to various environmental conditions. Section #3 summarizes the experimental results of testing various materials and resulting components that are supplementary to additive manufacturing/fused deposition modeling (FDM). Section #4 recapitulates a preliminary assessment of the optical and electromechanical components of seekers in an effort to propose components and materials that can work more effectively.

  12. Digital 3D Borobudur – Integration of 3D surveying and modeling techniques

    Directory of Open Access Journals (Sweden)

    D. Suwardhi

    2015-08-01

    Full Text Available The Borobudur temple (Indonesia) is one of the greatest Buddhist monuments in the world, now listed as a UNESCO World Heritage Site. The present state of the temple is the result of restorations after being exposed to natural disasters several times. Today there is still a growing rate of deterioration of the building stones, whose causes need further research. Monitoring programs, supported at institutional level, have been effectively executed to observe the problem. The paper presents the latest efforts to digitally document the Borobudur Temple and its surrounding area in 3D with photogrammetric techniques. UAV and terrestrial images were acquired to completely digitize the temple and to produce DEMs, orthoimages and maps at 1:100 and 1:1000 scale. The results of the project are now employed by the local government organizations to manage the heritage area and plan new policies for the conservation and preservation of the UNESCO site. In order to help data management and policy makers, a web-based information system of the heritage area was also built to visualize and easily access all the data and the achieved 3D results.

  13. Image-based RSA: Roentgen stereophotogrammetric analysis based on 2D-3D image registration.

    Science.gov (United States)

    de Bruin, P W; Kaptein, B L; Stoel, B C; Reiber, J H C; Rozing, P M; Valstar, E R

    2008-01-01

    Image-based Roentgen stereophotogrammetric analysis (IBRSA) integrates 2D-3D image registration and conventional RSA. Instead of radiopaque RSA bone markers, IBRSA uses 3D CT data, from which digitally reconstructed radiographs (DRRs) are generated. Using 2D-3D image registration, the 3D pose of the CT is iteratively adjusted such that the generated DRRs resemble the 2D RSA images as closely as possible, according to an image matching metric. Effectively, by registering all 2D follow-up moments to the same 3D CT, the CT volume functions as common ground. In two experiments, using RSA and using a micromanipulator as gold standard, IBRSA has been validated on cadaveric and sawbone scapula radiographs, and good matching results have been achieved. The accuracy was lower than that of standard marker-based RSA, but higher than that of in vivo standard RSA. Because IBRSA does not require radiopaque markers, it adds functionality to the RSA method by opening new directions and possibilities for research, such as dynamic analyses using fluoroscopy on subjects without markers and computer navigation applications.
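
    The DRR generation at the core of IBRSA can be illustrated, in a deliberately simplified form, as a parallel projection of attenuation values through the CT volume; a real implementation would use perspective ray casting through the current 3D pose, which is not shown here.

```python
# Sketch: a deliberately simplified digitally reconstructed radiograph (DRR).
# A parallel projection along one axis illustrates the principle only; IBRSA
# itself casts perspective rays through the posed CT volume.
import numpy as np

def simple_drr(ct_hu, axis=0, mu_water=0.02):
    """Parallel-projection DRR: convert HU to attenuation and integrate along `axis`."""
    mu = mu_water * (1.0 + np.asarray(ct_hu, float) / 1000.0)   # HU -> linear attenuation
    mu = np.clip(mu, 0.0, None)
    line_integral = mu.sum(axis=axis)
    return np.exp(-line_integral)          # transmitted intensity, in [0, 1]

ct = np.random.uniform(-1000, 1500, size=(64, 128, 128))        # placeholder CT volume
drr = simple_drr(ct, axis=0)
print(drr.shape, drr.min(), drr.max())
```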

  14. 3D-vertical integration of sensors and electronics

    International Nuclear Information System (INIS)

    Lipton, R.

    2007-01-01

    Technologies are being developed which enable the vertical integration of sensors and electronics as well as multilayer electronic circuits. New thinning and wafer bonding techniques and the formation of small vias between resulting thin layers of electronics enable the design of dense integrated sensor/readout structures. We discuss candidate technologies based on SOI and bulk CMOS. A prototype 3D chip developed at Fermilab that incorporates three tiers of 0.18μm CMOS is described

  15. AN INTEGRATED PHOTOGRAMMETRIC AND PHOTOCLINOMETRIC APPROACH FOR PIXEL-RESOLUTION 3D MODELLING OF LUNAR SURFACE

    Directory of Open Access Journals (Sweden)

    W. C. Liu

    2018-04-01

    Full Text Available High-resolution 3D modelling of the lunar surface is important for lunar scientific research and exploration missions. Photogrammetry is known for 3D mapping and modelling from a pair of stereo images based on dense image matching. However, dense matching may fail in poorly textured areas and in situations when the image pair has large illumination differences. As a result, the actual achievable spatial resolution of the 3D model from photogrammetry is limited by the performance of dense image matching. On the other hand, photoclinometry (i.e., shape from shading) is characterised by its ability to recover pixel-wise surface shapes based on image intensity and imaging conditions such as illumination and viewing directions. More robust shape reconstruction through photoclinometry can be achieved by incorporating images acquired under different illumination conditions (i.e., photometric stereo). Introducing photoclinometry into photogrammetric processing can therefore effectively increase the achievable resolution of the mapping result while maintaining its overall accuracy. This research presents an integrated photogrammetric and photoclinometric approach for pixel-resolution 3D modelling of the lunar surface. First, photoclinometry interacts with stereo image matching to create robust and spatially well distributed dense conjugate points. Then, based on the 3D point cloud derived from photogrammetric processing of the dense conjugate points, photoclinometry is further introduced to derive the 3D positions of the unmatched points and to refine the final point cloud. The approach is able to produce one 3D point for each image pixel within the overlapping area of the stereo pair, so as to obtain pixel-resolution 3D models. Experiments using Lunar Reconnaissance Orbiter Camera - Narrow Angle Camera (LROC NAC) images show the superior performance of the approach compared with traditional photogrammetric techniques. The results and findings from this
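
    The photoclinometric principle (brightness encodes local slope) can be illustrated with a one-dimensional Lambertian sketch in which slopes recovered from brightness are integrated to heights; real pipelines, including the one above, work on 2D surfaces and handle albedo and more complex photometric models, so everything below is illustrative.

```python
# Sketch: one-dimensional photoclinometry under a Lambertian model with the sun
# in the profile plane. Brightness gives the local slope, which is integrated
# to heights. All numbers are illustrative.
import numpy as np

sun_elev = np.deg2rad(30.0)                 # sun elevation above the horizontal
dx = 1.0                                    # ground sample distance (m)

# Forward model: a synthetic height profile and its normalized brightness.
x = np.arange(0, 500, dx)
h_true = 8.0 * np.sin(2 * np.pi * x / 250.0)
slope = np.gradient(h_true, dx)
alpha = np.arctan(slope)                                   # slope angle toward the sun
brightness = np.sin(sun_elev + alpha) / np.sin(sun_elev)   # flat ground -> 1.0

# Photoclinometric inversion: brightness -> slope -> integrated heights.
alpha_rec = np.arcsin(np.clip(brightness * np.sin(sun_elev), -1.0, 1.0)) - sun_elev
h_rec = np.cumsum(np.tan(alpha_rec)) * dx
h_rec -= h_rec[0] - h_true[0]                              # fix the free integration constant

print("max height error (m):", np.abs(h_rec - h_true).max())
```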

  16. Naked-eye 3D imaging employing a modified MIMO micro-ring conjugate mirrors

    Science.gov (United States)

    Youplao, P.; Pornsuwancharoen, N.; Amiri, I. S.; Thieu, V. N.; Yupapin, P.

    2018-03-01

    In this work, the use of a micro-conjugate mirror that can produce a 3D image from the incident probe and display it is proposed. By using the proposed system together with the concept of naked-eye 3D imaging, a pixel, and a large volume of pixels, of a 3D image can be created and displayed for naked-eye perception, which is valuable for large-volume naked-eye 3D imaging applications. In operation, a naked-eye 3D image with a large pixel volume is constructed using the MIMO micro-ring conjugate mirror system. Thereafter, these 3D images, formed by the first micro-ring conjugate mirror system, can be transmitted through an optical link over a short distance and reconstructed via the recovery conjugate mirror at the other end of the transmission. The image transmission is simulated with the Fourier integral in MATLAB and compared to results from the Opti-wave program. Fourier convolution is also included for large-volume image transmission. In the simulation, the array of the micro-conjugate mirror system is designed and simulated for the MIMO system. Naked-eye 3D imaging is confirmed by the conjugate-mirror behaviour of both the input and output images, in terms of four-wave mixing (FWM), which is discussed and interpreted.

  17. A 3D printed helical antenna with integrated lens

    KAUST Repository

    Farooqui, Muhammad Fahad

    2015-10-26

    A novel antenna configuration comprising a helical antenna with an integrated lens is demonstrated in this work. The antenna is manufactured by a unique combination of 3D printing of plastic material (ABS) and inkjet printing of silver nano-particle based metallic ink. The integration of the lens enhances the gain by around 7 dB, giving a peak gain of about 16.4 dBi at 9.4 GHz. The helical antenna operates in the end-fire mode and radiates a left-hand circularly polarized (LHCP) pattern. The 3-dB axial ratio (AR) bandwidth of the antenna with the lens is 3.2 %. Due to the integration of the lens and the fully printed process, this antenna configuration offers high gain and a low manufacturing cost.

  18. Integration of aerial oblique imagery and terrestrial imagery for optimized 3D modeling in urban areas

    Science.gov (United States)

    Wu, Bo; Xie, Linfu; Hu, Han; Zhu, Qing; Yau, Eric

    2018-05-01

    Photorealistic three-dimensional (3D) models are fundamental to the spatial data infrastructure of a digital city, and have numerous potential applications in areas such as urban planning, urban management, urban monitoring, and urban environmental studies. Recent developments in aerial oblique photogrammetry based on aircraft or unmanned aerial vehicles (UAVs) offer promising techniques for 3D modeling. However, 3D models generated from aerial oblique imagery in urban areas with densely distributed high-rise buildings may show geometric defects and blurred textures, especially on building façades, due to problems such as occlusion and large camera tilt angles. Meanwhile, mobile mapping systems (MMSs) can capture terrestrial images of close-range objects from a complementary view on the ground at a high level of detail, but do not offer full coverage. The integration of aerial oblique imagery with terrestrial imagery offers promising opportunities to optimize 3D modeling in urban areas. This paper presents a novel method of integrating these two image types through automatic feature matching and combined bundle adjustment between them, and based on the integrated results to optimize the geometry and texture of the 3D models generated from aerial oblique imagery. Experimental analyses were conducted on two datasets of aerial and terrestrial images collected in Dortmund, Germany and in Hong Kong. The results indicate that the proposed approach effectively integrates images from the two platforms and thereby improves 3D modeling in urban areas.

  19. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    Directory of Open Access Journals (Sweden)

    M. Rafiei

    2013-09-01

    Full Text Available Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close range photogrammetry is widely utilized in many fields such as structure measurement, topographic surveying, architectural and archaeological surveying, etc. Non-contact photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both 3D geometry (structure) and camera pose (motion); it is commonly known as structure from motion (SfM). In this research a step-by-step approach to generate the 3D point cloud of a scene is considered. After taking images with a camera, we should detect corresponding points in each pair of views. Here an efficient SIFT method is used for image matching over large baselines. After that, we must retrieve the camera motion and the 3D positions of the matched feature points up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene causes parallel lines to appear non-parallel. The results of the SfM computation are much more useful if a metric reconstruction is obtained; therefore multiple-view Euclidean reconstruction is applied and discussed. To refine and achieve precise 3D points we use a more general and useful approach, namely bundle adjustment. At the end, two real cases (an excavation and a tower) have been reconstructed.
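
    A two-view sketch of such a pipeline, written with OpenCV, is shown below (SIFT matching, essential-matrix estimation, pose recovery and triangulation); the image file names and the intrinsic matrix K are placeholders, and the Euclidean upgrade and bundle adjustment discussed in the paper are not reproduced.

```python
# Sketch: a two-view structure-from-motion fragment with OpenCV. File names and
# intrinsics are placeholders; no bundle adjustment is performed.
import cv2
import numpy as np

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input images
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[2000.0, 0, 960], [0, 2000.0, 540], [0, 0, 1]])   # assumed intrinsics

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe's ratio test on k-nearest-neighbour matches.
matches = cv2.BFMatcher().knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Essential matrix with RANSAC, then the relative camera pose.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, inliers = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

# Triangulate inlier correspondences into 3-D points (up to scale).
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
keep = inliers.ravel() > 0
pts4d = cv2.triangulatePoints(P1, P2, pts1[keep].T, pts2[keep].T)
points3d = (pts4d[:3] / pts4d[3]).T
print("reconstructed", len(points3d), "points")
```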

  20. PHOTOGRAMMETRIC 3D BUILDING RECONSTRUCTION FROM THERMAL IMAGES

    Directory of Open Access Journals (Sweden)

    E. Maset

    2017-08-01

    Full Text Available This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that commercial Computer Vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about the position and attitude of the images or camera calibration parameters. Moreover, we propose a procedure based on the Iterative Closest Point (ICP) algorithm to create a model that combines the high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.
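
    A minimal version of the ICP alignment used to combine the RGB and TIR models might look like the sketch below; it assumes plain N x 3 point arrays and omits the outlier rejection and weighting a production implementation would need.

```python
# Sketch: minimal iterative closest point (ICP) alignment of two point clouds.
# No outlier rejection, scaling or colour weighting is included.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iters=30):
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iters):
        moved = source @ R.T + t
        _, idx = tree.query(moved)                 # nearest target point per source point
        corr = target[idx]
        mu_s, mu_c = source.mean(0), corr.mean(0)
        H = (source - mu_s).T @ (corr - mu_c)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_c - R @ mu_s
    return R, t

# Toy case: the "RGB" cloud is the "TIR" cloud translated by a known offset.
rng = np.random.default_rng(2)
tir_cloud = rng.uniform(-5, 5, size=(1000, 3))
shift = np.array([0.3, -0.2, 0.1])
rgb_cloud = tir_cloud + shift
R, t = icp(rgb_cloud, tir_cloud)
print("recovered translation:", t, "(expected approximately", -shift, ")")
```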

  1. 3D Point Cloud Reconstruction from Single Plenoptic Image

    Directory of Open Access Journals (Sweden)

    F. Murgia

    2016-06-01

    Full Text Available Novel plenoptic cameras sample the light field crossing the main camera lens. The information available in a plenoptic image must be processed in order to create the depth map of the scene from a single camera shot. In this paper, a novel algorithm for the reconstruction of a 3D point cloud of the scene from a single plenoptic image, taken with a consumer plenoptic camera, is proposed. Experimental analysis is conducted on several test images, and results are compared with state-of-the-art methodologies. The results are very promising, as the quality of the 3D point cloud from the plenoptic image is comparable with the quality obtained with current non-plenoptic methodologies, which require more than one image.

  2. AUTOMATED CELL SEGMENTATION WITH 3D FLUORESCENCE MICROSCOPY IMAGES.

    Science.gov (United States)

    Kong, Jun; Wang, Fusheng; Teodoro, George; Liang, Yanhui; Zhu, Yangyang; Tucker-Burden, Carol; Brat, Daniel J

    2015-04-01

    A large number of cell-oriented cancer investigations require an effective and reliable cell segmentation method on three-dimensional (3D) fluorescence microscopic images for quantitative analysis of cell biological properties. In this paper, we present a fully automated cell segmentation method that can detect cells from 3D fluorescence microscopic images. Guided by fluorescence imaging characteristics, we regularized the image gradient field by gradient vector flow (GVF) with an interpolated and smoothed data volume, and grouped voxels based on gradient modes identified by tracking the GVF field. Adaptive thresholding was then applied to voxels associated with the same gradient mode, where voxel intensities were enhanced by a multiscale cell filter. We applied the method to a large volume of 3D fluorescence imaging data of human brain tumor cells, with (1) low false-detection and miss rates for individual cells; and (2) negligible over- and under-segmentation incidences for clustered cells. Additionally, the concordance of cell morphometry structure between automated and manual segmentation was encouraging. These results suggest a promising 3D cell segmentation method applicable to cancer studies.
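
    The sketch below is a much-simplified stand-in for the pipeline above: Gaussian smoothing, Otsu thresholding and connected-component labelling replace the GVF-based voxel grouping and multiscale cell filtering, and the toy volume and size threshold are assumptions.

```python
# Sketch: a simplified 3-D cell segmentation (Gaussian smoothing, Otsu
# threshold, connected-component labelling) standing in for the paper's
# GVF-based voxel grouping and multiscale cell filtering.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def segment_cells_3d(volume, sigma=2.0, min_voxels=200):
    smooth = ndimage.gaussian_filter(np.asarray(volume, float), sigma)
    mask = smooth > threshold_otsu(smooth)
    labels, n = ndimage.label(mask)
    # Drop tiny components that are most likely noise.
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.flatnonzero(sizes >= min_voxels) + 1
    cleaned = np.where(np.isin(labels, keep), labels, 0)
    return cleaned, len(keep)

# Toy volume with two bright blobs on a noisy background.
vol = np.random.normal(0, 0.05, (64, 64, 64))
vol[20:30, 20:30, 20:30] += 1.0
vol[40:52, 40:52, 10:22] += 1.0
labels, count = segment_cells_3d(vol)
print("cells detected:", count)
```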

  3. Notes on integral identities for 3d supersymmetric dualities

    Science.gov (United States)

    Aghaei, Nezhla; Amariti, Antonio; Sekiguchi, Yuta

    2018-04-01

    Four dimensional N=2 Argyres-Douglas theories have been recently conjectured to be described by N=1 Lagrangian theories. Such models, once reduced to 3d, should be mirror dual to Lagrangian N=4 theories. This has been numerically checked through the matching of the partition functions on the three sphere. In this article, we provide an analytic derivation for this result in the A_{2n-1} case via hyperbolic hypergeometric integrals. We study the D_4 case as well, commenting on some open questions and possible resolutions. In the second part of the paper we discuss other integral identities leading to the matching of the partition functions in 3d dual pairs involving higher monopole superpotentials.

  4. 3D analysis of semiconductor devices: A combination of 3D imaging and 3D elemental analysis

    Science.gov (United States)

    Fu, Bianzhu; Gribelyuk, Michael A.

    2018-04-01

    3D analysis of semiconductor devices using a combination of scanning transmission electron microscopy (STEM) Z-contrast tomography and energy dispersive spectroscopy (EDS) elemental tomography is presented. 3D STEM Z-contrast tomography is useful in revealing the depth information of the sample. However, it suffers from contrast problems between materials with similar atomic numbers. Examples of EDS elemental tomography are presented using an automated EDS tomography system with batch data processing, which greatly reduces the data collection and processing time. 3D EDS elemental tomography reveals more in-depth information about the defect origin in semiconductor failure analysis. The influence of detector shadowing and X-ray absorption on the EDS tomography results is also discussed.

  5. Automated curved planar reformation of 3D spine images

    International Nuclear Information System (INIS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2005-01-01

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks
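
    A simplified curved planar reformation can be sketched as fitting a polynomial curve through a few points on the spine and resampling the volume along that curve, as below; the paper's method additionally models vertebral rotation and finds the curve automatically by optimization, neither of which is shown.

```python
# Sketch: a simplified curved planar reformation. A polynomial curve is fitted
# through a few (assumed user-picked) points on the spine and the volume is
# resampled along it with trilinear interpolation, yielding a "straightened"
# longitudinal image.
import numpy as np
from scipy.ndimage import map_coordinates

def curved_planar_reformation(volume, z_pts, y_pts, x_pts, half_width=40, degree=2):
    nz, ny, nx = volume.shape
    z = np.arange(nz, dtype=float)
    y_curve = np.polyval(np.polyfit(z_pts, y_pts, degree), z)   # spine curve y(z)
    x_curve = np.polyval(np.polyfit(z_pts, x_pts, degree), z)   # spine curve x(z)
    # For each slice, sample a left-right line of voxels centred on the curve.
    offsets = np.arange(-half_width, half_width + 1, dtype=float)
    zz = np.repeat(z[:, None], offsets.size, axis=1)
    yy = np.repeat(y_curve[:, None], offsets.size, axis=1)
    xx = x_curve[:, None] + offsets[None, :]
    coords = np.stack([zz, yy, xx])                  # shape (3, nz, 2*half_width+1)
    return map_coordinates(volume, coords, order=1, mode="nearest")

vol = np.random.rand(80, 128, 128)                   # placeholder CT volume
cpr = curved_planar_reformation(vol, z_pts=[0, 40, 79],
                                y_pts=[70, 60, 72], x_pts=[64, 62, 66])
print(cpr.shape)                                      # (80, 81)
```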

  6. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    Science.gov (United States)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. This described method provides much better data coverage and accuracy in feature areas with sharp features or details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

  7. Autonomous Planetary 3-D Reconstruction From Satellite Images

    DEFF Research Database (Denmark)

    Denver, Troelz

    1999-01-01

    ...is discussed. Based on such features, 3-D representations may be compiled from two or more 2-D satellite images. The main purposes of such a mapping system are extraction of landing sites, objects of scientific interest and general planetary surveying. All data processing is performed autonomously onboard...

  8. Use of a model for 3D image reconstruction

    International Nuclear Information System (INIS)

    Delageniere, S.; Grangeat, P.

    1991-01-01

    We propose software for 3D image reconstruction in transmission tomography. This software is based on the use of a model and of the RADON algorithm developed at LETI. The introduction of a Markovian model helps us to enhance contrast and to sharpen the natural transitions existing in the objects studied, whereas standard transform methods smooth them
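
    As a generic illustration of the reconstruction step (not the LETI RADON code, and without the Markovian regularisation), the sketch below simulates projections of a test object and reconstructs it by filtered back-projection with scikit-image.

```python
# Sketch: 2-D filtered back-projection with scikit-image as a generic stand-in
# for the transmission-tomography reconstruction step.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)           # 200 x 200 test object
theta = np.linspace(0.0, 180.0, 180, endpoint=False)

sinogram = radon(image, theta=theta)                  # simulated projections
reconstruction = iradon(sinogram, theta=theta)        # ramp-filtered back-projection

print("RMS error:", np.sqrt(np.mean((reconstruction - image) ** 2)))
```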

  9. Anomaly effects of arrays for 3d geoelectrical resistivity imaging ...

    African Journals Online (AJOL)

    The effectiveness of using a net of orthogonal or parallel sets of two-dimensional (2D) profiles for three- dimensional (3D) geoelectrical resistivity imaging has been evaluated. A series of 2D apparent resistivity data were generated over two synthetic models which represent geological or environmental conditions for a ...

  10. Integration of 3D camera systems on forklift trucks

    OpenAIRE

    Kleinert, Steffen; Overmeyer, Ludger

    2013-01-01

    This contribution describes the integration of time-of-flight 3D camera systems into the fork tips of an industrial truck. Using the integrated cameras and the evaluation of their images, an assistance system for the handling of load carriers was realized, which gives the driver of the industrial truck movement recommendations for optimizing the relative position between the fork tines and the load carrier or storage location. In addition to presenting the camera hardware used and the ...

  11. 3D Hyperpolarized C-13 EPI with Calibrationless Parallel Imaging

    DEFF Research Database (Denmark)

    Gordon, Jeremy W.; Hansen, Rie Beck; Shin, Peter J.

    2018-01-01

    With the translation of metabolic MRI with hyperpolarized 13C agents into the clinic, imaging approaches will require large volumetric FOVs to support clinical applications. Parallel imaging techniques will be crucial to increasing volumetric scan coverage while minimizing RF requirements and tem...... strategies to accelerate and undersample hyperpolarized 13C data using 3D blipped EPI acquisitions and multichannel receive coils, and demonstrated its application in a human study of [1-13C]pyruvate metabolism....

  12. Efficient reconfigurable architectures for 3D medical image compression

    OpenAIRE

    Afandi, Ahmad

    2010-01-01

    This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. Recently, the more widespread use of three-dimensional (3-D) imaging modalities, such as magnetic resonance imaging (MRI), computed tomography (CT), positron emission tomography (PET), and ultrasound (US) have generated a massive amount of volumetric data. These have provided an impetus to the development of other applications, in particular telemedicine and teleradiology. In thes...

  13. 3D Image Fusion to Localise Intercostal Arteries During TEVAR

    Directory of Open Access Journals (Sweden)

    G. Koutouzi

    Full Text Available Purpose: Preservation of intercostal arteries during thoracic aortic procedures reduces the risk of post-operative paraparesis. The origins of the intercostal arteries are visible on pre-operative computed tomography angiography (CTA), but rarely on intra-operative angiography. The purpose of this report is to suggest an image fusion technique for intra-operative localisation of the intercostal arteries during thoracic endovascular repair (TEVAR). Technique: The ostia of the intercostal arteries are identified and manually marked with rings on the pre-operative CTA. The optimal distal landing site in the descending aorta is determined and marked, allowing enough length for an adequate seal and attachment without covering more intercostal arteries than necessary. After 3D/3D fusion of the pre-operative CTA with an intra-operative cone-beam CT (CBCT), the markings are overlaid on the live fluoroscopy screen for guidance. The accuracy of the overlay is confirmed with digital subtraction angiography (DSA) and the overlay is adjusted when needed. Stent graft deployment is guided by the markings. The initial experience of this technique in seven patients is presented. Results: 3D image fusion was feasible in all cases. Follow-up CTA after 1 month revealed that all intercostal arteries planned for preservation were patent. None of the patients developed signs of spinal cord ischaemia. Conclusion: 3D image fusion can be used to localise the intercostal arteries during TEVAR. This may preserve some intercostal arteries and reduce the risk of post-operative spinal cord ischaemia. Keywords: TEVAR, Intercostal artery, Spinal cord ischaemia, 3D image fusion, Image guidance, Cone-beam CT

  14. 3-D repositioning and differential images of volumetric CT measurements

    International Nuclear Information System (INIS)

    Muench, B.; Rueegsegger, P.

    1993-01-01

    In quantitative computed tomography (QCT), time serial measurements are performed to detect a global bone density loss or to identify localized bone density changes. A prerequisite for an unambiguous analysis is the comparison of identical bone volumes. Usually, manual repositioning is too coarse. The authors therefore developed a mathematical procedure that allows matching two three-dimensional image volumes. The algorithm is based on correlation techniques. The procedure has been optimized and applied to computer-tomographic 3-D images of the human knee. It has been tested with both artificially created and in vivo measured image data. Furthermore, typical results of differential images calculated from real bone measurements are presented
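
    A simple stand-in for the correlation-based repositioning is FFT-based phase cross-correlation between the baseline and follow-up volumes, as sketched below; it estimates translation only, whereas the authors' procedure matches full 3D pose, and the synthetic volumes are assumptions.

```python
# Sketch: estimate the translational offset between a baseline and a follow-up
# 3-D volume with FFT-based phase cross-correlation, then reposition and form
# the differential image. Rotation is not handled in this sketch.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

baseline = np.random.rand(64, 96, 96)                           # placeholder QCT volume
followup = nd_shift(baseline, shift=(2.0, -3.0, 1.0), order=1)  # known misalignment

offset, error, _ = phase_cross_correlation(baseline, followup, upsample_factor=10)
print("shift to register follow-up (voxels):", offset)          # approx. (-2, 3, -1)

repositioned = nd_shift(followup, shift=offset, order=1)
difference = repositioned - baseline
print("residual RMS:", np.sqrt(np.mean(difference ** 2)))
```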

  15. Subsurface Profile Mapping using 3-D Compressive Wave Imaging

    Directory of Open Access Journals (Sweden)

    Hazreek Z A M

    2017-01-01

    Full Text Available Geotechnical site investigation related to subsurface profile mapping is commonly performed to provide valuable data for the design and construction stages, conventionally based on drilling techniques. From past experience, drilling techniques, particularly the borehole method, suffer from limitations: they are expensive, time consuming and offer limited data coverage. Hence, this study performs subsurface profile mapping using 3-D compressive wave imaging in order to minimize those constraints of the conventional method. Field measurement and data analysis of compressive waves (p-waves, vp) were performed using a seismic refraction survey (ABEM Terraloc MK 8, a 7 kg sledgehammer and 24 vertical geophones) and OPTIM (SeisOpt@Picker & SeisOpt@2D) software, respectively. Then, the 3-D compressive wave distribution of the subsurface studied was obtained using analysis with SURFER software. Based on the 3-D compressive wave image analyzed, it was found that the subsurface profile studied consists of three main layers representing top soil (vp = 376 – 600 m/s), weathered material (vp = 900 – 2600 m/s) and bedrock (vp > 3000 m/s). The thickness of each layer varied from 0 – 2 m (first layer), 2 – 20 m (second layer) and 20 m and over (third layer). Moreover, groundwater (vp = 1400 – 1600 m/s) starts to be detected at 2.0 m depth from the ground surface. This study has demonstrated that geotechnical site investigation data related to subsurface profiling can be obtained using 3-D compressive wave imaging. Furthermore, 3-D compressive wave imaging is based on a non-destructive principle of ground exploration and is therefore economical, faster, provides large data coverage and is sustainable for the environment.

  16. Whole-heart coronary MRA with 3D affine motion correction using 3D image-based navigation.

    Science.gov (United States)

    Henningsson, Markus; Prieto, Claudia; Chiribiri, Amedeo; Vaillant, Ghislain; Razavi, Reza; Botnar, René M

    2014-01-01

    Robust motion correction is necessary to minimize respiratory motion artefacts in coronary MR angiography (CMRA). The state-of-the-art method uses a 1D feet-head translational motion correction approach, and data acquisition is limited to a small window in the respiratory cycle, which prolongs the scan by a factor of 2-3. The purpose of this work was to implement 3D affine motion correction for Cartesian whole-heart CMRA using a 3D navigator (3D-NAV) to allow for data acquisition throughout the whole respiratory cycle. 3D affine transformations for different respiratory states (bins) were estimated by using 3D-NAV image acquisitions which were acquired during the startup profiles of a steady-state free precession sequence. The calculated 3D affine transformations were applied to the corresponding high-resolution Cartesian image acquisition which had been similarly binned, to correct for respiratory motion between bins. Quantitative and qualitative comparisons showed no statistical difference between images acquired with the proposed method and the reference method using a diaphragmatic navigator with a narrow gating window. We demonstrate that 3D-NAV and 3D affine correction can be used to acquire Cartesian whole-heart 3D coronary artery images with 100% scan efficiency with similar image quality as with the state-of-the-art gated and corrected method with approximately 50% scan efficiency. Copyright © 2013 Wiley Periodicals, Inc.
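
    A highly simplified, image-domain illustration of the bin-wise idea described above, assuming the 3-D affine transform of each respiratory bin relative to the reference bin has already been estimated from the 3D-NAV images (the actual method corrects the binned Cartesian acquisition itself; this sketch only warps and averages reconstructed bin volumes with scipy):

      import numpy as np
      from scipy.ndimage import affine_transform

      def combine_bins(bin_volumes, bin_affines):
          """Warp each respiratory-bin volume to the reference bin and average.

          bin_volumes : list of 3-D arrays, one per respiratory bin
          bin_affines : list of 4x4 homogeneous matrices mapping reference -> bin
          """
          ref = np.zeros_like(bin_volumes[0], dtype=float)
          for vol, T in zip(bin_volumes, bin_affines):
              # scipy maps output coordinates to input coordinates (pull-back)
              M, offset = T[:3, :3], T[:3, 3]
              ref += affine_transform(vol, M, offset=offset, order=1)
          return ref / len(bin_volumes)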

  17. Tipping solutions: emerging 3D nano-fabrication/ -imaging technologies

    Directory of Open Access Journals (Sweden)

    Seniutinas Gediminas

    2017-06-01

    Full Text Available The evolution of optical microscopy from an imaging technique into a tool for materials modification and fabrication is now being repeated with other characterization techniques, including scanning electron microscopy (SEM), focused ion beam (FIB) milling/imaging, and atomic force microscopy (AFM). Fabrication and in situ imaging of materials undergoing three-dimensional (3D) nano-structuring within a 1−100 nm resolution window is required for future manufacturing of devices. This level of precision is critical in enabling the cross-over between different device platforms (e.g. from electronics to micro-/nano-fluidics and/or photonics) within future devices that will be interfacing with biological and molecular systems in a 3D fashion. Prospective trends in electron, ion, and nano-tip based fabrication techniques are presented.

  18. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    For display, manipulation and analysis, biomedical image data usually need to be converted to an isotropic discretization through interpolation; cubic convolution interpolation is widely used because of its good tradeoff between computational cost and accuracy. In this paper, we present a unified framework for 3D medical image interpolation based on cubic convolution and formulate in detail six methods, each with a different sharpness control parameter. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with a recommendation for 3D medical images under different situations.
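
    For reference, the classical one-dimensional cubic convolution kernel with sharpness control parameter a (a = -0.5 gives the well-known Keys kernel) can be written as below; this is a generic sketch, not the exact parameterisation used in the paper:

      import numpy as np

      def cubic_kernel(s, a=-0.5):
          """Parametric cubic convolution kernel; `a` controls sharpness.

          s: array of signed distances from the sample position.
          """
          s = np.abs(np.asarray(s, dtype=float))
          w = np.zeros_like(s)
          inner = s <= 1
          outer = (s > 1) & (s < 2)
          w[inner] = (a + 2) * s[inner]**3 - (a + 3) * s[inner]**2 + 1
          w[outer] = a * s[outer]**3 - 5*a * s[outer]**2 + 8*a * s[outer] - 4*a
          return w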

  19. 3D Inkjet Printed Helical Antenna with Integrated Lens

    KAUST Repository

    Farooqui, Muhammad Fahad

    2016-08-30

    The gain of an antenna can be enhanced through the integration of a lens, although this technique has traditionally been restricted to planar antennas due to fabrication limitations of standard manufacturing processes. Here, through a unique combination of 3D and 2D inkjet printing of dielectric and metallic inks respectively, we demonstrate a lens that has been monolithically integrated with a non-planar antenna (helix) for the first time. Antenna measurements show that the integration of a Fresnel lens enhances the gain of a 2-turn helix by around 4.6 dB, which provides a peak gain of about 12.9 dBi at 8.8 GHz. The 3-dB axial ratio (AR) bandwidth of the antenna with the lens is 5.5%. This work also reports the complete characterization of this new process in terms of minimum feature sizes and achievable conductivities. Due to monolithic integration of the lens through a fully printed process, this antenna configuration offers high gain performance by using a low cost and rapid fabrication technique. © 2016 IEEE.

  20. Combining Different Modalities for 3D Imaging of Biological Objects

    CERN Document Server

    Tsyganov, E; Kulkarni, P; Mason, R; Parkey, R; Seliuonine, S; Shay, J; Soesbe, T; Zhezher, V; Zinchenko, A I

    2005-01-01

    A resolution enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a $^{57}$Co source and $^{98m}$Tc-MDP injected into mice. To further enhance the investigating power of the tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown in this paper, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible depth-dependent corrections, necessary for bioluminescence 3D reconstruction in biological objects. ...

  1. Combining different modalities for 3D imaging of biological objects

    International Nuclear Information System (INIS)

    Tsyganov, Eh.; Antich, P.; Kulkarni, P.; Mason, R.; Parkey, R.; Seliuonine, S.; Shay, J.; Soesbe, T.; Zhezher, V.; Zinchenko, A.

    2005-01-01

    A resolution enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. This device was constructed based on a depth-of-interaction measurement using a thick scintillator crystal and a position sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a 57Co source and 98mTc-MDP injected into mice. To further enhance the investigating power of the tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown, optical imaging permits a 3D reconstruction of the animal's skin surface, thus improving visualization and making possible depth-dependent corrections, necessary for bioluminescence 3D reconstruction in biological objects. This structural information can provide even more detail if x-ray tomography is used, as presented in the paper

  2. 3D shape recovery from image focus using Gabor features

    Science.gov (United States)

    Mahmood, Fahad; Mahmood, Jawad; Zeb, Ayesha; Iqbal, Javaid

    2018-04-01

    Recovering an accurate and precise depth map from a set of 2-D images of a target object, each acquired with different focus settings, is the ultimate goal of 3-D shape recovery. The focus measure algorithm plays an important role in this architecture, as it converts colour values into focus information that is then used to recover the depth map. This article introduces Gabor features as a focus measure for recovering a depth map from a set of 2-D images. The frequency and orientation representation of Gabor filter features is similar to that of the human visual system and is normally applied for texture representation. Owing to its low computational complexity, sharp focus measure curve, robustness to random noise and accuracy, it is a strong alternative to most recently proposed 3-D shape recovery approaches. The algorithm is investigated in depth on real image sequences and a synthetic image dataset, and its efficiency is compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we show that this approach, despite its simplicity, generates accurate results.
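
    A minimal shape-from-focus sketch in the spirit of the abstract; the single Gabor kernel, its parameters and the local-energy focus measure below are assumptions for illustration and do not reproduce the paper's exact Gabor feature set:

      import cv2
      import numpy as np

      def depth_from_focus(frames, ksize=9, sigma=2.0, lambd=6.0):
          """frames: list of grayscale images of the same scene at different focus."""
          kernel = cv2.getGaborKernel((ksize, ksize), sigma, 0, lambd, 0.5, 0)
          measures = []
          for f in frames:
              resp = cv2.filter2D(f.astype(np.float32), -1, kernel)
              # local energy of the Gabor response as the focus measure
              energy = cv2.boxFilter(resp * resp, -1, (ksize, ksize))
              measures.append(energy)
          # index of the best-focused frame per pixel is a proxy for depth
          return np.argmax(np.stack(measures), axis=0)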

  3. [3D imaging benefits in clinical practice of orthodontics].

    Science.gov (United States)

    Frèrejouand, Emmanuel

    2016-12-01

    3D imaging possibilities have expanded considerably in the orthodontic field over the last few years. In 2016, 3D imaging can be used to improve diagnosis and treatment planning through digital set-ups combined with CBCT. It is also relevant to updating orthodontic mechanics by creating visible or invisible customised appliances, and it forms the basis of numerous scientific studies. The author explains the progress 3D imaging brings to diagnosis and clinical work, but also highlights the requirements it creates. The daily use of these processes in orthodontic clinical practice needs to be regulated with regard to the benefit/risk ratio and patient satisfaction. Mastering the digital workflow created by these techniques requires the orthodontist and staff to change their habits. © EDP Sciences, SFODF, 2016.

  4. Integration of DYN3D inside the NURESIM platform

    International Nuclear Information System (INIS)

    Gomez T, A. M.; Sanchez E, V. H.; Kliem, S.; Gommlich, A.; Rohde, U.

    2010-10-01

    The NURISP project (Nuclear Reactor Integrated Simulation Project) is focused on the further development of the European Nuclear Reactor Simulation (NURESIM) platform for advanced numerical reactor design and safety analysis tools. NURESIM is based on an open source platform - called SALOME - that offers flexible and powerful capabilities for pre- and post-processing as well as for coupling of multi-physics and multi-scale solutions. The developments within the NURISP project are concentrated in the areas of reactor physics, thermal hydraulics, multi-physics, and sensitivity and uncertainty methodologies. The aim is to develop experimentally validated advanced simulation tools including capabilities for uncertainty and sensitivity quantification. A unique feature of NURESIM is the flexibility in selecting the solvers for the area of interest and the interpolation and mapping schemes according to the problem under consideration. Sub-Project 3 (SP3) of NURISP is focused on the development of multi-physics methodologies at different scales and covering different physical fields (neutronics, thermal hydraulics and pin mechanics). One of the objectives of SP3 is the development of multi-physics methodologies beyond the state-of-the-art for improved prediction of local safety margins and design at pin-by-pin scale. The Karlsruhe Institute of Technology and the Research Center Dresden-Rossendorf are involved in the integration of the reactor dynamics code DYN3D into the SALOME platform for coupling with a thermal hydraulic sub-channel code (FLICA4) at fuel assembly and pin level. In this paper, the main capabilities of the SALOME platform, the steps for the integration process of DYN3D as well as selected preliminary results obtained for the DYN3D/FLICA4 coupling are presented and discussed. Finally, the next steps for the validation of the coupling scheme on a fuel assembly and pin basis are given. (Author)

  5. Hybrid animation integrating 2D and 3D assets

    CERN Document Server

    O'Hailey, Tina

    2010-01-01

    Artists' imaginations continue to grow and stretch the boundaries of traditional animation. Successful animators adept and highly skilled in traditional animation mediums are branching out beyond traditional animation workflows and will often use multiple forms of animation in a single project. With the knowledge of 3D and 2D assets and the integration of multiple animation mediums into a single project, animators have a wealth of creative resources available for a project that is not limited to a specific animation medium, software package or workflow process. Enhance a poignant scene by choos

  6. Utilization of multiple frequencies in 3D nonlinear microwave imaging

    DEFF Research Database (Denmark)

    Jensen, Peter Damsgaard; Rubæk, Tonny; Mohr, Johan Jacob

    2012-01-01

    The use of multiple frequencies in a nonlinear microwave algorithm is considered. Using multiple frequencies allows for obtaining the improved resolution available at the higher frequencies while retaining the regularizing effects of the lower frequencies. However, a number of different challenges...... at lower frequencies are used as starting guesses for reconstructions at higher frequencies. The performance is illustrated using simulated 2-D data and data obtained with the 3-D DTU microwave imaging system....

  7. A novel 3D imaging system for strawberry phenotyping

    Directory of Open Access Journals (Sweden)

    Joe Q. He

    2017-11-01

    Full Text Available Background: Accurate and quantitative phenotypic data in plant breeding programmes is vital in breeding to assess the performance of genotypes and to make selections. Traditional strawberry phenotyping relies on the human eye to assess most external fruit quality attributes, which is time-consuming and subjective. 3D imaging is a promising high-throughput technique that allows multiple external fruit quality attributes to be measured simultaneously. Results: A low cost multi-view stereo (MVS) imaging system was developed, which captured data from 360° around a target strawberry fruit. A 3D point cloud of the sample was derived and analysed with custom-developed software to estimate berry height, length, width, volume, calyx size, colour and achene number. Analysis of these traits in 100 fruits showed good concordance with manual assessment methods. Conclusion: This study demonstrates the feasibility of an MVS based 3D imaging system for the rapid and quantitative phenotyping of seven agronomically important external strawberry traits. With further improvement, this method could be applied in strawberry breeding programmes as a cost effective phenotyping technique.

  8. A novel 3D imaging system for strawberry phenotyping.

    Science.gov (United States)

    He, Joe Q; Harrison, Richard J; Li, Bo

    2017-01-01

    Accurate and quantitative phenotypic data in plant breeding programmes is vital in breeding to assess the performance of genotypes and to make selections. Traditional strawberry phenotyping relies on the human eye to assess most external fruit quality attributes, which is time-consuming and subjective. 3D imaging is a promising high-throughput technique that allows multiple external fruit quality attributes to be measured simultaneously. A low cost multi-view stereo (MVS) imaging system was developed, which captured data from 360° around a target strawberry fruit. A 3D point cloud of the sample was derived and analysed with custom-developed software to estimate berry height, length, width, volume, calyx size, colour and achene number. Analysis of these traits in 100 fruits showed good concordance with manual assessment methods. This study demonstrates the feasibility of an MVS based 3D imaging system for the rapid and quantitative phenotyping of seven agronomically important external strawberry traits. With further improvement, this method could be applied in strawberry breeding programmes as a cost effective phenotyping technique.
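
    To illustrate the kind of trait extraction described in these two records, the sketch below computes simple size traits from a segmented fruit point cloud; the axis-to-trait mapping and millimetre units are assumptions, and the convex-hull volume is only an approximation of the berry volume:

      import numpy as np
      from scipy.spatial import ConvexHull

      def berry_traits(points):
          """points: (N, 3) array of x, y, z coordinates of one segmented fruit."""
          extents = points.max(axis=0) - points.min(axis=0)
          hull = ConvexHull(points)
          return {
              "width_mm": extents[0],
              "length_mm": extents[1],
              "height_mm": extents[2],
              "volume_mm3": hull.volume,   # convex-hull volume as an approximation
          }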

  9. Automated Identification of Fiducial Points on 3D Torso Images

    Directory of Open Access Journals (Sweden)

    Manas M. Kawale

    2013-01-01

    Full Text Available Breast reconstruction is an important part of the breast cancer treatment process for many women. Recently, 2D and 3D images have been used by plastic surgeons for evaluating surgical outcomes. Distances between different fiducial points are frequently used as quantitative measures for characterizing breast morphology. Fiducial points can be directly marked on subjects for direct anthropometry, or can be manually marked on images. This paper introduces novel algorithms to automate the identification of fiducial points in 3D images. Automating the process will make measurements of breast morphology more reliable, reducing the inter- and intra-observer bias. Algorithms to identify three fiducial points, the nipples, sternal notch, and umbilicus, are described. The algorithms used for localization of these fiducial points are formulated using a combination of surface curvature and 2D color information. Comparison of the 3D coordinates of automatically detected fiducial points and those identified manually, and geodesic distances between the fiducial points are used to validate algorithm performance. The algorithms reliably identified the location of all three of the fiducial points. We dedicate this article to our late colleague and friend, Dr. Elisabeth K. Beahm. Elisabeth was both a talented plastic surgeon and physician-scientist; we deeply miss her insight and her fellowship.

  10. Pavement cracking measurements using 3D laser-scan images

    International Nuclear Information System (INIS)

    Ouyang, W; Xu, B

    2013-01-01

    Pavement condition surveying is vital for pavement maintenance programs that ensure ride quality and traffic safety. This paper first introduces an automated pavement inspection system which uses a three-dimensional (3D) camera and a structured laser light to acquire dense transverse profiles of a pavement lane surface when it carries a moving vehicle. After the calibration, the 3D system can yield a depth resolution of 0.5 mm and a transverse resolution of 1.56 mm pixel⁻¹ at 1.4 m camera height from the ground. The scanning rate of the camera can be set to its maximum at 5000 lines s⁻¹, allowing the density of scanned profiles to vary with the vehicle's speed. The paper then illustrates the algorithms that utilize 3D information to detect pavement distress, such as transverse, longitudinal and alligator cracking, and presents the field tests on the system's repeatability when scanning a sample pavement in multiple runs at the same vehicle speed, at different vehicle speeds and under different weather conditions. The results show that this dedicated 3D system can capture accurate pavement images that detail surface distress, and obtain consistent crack measurements in repeated tests and under different driving and lighting conditions. (paper)
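
    A simplified version of the depth-based crack detection idea (illustrative only; the window size and drop threshold are assumptions, not the system's calibrated values): subtract a smoothed estimate of the road surface from each transverse profile and flag pixels that fall well below it.

      import numpy as np
      from scipy.ndimage import median_filter

      def crack_mask(height_mm, surface_window=51, drop_threshold=2.0):
          """height_mm: 2-D range image (rows = scanned profiles, surface height in mm)."""
          surface = median_filter(height_mm, size=(1, surface_window))  # per-profile road surface
          return (surface - height_mm) > drop_threshold                 # cracks sit below the surface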

  11. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    Science.gov (United States)

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

    In orthognathic surgery, 3D surgical planning that considers the balance between the front and back positions and the symmetry of the jawbone, as well as dental occlusion, is essential. In this study, a support system for orthognathic surgery has been developed to visualize changes in the mandible and the occlusal condition and to determine the optimum position for mandibular osteotomy. The system integrates a physical tooth model, manipulated by hand to determine the optimum occlusal position, with 3D-CT skeletal images (the 3D image display portion) that are displayed simultaneously in real time. This makes it possible to determine a mandibular position and posture that improves both the skeletal morphology and the occlusal condition. The realistic operation of the physical model together with the virtual 3D image display enabled the construction of a surgical simulation system that involves augmented reality.

  12. Tunable quantum interference in a 3D integrated circuit.

    Science.gov (United States)

    Chaboyer, Zachary; Meany, Thomas; Helt, L G; Withford, Michael J; Steel, M J

    2015-04-27

    Integrated photonics promises solutions to questions of stability, complexity, and size in quantum optics. Advances in tunable and non-planar integrated platforms, such as laser-inscribed photonics, continue to bring the realisation of quantum advantages in computation and metrology ever closer, perhaps most easily seen in multi-path interferometry. Here we demonstrate control of two-photon interference in a chip-scale 3D multi-path interferometer, showing a reduced periodicity and enhanced visibility compared to single photon measurements. Observed non-classical visibilities are widely tunable, and explained well by theoretical predictions based on classical measurements. With these predictions we extract Fisher information approaching a theoretical maximum. Our results open a path to quantum enhanced phase measurements.

  13. 3D super-resolution imaging with blinking quantum dots

    Science.gov (United States)

    Wang, Yong; Fruhwirth, Gilbert; Cai, En; Ng, Tony; Selvin, Paul R.

    2013-01-01

    Quantum dots are promising candidates for single molecule imaging due to their exceptional photophysical properties, including their intense brightness and resistance to photobleaching. They are also notorious for their blinking. Here we report a novel way to take advantage of quantum dot blinking to develop an imaging technique in three-dimensions with nanometric resolution. We first applied this method to simulated images of quantum dots, and then to quantum dots immobilized on microspheres. We achieved imaging resolutions (FWHM) of 8–17 nm in the x-y plane and 58 nm (on coverslip) or 81 nm (deep in solution) in the z-direction, approximately 3–7 times better than what has been achieved previously with quantum dots. This approach was applied to resolve the 3D distribution of epidermal growth factor receptor (EGFR) molecules at, and inside of, the plasma membrane of resting basal breast cancer cells. PMID:24093439

  14. An Algorithm for Fast Computation of 3D Zernike Moments for Volumetric Images

    OpenAIRE

    Hosny, Khalid M.; Hafez, Mohamed A.

    2012-01-01

    An algorithm was proposed for very fast and low-complexity computation of three-dimensional Zernike moments. The 3D Zernike moments were expressed in terms of exact 3D geometric moments where the later are computed exactly through the mathematical integration of the monomial terms over the digital image/object voxels. A new symmetry-based method was proposed to compute 3D Zernike moments with 87% reduction in the computational complexity. A fast 1D cascade algorithm was also employed to add m...
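
    The 3D geometric moments referred to above are straightforward to write down; a direct NumPy version of the plain voxel summation is sketched below (it does not include the exact per-voxel integration or the symmetry-based speed-ups of the paper):

      import numpy as np

      def geometric_moment_3d(volume, p, q, r):
          """m_pqr = sum_x sum_y sum_z x^p * y^q * z^r * f(x, y, z)."""
          nx, ny, nz = volume.shape
          x = np.arange(nx, dtype=float) ** p
          y = np.arange(ny, dtype=float) ** q
          z = np.arange(nz, dtype=float) ** r
          return np.einsum('i,j,k,ijk->', x, y, z, volume.astype(float))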

  15. Integrating Instrumental Data Provides the Full Science in 3D

    Science.gov (United States)

    Turrin, M.; Boghosian, A.; Bell, R. E.; Frearson, N.

    2017-12-01

    Looking at data sparks questions, discussion and insights. By integrating multiple data sets we deepen our understanding of how cryosphere processes operate. Field collected data provide measurements from multiple instruments supporting rapid insights. Icepod provides a platform focused on the integration of multiple instruments. Over the last three seasons, the ROSETTA-Ice project has deployed Icepod to comprehensively map the Ross Ice Shelf, Antarctica. This integrative data collection along with new methods of data visualization allows us to answer questions about ice shelf structure and evolution that arise during data processing and review. While data are vetted and archived in the field to confirm instruments are operating, upon return to the lab data are again reviewed for accuracy before full analysis. Recent review of shallow ice radar data from the Beardmore Glacier, an outlet glacier into the Ross Ice Shelf, presented an abrupt discontinuity in the ice surface. This sharp 8m surface elevation drop was originally interpreted as a processing error. Data were reexamined, integrating the simultaneously collected shallow and deep ice radar with lidar data. All the data sources showed the surface discontinuity, confirming the abrupt 8m drop in surface elevation. Examining high resolution WorldView satellite imagery revealed a persistent source for these elevation drops. The satellite imagery showed that this tear in the ice surface was only one piece of a larger pattern of "chatter marks" in ice that flows at a rate of 300 m/yr. The markings are buried over a distance of 30 km or after 100 years of travel down Beardmore Glacier towards the front of the Ross Ice Shelf. Using Icepod's lidar and cameras we map this chatter mark feature in 3D to reveal its full structure. We use digital elevation models from WorldView to map the other along flow chatter marks. In order to investigate the relationship between these surface features and basal crevasses, the deep ice

  16. Reconstruction, Processing and Display of 3D-Images

    International Nuclear Information System (INIS)

    Lenz, R.

    1986-01-01

    In the last few years, a number of methods have been developed which can produce true 3D images, volumes of density values. We review two of these techniques (confocal microscopy and X-ray tomography) which were used in the reconstruction of some of our images. The other images came from transmission electron microscopes, gamma cameras and magnetic resonance scanners. A new algorithm is suggested which uses projection onto convex sets to improve the depth resolution in the microscopy case. Since we use a TV monitor as the display device, we have to project 3D volumes to 2D images. We use the following types of projections: reprojections, range images, color-coded depth and shaded surface displays. Shaded surface displays use the surface gradient to compute the gray value in the projection. We describe how this gradient can be computed from the range image and from the original density volume. Normally we compute a whole series of projections where the volume is rotated by a few degrees between two projections. In a separate display session we can display these images in stereo and motion. We describe how noise reduction filters, gray value transformations, geometric manipulations, gradient filters, texture filters and binary techniques can be used to remove uninteresting points from the volume. Finally, a filter design strategy is developed which is based on the optimal basis function approach by Hummel. We show that for a large class of patterns, in images of arbitrary dimensions, the optimal basis functions are rotation-invariant operators as introduced by Danielsson in the 2D case. We also describe how the orientation of a pattern can be computed from its feature vector. (With 107 refs.) (author)
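
    A minimal sketch of the shaded-surface idea mentioned above: estimate the surface gradient from the range image and use a Lambertian model to compute the displayed gray value. The light direction is an assumption for illustration, not the author's exact shading:

      import numpy as np

      def shade_range_image(range_img, light=(0.3, 0.3, 1.0)):
          """range_img: 2-D array of depth values; returns a gray-value image in [0, 1]."""
          gy, gx = np.gradient(range_img.astype(float))
          normals = np.dstack((-gx, -gy, np.ones_like(range_img, dtype=float)))
          normals /= np.linalg.norm(normals, axis=2, keepdims=True)
          light = np.asarray(light, dtype=float)
          light /= np.linalg.norm(light)
          shading = normals @ light          # Lambertian cosine term per pixel
          return np.clip(shading, 0.0, 1.0)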

  17. IMAGE TO POINT CLOUD METHOD OF 3D-MODELING

    Directory of Open Access Journals (Sweden)

    A. G. Chibunichev

    2012-07-01

    Full Text Available This article describes a method for constructing 3D models of objects (buildings, monuments) based on digital images and a point cloud obtained by a terrestrial laser scanner. The first step is the automated determination of the exterior orientation parameters of a digital image, which requires finding corresponding points between the image and the point cloud. Before the search for corresponding points, a quasi-image of the point cloud is generated; the SIFT algorithm is then applied to the quasi-image and the real image to find corresponding points, from which the exterior orientation parameters of the image are calculated. The second step is the construction of the vector object model. Vectorization is performed by a PC operator in an interactive mode using a single image, and the spatial coordinates of the model are calculated automatically from the cloud points. In addition, automatic edge detection with interactive editing is available: edge detection is performed on the point cloud and on the image, with subsequent identification of the correct edges. Experimental studies of the method have demonstrated its efficiency in the case of building facade modeling.
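
    A sketch of the SIFT correspondence step between the quasi-image rendered from the point cloud and the real photograph; this is generic OpenCV usage with a standard ratio test, and the matcher settings and ratio are assumptions rather than the original workflow's parameters:

      import cv2

      def sift_correspondences(quasi_image, photo, ratio=0.75):
          """Both inputs are 8-bit grayscale images; returns matched keypoint pairs."""
          sift = cv2.SIFT_create()
          kp1, des1 = sift.detectAndCompute(quasi_image, None)
          kp2, des2 = sift.detectAndCompute(photo, None)
          matcher = cv2.BFMatcher(cv2.NORM_L2)
          pairs = []
          for candidates in matcher.knnMatch(des1, des2, k=2):
              if len(candidates) < 2:
                  continue
              m, n = candidates
              if m.distance < ratio * n.distance:      # Lowe's ratio test
                  pairs.append((kp1[m.queryIdx].pt, kp2[m.trainIdx].pt))
          return pairs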

  18. A filtering approach to image reconstruction in 3D SPECT

    International Nuclear Information System (INIS)

    Bronnikov, Andrei V.

    2000-01-01

    We present a new approach to three-dimensional (3D) image reconstruction using analytical inversion of the exponential divergent beam transform, which can serve as a mathematical model for cone-beam 3D SPECT imaging. We apply a circular cone-beam scan and assume constant attenuation inside a convex area with a known boundary, which is satisfactory in brain imaging. The reconstruction problem is reduced to an image restoration problem characterized by a shift-variant point spread function which is given analytically. The method requires two computation steps: backprojection and filtering. The modulation transfer function (MTF) of the filter is derived by means of an original methodology using the 2D Laplace transform. The filter is implemented in the frequency domain and requires 2D Fourier transform of transverse slices. In order to obtain a shift-invariant cone-beam projection-backprojection operator we resort to an approximation, assuming that the collimator has a relatively large focal length. Nevertheless, numerical experiments demonstrate surprisingly good results for detectors with relatively short focal lengths. The use of a wavelet-based filtering algorithm greatly improves the stability to Poisson noise. (author)

  19. Flash trajectory imaging of target 3D motion

    Science.gov (United States)

    Wang, Xinwei; Zhou, Yan; Fan, Songtao; He, Jun; Liu, Yuliang

    2011-03-01

    We present a flash trajectory imaging technique which can directly obtain a target's trajectory and realize non-contact measurement of motion parameters by range-gated imaging and time delay integration. Range-gated imaging gives the range of targets and realizes silhouette detection, which can directly extract targets from a complex background and decrease the complexity of moving target image processing. Time delay integration increases the information in a single image frame so that the motion trajectory can be obtained directly. In this paper, we have studied the algorithm for flash trajectory imaging and performed initial experiments which successfully obtained the trajectory of a falling badminton shuttlecock. Our research demonstrates that flash trajectory imaging is an effective approach to imaging target trajectories and can give motion parameters of moving targets.

  20. Motion robust high resolution 3D free-breathing pulmonary MRI using dynamic 3D image self-navigator.

    Science.gov (United States)

    Jiang, Wenwen; Ong, Frank; Johnson, Kevin M; Nagle, Scott K; Hope, Thomas A; Lustig, Michael; Larson, Peder E Z

    2018-06-01

    To achieve motion robust high resolution 3D free-breathing pulmonary MRI utilizing a novel dynamic 3D image navigator derived directly from imaging data. Five-minute free-breathing scans were acquired with a 3D ultrashort echo time (UTE) sequence with 1.25 mm isotropic resolution. From this data, dynamic 3D self-navigating images were reconstructed under locally low rank (LLR) constraints and used for motion compensation with one of two methods: a soft-gating technique to penalize the respiratory motion induced data inconsistency, and a respiratory motion-resolved technique to provide images of all respiratory motion states. Respiratory motion estimation derived from the proposed dynamic 3D self-navigator of 7.5 mm isotropic reconstruction resolution and a temporal resolution of 300 ms was successful for estimating complex respiratory motion patterns. This estimation improved image quality compared to respiratory belt and DC-based navigators. Respiratory motion compensation with soft-gating and respiratory motion-resolved techniques provided good image quality from highly undersampled data in volunteers and clinical patients. An optimized 3D UTE sequence combined with the proposed reconstruction methods can provide high-resolution motion robust pulmonary MRI. Feasibility was shown in patients who had irregular breathing patterns in which our approach could depict clinically relevant pulmonary pathologies. Magn Reson Med 79:2954-2967, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  1. Estimation of regional lung expansion via 3D image registration

    Science.gov (United States)

    Pan, Yan; Kumar, Dinesh; Hoffman, Eric A.; Christensen, Gary E.; McLennan, Geoffrey; Song, Joo Hyun; Ross, Alan; Simon, Brett A.; Reinhardt, Joseph M.

    2005-04-01

    A method is described to estimate regional lung expansion and related biomechanical parameters using multiple CT images of the lungs, acquired at different inflation levels. In this study, the lungs of two sheep were imaged utilizing a multi-detector row CT at different lung inflations in the prone and supine positions. Using the lung surfaces and the airway branch points for guidance, a 3D inverse consistent image registration procedure was used to match different lung volumes at each orientation. The registration was validated using a set of implanted metal markers. After registration, the Jacobian of the deformation field was computed to express regional expansion or contraction. The regional lung expansion at different pressures and different orientations is compared.
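
    The regional expansion measure mentioned above is the determinant of the Jacobian of the deformation; a sketch for a dense displacement field sampled on the voxel grid, assuming unit voxel spacing (an illustration, not the authors' registration code):

      import numpy as np

      def jacobian_determinant(disp):
          """disp: (3, nx, ny, nz) displacement field in voxel units.

          Returns det of the Jacobian of the mapping x -> x + disp(x);
          values > 1 indicate local expansion, < 1 local contraction.
          """
          J = np.empty(disp.shape[1:] + (3, 3))
          for i in range(3):
              grads = np.gradient(disp[i])            # d(disp_i)/dx, dy, dz
              for j in range(3):
                  J[..., i, j] = grads[j] + (1.0 if i == j else 0.0)
          return np.linalg.det(J)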

  2. Multiplexed phase-space imaging for 3D fluorescence microscopy.

    Science.gov (United States)

    Liu, Hsiou-Yuan; Zhong, Jingshan; Waller, Laura

    2017-06-26

    Optical phase-space functions describe spatial and angular information simultaneously; examples of optical phase-space functions include light fields in ray optics and Wigner functions in wave optics. Measurement of phase-space enables digital refocusing, aberration removal and 3D reconstruction. High-resolution capture of 4D phase-space datasets is, however, challenging. Previous scanning approaches are slow, light inefficient and do not achieve diffraction-limited resolution. Here, we propose a multiplexed method that solves these problems. We use a spatial light modulator (SLM) in the pupil plane of a microscope in order to sequentially pattern multiplexed coded apertures while capturing images in real space. Then, we reconstruct the 3D fluorescence distribution of our sample by solving an inverse problem via regularized least squares with a proximal accelerated gradient descent solver. We experimentally reconstruct a 101 Megavoxel 3D volume (1010×510×500µm with NA 0.4), demonstrating improved acquisition time, light throughput and resolution compared to scanning aperture methods. Our flexible patterning scheme further allows sparsity in the sample to be exploited for reduced data capture.
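
    The reconstruction step described above solves a regularized least-squares problem with a proximal accelerated gradient method; the sketch below is a textbook FISTA-style solver for min_x ||Ax - b||^2 + lam*||x||_1 with the forward model supplied as a pair of linear operators, not the authors' multiplexed coded-aperture model:

      import numpy as np

      def fista(A, At, b, x0, lam, step, n_iter=100):
          """A, At: forward operator and its adjoint (callables); L1 prox via soft-thresholding."""
          x = x0.copy()
          z = x0.copy()
          t = 1.0
          for _ in range(n_iter):
              grad = At(A(z) - b)                      # gradient of the data term (factor 2 folded into step)
              x_new = z - step * grad
              x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - step * lam, 0.0)
              t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
              z = x_new + ((t - 1.0) / t_new) * (x_new - x)
              x, t = x_new, t_new
          return x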

  3. Image-Based 3D Face Modeling System

    Directory of Open Access Journals (Sweden)

    Vladimir Vezhnevets

    2005-08-01

    Full Text Available This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a bunch of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation including all the optional manual corrections takes only 2∼3 minutes.

  4. Ultra-realistic 3-D imaging based on colour holography

    International Nuclear Information System (INIS)

    Bjelkhagen, H I

    2013-01-01

    A review of recent progress in colour holography is provided, with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue colour holograms, mainly of the Denisyuk type, and digitally-printed colour holograms are described, together with their recent improvements. An alternative to silver-halide materials are the panchromatic photopolymer materials, such as the DuPont and Bayer photopolymers, which are also covered. The light sources used to illuminate the recorded holograms are very important for obtaining ultra-realistic 3-D images; in particular, new light sources based on RGB LEDs are described, which show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering is highly dependent on the correct recording technique using the optimal recording laser wavelengths and on the availability of improved panchromatic recording materials, combined with new display light sources.

  5. 3-D brain image registration using optimal morphological processing

    International Nuclear Information System (INIS)

    Loncaric, S.; Dhawan, A.P.

    1994-01-01

    The three-dimensional (3-D) registration of Magnetic Resonance (MR) and Positron Emission Tomographic (PET) images of the brain is important for analysis of the human brain and its diseases. A procedure for optimization of 3-D morphological structuring elements, based on a genetic algorithm, is presented in the paper. The registration of the MR and PET images is carried out in two major phases. In the first phase, the Iterative Principal Axis Transform (IPAR) is used for initial registration. In the second phase, the optimal shape description method based on the Morphological Signature Transform (MST) is used for final registration. The morphological processing is used to improve the accuracy of the basic IPAR method. The brain ventricle is used as a landmark for MST registration. A near-optimal structuring element obtained by means of a genetic algorithm is used in MST to describe the shape of the ventricle. The method has been tested on a set of brain images, demonstrating the feasibility of the approach. (author). 11 refs., 3 figs
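
    The principal-axis idea behind the initial registration can be illustrated as follows: compute the centroid and principal axes of a binary brain mask from its second-order moments. This is a sketch of the underlying transform only; the iterative IPAR refinement and the MST step are not shown:

      import numpy as np

      def principal_axes(mask):
          """mask: 3-D binary volume; returns (centroid, 3x3 matrix of principal axes)."""
          coords = np.argwhere(mask)                  # (N, 3) voxel coordinates
          centroid = coords.mean(axis=0)
          centered = coords - centroid
          cov = centered.T @ centered / len(coords)
          eigvals, eigvecs = np.linalg.eigh(cov)      # columns of eigvecs are the axes
          order = np.argsort(eigvals)[::-1]           # from longest to shortest axis
          return centroid, eigvecs[:, order]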

  6. 3D integrated HYDRA simulations of hohlraums including fill tubes

    Science.gov (United States)

    Marinak, M. M.; Milovich, J.; Hammel, B. A.; Macphee, A. G.; Smalyuk, V. A.; Kerbel, G. D.; Sepke, S.; Patel, M. V.

    2017-10-01

    Measurements of fill tube perturbations from hydro growth radiography (HGR) experiments on the National Ignition Facility show spoke perturbations in the ablator radiating from the base of the tube. These correspond to the shadow of the 10 μm diameter glass fill tube cast by hot spots at early time. We present 3D integrated HYDRA simulations of these experiments which include the fill tube. Meshing techniques are described which were employed to resolve the fill tube structure and associated perturbations in the simulations. We examine the extent to which the specific illumination geometry necessary to accommodate a backlighter in the HGR experiment contributes to the spoke pattern. Simulations presented include high resolution calculations run on the Trinity machine operated by the Alliance for Computing at Extreme Scale (ACES) partnership. This work was performed under the auspices of the Lawrence Livermore National Security, LLC, (LLNS) under Contract No. DE-AC52-07NA27344.

  7. 3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques

    Directory of Open Access Journals (Sweden)

    D. Abate

    2014-06-01

    Full Text Available The monitoring of paintings, both on canvas and on wooden support, is a crucial issue for the preservation and conservation of this kind of artwork. Many environmental factors (e.g. humidity, temperature, illumination, etc.), as well as bad conservation practices (e.g. wrong restorations, inappropriate locations, etc.), can compromise the material conditions over time and deteriorate an artwork. The article presents an on-going project realized by a multidisciplinary team composed of the ENEA UTICT 3D GraphLab, the 3D Optical Metrology Unit of the Bruno Kessler Foundation and the Soprintendenza per i Beni Storico Artistici ed Etnoantropologici of Bologna (Italy). The goal of the project is the multi-temporal 3D documentation and monitoring of paintings – at the moment in a poor state of conservation – and the provision of metrics to quantify the deformations and damage.

  8. 2D imaging and 3D sensing data acquisition and mutual registration for painting conservation

    Science.gov (United States)

    Fontana, Raffaella; Gambino, Maria Chiara; Greco, Marinella; Marras, Luciano; Pampaloni, Enrico M.; Pelagotti, Anna; Pezzati, Luca; Poggi, Pasquale

    2005-01-01

    We describe the application of 2D and 3D data acquisition and mutual registration to the conservation of paintings. RGB color image acquisition, IR and UV fluorescence imaging, together with the more recent hyperspectral imaging (32 bands) are among the most useful techniques in this field. They generally are meant to provide information on the painting materials, on the employed techniques and on the object state of conservation. However, only when the various images are perfectly registered on each other and on the 3D model, no ambiguity is possible and safe conclusions may be drawn. We present the integration of 2D and 3D measurements carried out on two different paintings: "Madonna of the Yarnwinder" by Leonardo da Vinci, and "Portrait of Lionello d'Este", by Pisanello, both painted in the XV century.

  9. Single-breath-hold 3-D CINE imaging of the left ventricle using Cartesian sampling.

    Science.gov (United States)

    Wetzl, Jens; Schmidt, Michaela; Pontana, François; Longère, Benjamin; Lugauer, Felix; Maier, Andreas; Hornegger, Joachim; Forman, Christoph

    2018-02-01

    Our objectives were to evaluate a single-breath-hold approach for Cartesian 3-D CINE imaging of the left ventricle with a nearly isotropic resolution of [Formula: see text] and a breath-hold duration of [Formula: see text]19 s against a standard stack of 2-D CINE slices acquired in multiple breath-holds. Validation is performed with data sets from ten healthy volunteers. A Cartesian sampling pattern based on the spiral phyllotaxis and a compressed sensing reconstruction method are proposed to allow 3-D CINE imaging with high acceleration factors. The fully integrated reconstruction uses multiple graphics processing units to speed up the reconstruction. The 2-D CINE and 3-D CINE are compared based on ventricular function parameters, contrast-to-noise ratio and edge sharpness measurements. Visual comparisons of corresponding short-axis slices of 2-D and 3-D CINE show an excellent match, while 3-D CINE also allows reformatting to other orientations. Ventricular function parameters do not significantly differ from values based on 2-D CINE imaging. Reconstruction times are below 4 min. We demonstrate single-breath-hold 3-D CINE imaging in volunteers and three example patient cases, which features fast reconstruction and allows reformatting to arbitrary orientations.
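
    The Cartesian sampling pattern based on the spiral phyllotaxis can be sketched as a golden-angle phyllotaxis on the phase-encode plane, later snapped to the Cartesian grid; details such as interleaving, ordering and density compensation in the actual sequence are omitted and the parameterisation below is an assumption for illustration:

      import numpy as np

      def phyllotaxis_points(n_points, ny, nz):
          """Return integer (ky, kz) phase-encode indices for an ny x nz Cartesian grid."""
          golden_angle = np.pi * (3.0 - np.sqrt(5.0))       # about 137.5 degrees
          n = np.arange(n_points)
          r = np.sqrt((n + 0.5) / n_points)                 # radius grows as sqrt(n)
          theta = n * golden_angle
          ky = np.round((ny / 2 - 1) * r * np.cos(theta) + ny / 2).astype(int)
          kz = np.round((nz / 2 - 1) * r * np.sin(theta) + nz / 2).astype(int)
          return np.unique(np.stack([ky, kz], axis=1), axis=0)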

  10. 3D MODEL GENERATION USING OBLIQUE IMAGES ACQUIRED BY UAV

    Directory of Open Access Journals (Sweden)

    A. Lingua

    2017-07-01

    Full Text Available In recent years, many studies revealed the advantages of using airborne oblique images for obtaining improved 3D city models (including façades and building footprints). Here the acquisition and use of oblique images from a low cost and open source Unmanned Aerial Vehicle (UAV) for the 3D high-level-of-detail reconstruction of historical architectures is evaluated. The critical issues of such acquisitions (flight planning strategies, ground control point distribution, etc.) are described. Several problems should be considered in the flight planning: the best approach to cover the whole object with the minimum time of flight; visibility of vertical structures; occlusions due to the context; acquisition of all the parts of the objects (the closest and the farthest) with similar resolution; suitable camera inclination, and so on. In this paper a solution is proposed in order to acquire oblique images with only one flight. The data processing was realized using a Structure-from-Motion-based approach for point cloud generation using dense image-matching algorithms implemented in open source software. The achieved results are analysed considering some check points and some reference LiDAR data. The system was tested for surveying a historical architectural complex: the “Sacro Monte di Varallo Sesia” in the north-west of Italy. This study demonstrates that the use of oblique images acquired from a low cost UAV system and processed through open source software is an effective methodology to survey cultural heritage, characterized by limited accessibility, need for detail and rapidity of the acquisition phase, and often reduced budgets.

  11. Automatic segmentation of MRI head images by 3-D region growing method which utilizes edge information

    International Nuclear Information System (INIS)

    Jiang, Hao; Suzuki, Hidetomo; Toriwaki, Jun-ichiro

    1991-01-01

    This paper presents a 3-D segmentation method that automatically extracts soft tissue from multi-sliced MRI head images. MRI produces a sequence of two-dimensional (2-D) images which contain three-dimensional (3-D) information about organs. To utilize such information, we need effective algorithms to treat 3-D digital images and to extract organs and tissues of interest. We developed a method to extract the brain from MRI images which uses a region growing procedure and integrates information on the uniformity of gray levels with information on the presence of edge segments in the local area around the pixel of interest. First, we generate a kernel region, which is part of the brain tissue, by simple thresholding. Then we grow the region by means of a region growing algorithm under the control of 3-D edge information to obtain the region of the brain. Our method is rather simple because it uses basic 3-D image processing techniques like spatial difference. It is robust to variations in gray level inside a tissue since it also refers to the edge information in the process of region growing. Therefore, the method is flexible enough to be applicable to the segmentation of other images including soft tissues which have complicated shapes and fluctuation in gray levels. (author)
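
    A simplified sketch of region growing controlled by edge information, in the spirit of the method above; the seed, intensity tolerance, edge-strength limit and Sobel-based 3-D edge magnitude are assumptions and do not reproduce the original parameters:

      import numpy as np
      from collections import deque
      from scipy.ndimage import sobel

      def grow_region(volume, seed, intensity_tol=50.0, edge_limit=100.0):
          """Grow a 6-connected region from `seed`, refusing to cross strong 3-D edges."""
          edges = np.sqrt(sum(sobel(volume.astype(float), axis=a) ** 2 for a in range(3)))
          region = np.zeros(volume.shape, dtype=bool)
          region[seed] = True
          seed_val = float(volume[seed])
          queue = deque([seed])
          offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]
          while queue:
              x, y, z = queue.popleft()
              for dx, dy, dz in offsets:
                  p = (x + dx, y + dy, z + dz)
                  if all(0 <= p[i] < volume.shape[i] for i in range(3)) and not region[p]:
                      if abs(float(volume[p]) - seed_val) < intensity_tol and edges[p] < edge_limit:
                          region[p] = True
                          queue.append(p)
          return region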

  12. Extracting 3D layout from a single image using global image structures.

    Science.gov (United States)

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization and image and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting the pixel-level 3D layout, since it implies how pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure, and then uses the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image level. Using latent variables, we implicitly model the sublevel semantics of the image, which enriches the expressiveness of our model. After the image-level structure is obtained, it is used as prior knowledge to infer the pixel-wise 3D layout. Experiments show that the results of our model outperform the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior information yields accurate 3D scene layout segmentation.

  13. Recent progress in 3-D imaging of sea freight containers

    International Nuclear Information System (INIS)

    Fuchs, Theobald; Schön, Tobias; Sukowski, Frank; Dittmann, Jonas; Hanke, Randolf

    2015-01-01

    The inspection of very large objects like sea freight containers with X-ray Computed Tomography (CT) is an emerging technology. A complete 3-D CT scan of a sea freight container takes several hours. Of course, this is too slow to be applied to a large number of containers. However, the benefits of 3-D CT for sealed freight are obvious: detection of potential threats or illicit cargo without being confronted with legal complications, high time consumption and risks for the security personnel during a manual inspection. Recently, distinct progress has been made in the field of reconstruction from projections at only a relatively low number of angular positions. Instead of today's 500 to 1000 rotational steps, as needed for conventional CT reconstruction techniques, this new class of algorithms provides the potential to reduce the number of projection angles by approximately a factor of 10. The main drawback of these advanced iterative methods is their high computational cost. But as computational power is getting steadily cheaper, there will be practical applications of these complex algorithms in the foreseeable future. In this paper, we discuss the properties of iterative image reconstruction algorithms and show results of their application to CT of extremely large objects by scanning a sea freight container. A specific test specimen is used to quantitatively evaluate the image quality in terms of spatial and contrast resolution as a function of the number of projections
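
    The class of iterative algorithms referred to above can be illustrated with a simple SIRT-style update; this is a sketch only, assuming a precomputed sparse projection matrix A mapping image to projections, and real container-scale CT uses far more sophisticated regularized solvers:

      import numpy as np
      import scipy.sparse as sp

      def sirt(A, b, n_iter=50):
          """A: sparse (n_rays x n_voxels) system matrix, b: measured projections."""
          row_sums = np.asarray(A.sum(axis=1)).ravel()
          col_sums = np.asarray(A.sum(axis=0)).ravel()
          R = sp.diags(1.0 / np.maximum(row_sums, 1e-12))
          C = sp.diags(1.0 / np.maximum(col_sums, 1e-12))
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = x + C @ (A.T @ (R @ (b - A @ x)))    # simultaneous iterative update
              x = np.maximum(x, 0.0)                   # enforce non-negative attenuation
          return x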

  14. Acquiring 3D scene information from 2D images

    NARCIS (Netherlands)

    Li, Ping

    2011-01-01

    In recent years, people are becoming increasingly acquainted with 3D technologies such as 3DTV, 3D movies and 3D virtual navigation of city environments in their daily life. Commercial 3D movies are now commonly available for consumers. Virtual navigation of our living environment as used on a

  15. High Resolution 3D Radar Imaging of Comet Interiors

    Science.gov (United States)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D

  16. CT-image-based conformal brachytherapy of breast cancer. The significance of semi-3-D and 3-D treatment planning.

    Science.gov (United States)

    Polgár, C; Major, T; Somogyi, A; Takácsi-Nagy, Z; Mangel, L C; Forrai, G; Sulyok, Z; Fodor, J; Németh, G

    2000-03-01

    To compare the conventional 2-D, the simulator-guided semi-3-D and the recently developed CT-guided 3-D brachytherapy treatment planning in the interstitial radiotherapy of breast cancer. In 103 patients with T1-2, N0-1 breast cancer the tumor bed was clipped during breast conserving surgery. Fifty-two of them received boost brachytherapy after 46 to 50 Gy teletherapy and 51 patients were treated with brachytherapy alone via flexible implant tubes. Single, double and triple plane implant was used in 6, 89 and 8 cases, respectively. The dose of boost brachytherapy and sole brachytherapy prescribed to dose reference points was 3 times 4.75 Gy and 7 times 5.2 Gy, respectively. The positions of dose reference points varied according to the level (2-D, semi-3-D and 3-D) of treatment planning performed. The treatment planning was based on the 3-D reconstruction of the surgical clips, implant tubes and skin points. In all cases the implantations were planned with a semi-3-D technique aided by simulator. In 10 cases a recently developed CT-guided 3-D planning system was used. The semi-3-D and 3-D treatment plans were compared to hypothetical 2-D plans using dose-volume histograms and dose non-uniformity ratios. The values of mean central dose, mean skin dose, minimal clip dose, proportion of underdosaged clips and mean target surface dose were evaluated. The accuracy of tumor bed localization and the conformity of planning target volume and treated volume were also analyzed in each technique. With the help of conformal semi-3-D and 3-D brachytherapy planning we could define reference dose points, active source positions and dwell times individually. This technique decreased the mean skin dose with 22.2% and reduced the possibility of geographical miss. We could achieve the best conformity between the planning target volume and the treated volume with the CT-image based 3-D treatment planning, at the cost of worse dose homogeneity. The mean treated volume was reduced by 25

  17. 3D Image Analysis of Geomaterials using Confocal Microscopy

    Science.gov (United States)

    Mulukutla, G.; Proussevitch, A.; Sahagian, D.

    2009-05-01

    Confocal microscopy is one of the most significant advances in optical microscopy of the last century. It is widely used in biological sciences but its application to geomaterials lingers due to a number of technical problems. Potentially the technique can perform non-invasive testing on a laser illuminated sample that fluoresces using a unique optical sectioning capability that rejects out-of-focus light reaching the confocal aperture. Fluorescence in geomaterials is commonly induced using epoxy doped with a fluorochrome that is impregnated into the sample to enable discrimination of various features such as void space or material boundaries. However, for many geomaterials, this method cannot be used because they do not naturally fluoresce and because epoxy cannot be impregnated into inaccessible parts of the sample due to lack of permeability. As a result, the confocal images of most geomaterials that have not been pre-processed with extensive sample preparation techniques are of poor quality and lack the necessary image and edge contrast necessary to apply any commonly used segmentation techniques to conduct any quantitative study of its features such as vesicularity, internal structure, etc. In our present work, we are developing a methodology to conduct a quantitative 3D analysis of images of geomaterials collected using a confocal microscope with minimal amount of prior sample preparation and no addition of fluorescence. Two sample geomaterials, a volcanic melt sample and a crystal chip containing fluid inclusions are used to assess the feasibility of the method. A step-by-step process of image analysis includes application of image filtration to enhance the edges or material interfaces and is based on two segmentation techniques: geodesic active contours and region competition. Both techniques have been applied extensively to the analysis of medical MRI images to segment anatomical structures. Preliminary analysis suggests that there is distortion in the

  18. Learning from graphically integrated 2D and 3D representations improves retention of neuroanatomy

    Science.gov (United States)

    Naaz, Farah

    Visualizations in the form of computer-based learning environments are highly encouraged in science education, especially for teaching spatial material. Some spatial material, such as sectional neuroanatomy, is very challenging to learn. It involves learning the two dimensional (2D) representations that are sampled from the three dimensional (3D) object. In this study, a computer-based learning environment was used to explore the hypothesis that learning sectional neuroanatomy from a graphically integrated 2D and 3D representation will lead to better learning outcomes than learning from a sequential presentation. The integrated representation explicitly demonstrates the 2D-3D transformation and should lead to effective learning. This study was conducted using a computer graphical model of the human brain. There were two learning groups: Whole then Sections, and Integrated 2D3D. Both groups learned whole anatomy (3D neuroanatomy) before learning sectional anatomy (2D neuroanatomy). The Whole then Sections group then learned sectional anatomy using 2D representations only. The Integrated 2D3D group learned sectional anatomy from a graphically integrated 3D and 2D model. A set of tests for generalization of knowledge to interpreting biomedical images was conducted immediately after learning was completed. The order of presentation of the tests of generalization of knowledge was counterbalanced across participants to explore a secondary hypothesis of the study: preparation for future learning. If the computer-based instruction programs used in this study are effective tools for teaching anatomy, the participants should continue learning neuroanatomy with exposure to new representations. A test of long-term retention of sectional anatomy was conducted 4-8 weeks after learning was completed. The Integrated 2D3D group was better than the Whole then Sections group in retaining knowledge of difficult instances of sectional anatomy after the retention interval. The benefit

  19. Web based 3-D medical image visualization on the PC.

    Science.gov (United States)

    Kim, N; Lee, D H; Kim, J H; Kim, Y; Cho, H J

    1998-01-01

    With the recent advance of the Web and its associated technologies, information sharing in distributed computing environments has gained a great amount of attention from researchers in many application areas, such as medicine, engineering, and business. One basic requirement of distributed medical consultation systems is that geographically dispersed, disparate participants are allowed to exchange information readily with each other. Such software also needs to be supported on a broad range of computer platforms to increase the software's accessibility. In this paper, the development of a World-Wide-Web-based medical consultation system for radiology imaging is addressed to provide platform independence and greater accessibility. The system supports sharing of 3-dimensional objects. We use VRML (Virtual Reality Modeling Language), which is the de facto standard for 3-D modeling on the Web. 3-D objects are reconstructed from CT or MRI volume data in a VRML format, which can be viewed and manipulated easily in Web browsers with a VRML plug-in. The marching cubes method is used to transform scanned volume data sets into polygonal VRML surfaces. A decimation algorithm is adopted to reduce the number of meshes in the resulting VRML file. 3-D volume data are often very large in size, hence loading the data on PC-level computers requires a significant reduction of the size of the data while minimizing the loss of the original shape information. This is also important to decrease network delays. A prototype system has been implemented (http://cybernet5.snu.ac.kr/-cyber/mrivrml.html), and several sessions of experiments were carried out.
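
    The reconstruction pipeline sketched in this record (isosurface extraction with marching cubes, mesh reduction, VRML export for Web viewing) can be illustrated with a short Python sketch. This is not the authors' implementation; it uses scikit-image for marching cubes, omits the decimation step, and the toy volume, file name and iso-level are assumptions.

```python
# Sketch (not the authors' code): extract an isosurface from a CT/MRI volume
# with marching cubes and write it as a VRML 2.0 IndexedFaceSet, the format the
# abstract describes for viewing in a Web browser. Decimation is omitted here.
import numpy as np
from skimage import measure

def volume_to_vrml(volume, iso_level, path, spacing=(1.0, 1.0, 1.0)):
    # verts: (N, 3) float coordinates, faces: (M, 3) vertex indices
    verts, faces, _, _ = measure.marching_cubes(volume, level=iso_level, spacing=spacing)
    with open(path, "w") as f:
        f.write("#VRML V2.0 utf8\n")
        f.write("Shape {\n  geometry IndexedFaceSet {\n    coord Coordinate { point [\n")
        f.write(",\n".join("      %.3f %.3f %.3f" % tuple(v) for v in verts))
        f.write("\n    ] }\n    coordIndex [\n")
        f.write(",\n".join("      %d, %d, %d, -1" % tuple(tr) for tr in faces))
        f.write("\n    ]\n  }\n}\n")

# Toy volume: a sphere of radius 20 voxels inside a 64^3 grid.
z, y, x = np.mgrid[:64, :64, :64]
vol = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 20 ** 2).astype(np.float32)
volume_to_vrml(vol, iso_level=0.5, path="surface.wrl")
```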

  20. 3-D Image Analysis of Fluorescent Drug Binding

    Directory of Open Access Journals (Sweden)

    M. Raquel Miquel

    2005-01-01

    Full Text Available Fluorescent ligands provide the means of studying receptors in whole tissues using confocal laser scanning microscopy and have advantages over antibody- or non-fluorescence-based methods. Confocal microscopy provides large volumes of images to be measured. Histogram analysis of 3-D image volumes is proposed as a method of graphically displaying large amounts of volumetric image data to be quickly analyzed and compared. The fluorescent ligand BODIPY FL-prazosin (QAPB) was used in mouse aorta. Histogram analysis reports the amount of ligand-receptor binding under different conditions and the technique is sensitive enough to detect changes in receptor availability after antagonist incubation or genetic manipulations. QAPB binding was concentration dependent, causing concentration-related rightward shifts in the histogram. In the presence of 10 μM phenoxybenzamine (blocking agent), the QAPB (50 nM) histogram overlaps the autofluorescence curve. The histogram obtained for the α1D-knockout aorta lay to the left of that of the control and α1B-knockout aorta, indicating a reduction in α1D receptors. We have shown, for the first time, that it is possible to graphically display binding of a fluorescent drug to a biological tissue. Although our application is specific to adrenergic receptors, the general method could be applied to any volumetric, fluorescence-image-based assay.
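
    As a rough illustration of the histogram analysis described in this record, the sketch below pools the voxel intensities of a 3-D volume into a normalized histogram so that curves for different conditions can be overlaid. It is not the authors' code; the bin settings and the synthetic "treated"/"blocked" volumes are assumptions.

```python
# Sketch of the histogram idea: pool the voxel intensities of a 3-D confocal
# volume so that binding under different conditions (e.g. with and without a
# blocking agent) can be compared. Values below are illustrative assumptions.
import numpy as np

def volume_histogram(volume, bins=256, value_range=(0, 4095)):
    """Return (bin_centers, relative_frequency) for a 3-D image volume."""
    counts, edges = np.histogram(volume.ravel(), bins=bins, range=value_range)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, counts / counts.sum()

# Toy volumes standing in for ligand-treated and blocked tissue stacks.
rng = np.random.default_rng(0)
treated = rng.normal(900, 150, size=(32, 256, 256)).clip(0, 4095)
blocked = rng.normal(300, 80, size=(32, 256, 256)).clip(0, 4095)

c, f_treated = volume_histogram(treated)
_, f_blocked = volume_histogram(blocked)
# A rightward shift of f_treated relative to f_blocked indicates more
# ligand-receptor binding, as reported in the abstract.
print("modal intensity treated:", c[f_treated.argmax()], "blocked:", c[f_blocked.argmax()])
```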

  1. Myocardial strains from 3D displacement encoded magnetic resonance imaging

    International Nuclear Information System (INIS)

    Kindberg, Katarina; Haraldsson, Henrik; Sigfridsson, Andreas; Engvall, Jan; Ingels, Neil B Jr; Ebbers, Tino; Karlsson, Matts

    2012-01-01

    The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), make detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and its ability to resolve transmural strain variations. Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strain values agree with previously reported myocardial strains in normal human hearts.

  2. Myocardial strains from 3D displacement encoded magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Kindberg Katarina

    2012-04-01

    Full Text Available Abstract Background The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), make detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and its ability to resolve transmural strain variations. Conclusions Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strain values agree with previously reported myocardial strains in normal human hearts.
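
    The strain-estimation idea in this record can be illustrated with a minimal sketch: fit a local polynomial (here only first order, i.e. affine) to DENSE displacement samples by least squares, take its spatial gradient as the displacement gradient, and form the Green-Lagrange strain E = 0.5(FᵀF − I) with F = I + ∂u/∂X. The paper uses higher polynomial orders and real DENSE data; everything below is synthetic.

```python
# Minimal sketch of strain estimation from a locally fitted displacement field.
# A first-order (affine) polynomial stands in for the paper's higher orders.
import numpy as np

def local_strain(positions, displacements):
    """positions, displacements: (N, 3) arrays of material points and their
    measured displacements inside one local neighbourhood."""
    N = positions.shape[0]
    A = np.hstack([positions, np.ones((N, 1))])                 # affine design matrix
    coeffs, *_ = np.linalg.lstsq(A, displacements, rcond=None)  # shape (4, 3)
    grad_u = coeffs[:3, :].T                                    # du_i / dX_j
    F = np.eye(3) + grad_u                                      # deformation gradient
    E = 0.5 * (F.T @ F - np.eye(3))                             # Green-Lagrange strain
    return E

# Synthetic test: a 10% uniaxial stretch along x should give E_xx ~ 0.105.
rng = np.random.default_rng(1)
X = rng.uniform(-5, 5, size=(50, 3))
u = np.column_stack([0.10 * X[:, 0], np.zeros(50), np.zeros(50)])
print(local_strain(X, u + rng.normal(0, 1e-3, u.shape)))
```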

  3. Integrating 3D Visualization and GIS in Planning Education

    Science.gov (United States)

    Yin, Li

    2010-01-01

    Most GIS-related planning practices and education are currently limited to two-dimensional mapping and analysis although 3D GIS is a powerful tool to study the complex urban environment in its full spatial extent. This paper reviews current GIS and 3D visualization uses and development in planning practice and education. Current literature…

  4. 3D imaging of semiconductor components by discrete laminography

    Energy Technology Data Exchange (ETDEWEB)

    Batenburg, K. J. [Centrum Wiskunde and Informatica, P.O. Box 94079, NL-1090 GB Amsterdam, The Netherlands and iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, W. J.; Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  5. 3-D MR imaging of ectopia vasa deferentia

    Energy Technology Data Exchange (ETDEWEB)

    Goenka, Ajit Harishkumar; Parihar, Mohan; Sharma, Raju; Gupta, Arun Kumar [All India Institute of Medical Sciences (AIIMS), Department of Radiology, New Delhi (India); Bhatnagar, Veereshwar [All India Institute of Medical Sciences (AIIMS), Department of Paediatric Surgery, New Delhi (India)

    2009-11-15

    Ectopia vasa deferentia is a complex anomaly characterized by abnormal termination of the urethral end of the vas deferens into the urinary tract due to an incompletely understood developmental error of the distal Wolffian duct. Associated anomalies of the lower gastrointestinal tract and upper urinary tract are also commonly present due to closely related embryological development. Although around 32 cases have been reported in the literature, the MR appearance of this condition has not been previously described. We report a child with high anorectal malformation who was found to have ectopia vasa deferentia, crossed fused renal ectopia and type II caudal regression syndrome on MR examination. In addition to the salient features of this entity on reconstructed MR images, the important role of 3-D MRI in establishing an unequivocal diagnosis and its potential in facilitating individually tailored management is also highlighted. (orig.)

  6. A circular multifocal collimator for 3D SPECT imaging

    International Nuclear Information System (INIS)

    Guillemaud, R.; Grangeat, P.

    1993-01-01

    In order to improve the sensitivity of 3D Single Photon Emission Tomography (SPECT) images, a cone-beam collimator can be used. A new circular multifocal collimator is proposed. The multiple focal points are distributed on a transaxial circle which is the trajectory of the focal points during the circular acquisition. This distribution provides a strong focusing at the center of the detector, like a cone-beam collimator, with a good sensitivity, and a weak transaxial focusing at the periphery. A solution for an analytical multifocal reconstruction algorithm has been derived. The Grangeat algorithm is proposed for this purpose in order to reconstruct the region of interest with good sensitivity. (R.P.) 3 refs

  7. Automatic airline baggage counting using 3D image segmentation

    Science.gov (United States)

    Yin, Deyu; Gao, Qingji; Luo, Qijun

    2017-06-01

    The baggage number needs to be checked automatically during baggage self-check-in. A fast airline baggage counting method is proposed in this paper using image segmentation based on a height map projected from the scanned baggage 3D point cloud. There is a height drop at the actual edge of the baggage, so it can be detected by an edge detection operator. Closed edge chains are then formed from edge lines linked by morphological processing. Finally, the number of connected regions segmented by the closed chains is taken as the baggage number. A multi-bag experiment performed under different placement modes proves the validity of the method.
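
    A hedged sketch of the counting pipeline described above (height map, edge detection of the height drop, morphological closing, connected-region counting) is given below; the thresholds and the toy height map are illustrative assumptions rather than the authors' values.

```python
# Sketch of the baggage-counting pipeline on a synthetic height map.
import numpy as np
from scipy import ndimage

def count_bags(height_map, edge_thresh=0.05, closing_iter=2):
    # Gradient magnitude highlights the height drop at bag edges.
    gx = ndimage.sobel(height_map, axis=1)
    gy = ndimage.sobel(height_map, axis=0)
    edges = np.hypot(gx, gy) > edge_thresh
    # Morphological closing links broken edge fragments into closed chains.
    closed = ndimage.binary_closing(edges, iterations=closing_iter)
    # Regions enclosed by the closed edge chains and raised above the belt
    # are counted as bags.
    interior = ndimage.binary_fill_holes(closed) & ~closed
    _, n_regions = ndimage.label(interior & (height_map > 0.02))
    return n_regions

# Toy belt with two box-like bags of different heights (metres, assumed scale).
hm = np.zeros((200, 300))
hm[40:90, 50:120] = 0.30
hm[120:170, 180:260] = 0.45
print(count_bags(hm))  # expected: 2
```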

  8. 2D-3D image registration in diagnostic and interventional X-Ray imaging

    NARCIS (Netherlands)

    Bom, I.M.J. van der

    2010-01-01

    Clinical procedures that are conventionally guided by 2D x-ray imaging, may benefit from the additional spatial information provided by 3D image data. For instance, guidance of minimally invasive procedures with CT or MRI data provides 3D spatial information and visualization of structures that are

  9. Novel fully integrated computer system for custom footwear: from 3D digitization to manufacturing

    Science.gov (United States)

    Houle, Pascal-Simon; Beaulieu, Eric; Liu, Zhaoheng

    1998-03-01

    This paper presents a recently developed custom footwear system, which integrates 3D digitization technology, range image fusion techniques, a 3D graphical environment for corrective actions, parametric curved surface representation and computer numerical control (CNC) machining. In this system, a support designed with the help of biomechanics experts can stabilize the foot in a correct and neutral position. The foot surface is then captured by a 3D camera using active ranging techniques. Software using a library of documented foot pathologies suggests corrective actions on the orthosis. Three kinds of deformations can be achieved. The first method uses pad surfaces previously scanned by our 3D scanner, which can be easily mapped onto the foot surface to locally modify the surface shape. The second kind of deformation is the construction of B-spline surfaces by manipulating control points and modifying knot vectors in a 3D graphical environment to build the desired deformation. The last one is a manual electronic 3D pen, which may be of different shapes and sizes and has adjustable 'pressure' information. All applied deformations should respect G1 surface continuity, which ensures that the surface can accommodate a foot. Once the surface modification process is completed, the resulting data is sent to manufacturing software for CNC machining.

  10. GPU-accelerated denoising of 3D magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
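
    The parameter-comparison protocol of this study can be illustrated with a small sketch that scores several denoising settings against a clean reference using MSE and SSIM. It uses a 2-D slice and scikit-image's non-local means in place of the study's 3-D GPU filters, and the parameter values are arbitrary examples.

```python
# Sketch: sweep denoising parameters and score each result with MSE and SSIM
# against a clean reference, as in the study's quality comparison.
import numpy as np
from skimage import data, img_as_float
from skimage.util import random_noise
from skimage.restoration import denoise_nl_means
from skimage.metrics import mean_squared_error, structural_similarity

reference = img_as_float(data.camera())
noisy = random_noise(reference, mode="gaussian", var=0.01)

for h in (0.05, 0.1, 0.2):            # filtering strength (assumed values)
    for patch in (3, 5, 7):           # patch size (assumed values)
        den = denoise_nl_means(noisy, h=h, patch_size=patch,
                               patch_distance=6, fast_mode=True)
        mse = mean_squared_error(reference, den)
        ssim = structural_similarity(reference, den, data_range=1.0)
        print(f"h={h:.2f} patch={patch}: MSE={mse:.5f} SSIM={ssim:.3f}")
```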

  11. 3D Power Line Extraction from Multiple Aerial Images

    Directory of Open Access Journals (Sweden)

    Jaehong Oh

    2017-09-01

    Full Text Available Power lines are cables that carry electrical power from a power plant to an electrical substation. They must be connected between the tower structures in such a way that ensures minimum tension and sufficient clearance from the ground. Power lines can stretch and sag with the changing weather, eventually exceeding the planned tolerances. The excessive sags can then cause serious accidents, while compromising the durability of the power lines. We used photogrammetric techniques with a low-cost drone to achieve efficient 3D mapping of power lines that are often difficult to approach. Unlike the conventional image-to-object space approach, we used the object-to-image space approach using cubic grid points. We processed four strips of aerial images to automatically extract the power line points in the object space. Experimental results showed that the approach could successfully extract the positions of the power line points for power line generation and sag measurement with an elevation accuracy of a few centimeters.
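
    The object-to-image space idea mentioned above can be sketched as follows: candidate 3-D grid points are projected into each image with a pinhole model and would be retained only where they fall on power-line pixels in every view. The camera intrinsics, pose and grid below are invented placeholders, not the paper's calibration.

```python
# Sketch of object-to-image projection of candidate 3-D grid points.
import numpy as np

def project(points_xyz, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole camera."""
    cam = R @ points_xyz.T + t.reshape(3, 1)   # world -> camera frame
    uvw = K @ cam                              # camera -> image plane
    return (uvw[:2] / uvw[2]).T                # perspective divide

# A cube of candidate grid points around the expected power-line corridor.
xs, ys, zs = np.meshgrid(np.linspace(0, 10, 21),
                         np.linspace(0, 10, 21),
                         np.linspace(20, 30, 21), indexing="ij")
grid = np.column_stack([xs.ravel(), ys.ravel(), zs.ravel()])

K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])  # assumed intrinsics
R, t = np.eye(3), np.array([0.0, 0.0, 0.0])                    # assumed pose
pixels = project(grid, K, R, t)
# In the full method, 'pixels' would be tested against a line mask in each of
# the four image strips; points consistent in all views become 3-D line points.
print(pixels.shape)
```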

  12. 3-D interactive visualisation tools for Hi spectral line imaging

    NARCIS (Netherlands)

    van der Hulst, J. M.; Punzo, D.; Roerdink, J. B. T. M.

    2016-01-01

    Upcoming HI surveys will deliver such large datasets that automated processing using the full 3-D information to find and characterize HI objects is unavoidable. Full 3-D visualization is an essential tool for enabling qualitative and quantitative inspection and analysis of the 3-D data, which is

  13. High resolution 3D imaging of synchrotron generated microbeams

    Energy Technology Data Exchange (ETDEWEB)

    Gagliardi, Frank M., E-mail: frank.gagliardi@wbrc.org.au [Alfred Health Radiation Oncology, The Alfred, Melbourne, Victoria 3004, Australia and School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia); Cornelius, Iwan [Imaging and Medical Beamline, Australian Synchrotron, Clayton, Victoria 3168, Australia and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales 2500 (Australia); Blencowe, Anton [Division of Health Sciences, School of Pharmacy and Medical Sciences, The University of South Australia, Adelaide, South Australia 5000, Australia and Division of Information Technology, Engineering and the Environment, Mawson Institute, University of South Australia, Mawson Lakes, South Australia 5095 (Australia); Franich, Rick D. [School of Applied Sciences and Health Innovations Research Institute, RMIT University, Melbourne, Victoria 3000 (Australia); Geso, Moshi [School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia)

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.

  14. High resolution 3D imaging of synchrotron generated microbeams

    International Nuclear Information System (INIS)

    Gagliardi, Frank M.; Cornelius, Iwan; Blencowe, Anton; Franich, Rick D.; Geso, Moshi

    2015-01-01

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery
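
    The full-width-at-half-maximum measurements reported above can be illustrated with a short sketch that interpolates the half-maximum crossings of a 1-D intensity profile taken across a microbeam; the Gaussian test profile and the 0.09 μm pixel pitch are used only for illustration.

```python
# Sketch: linear-interpolated FWHM of a single-peak microbeam profile.
import numpy as np

def fwhm(profile, pixel_size_um):
    """Return the full width at half maximum of a single-peak profile, in um."""
    prof = profile - profile.min()
    half = prof.max() / 2.0
    above = np.where(prof >= half)[0]
    left, right = above[0], above[-1]
    # Interpolate the exact half-maximum crossings on both flanks.
    l = left - 1 + (half - prof[left - 1]) / (prof[left] - prof[left - 1])
    r = right + (half - prof[right]) / (prof[right + 1] - prof[right])
    return (r - l) * pixel_size_um

# Synthetic 25 um wide beam sampled at 0.09 um/pixel, as in the abstract.
x = np.arange(0, 100, 0.09)
sigma = 25.0 / 2.355                      # FWHM = 2.355 * sigma for a Gaussian
profile = np.exp(-0.5 * ((x - 50) / sigma) ** 2)
print(fwhm(profile, pixel_size_um=0.09))  # ~25 um
```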

  15. Automated 3D-Objectdocumentation on the Base of an Image Set

    Directory of Open Access Journals (Sweden)

    Sebastian Vetter

    2011-12-01

    Full Text Available Digital stereo-photogrammetry allows users an automatic evaluation of the spatial dimensions and the surface texture of objects. The integration of image analysis techniques simplifies the automation of evaluating large image sets and offers a high accuracy [1]. Due to the substantial similarities of stereoscopic image pairs, correlation techniques provide measurements of subpixel precision for corresponding image points. With the help of an automated point search algorithm, identical points in the image sets are used to associate pairs of images into stereo models and to group them. The identical points found in all images are the basis for calculating the relative orientation of each stereo model as well as for defining the relation between neighbouring stereo models. By using proper filter strategies, incorrect points are removed and the relative orientation of the stereo model can be determined automatically. With the help of 3D reference points or distances on the object, or a defined camera base distance, the stereo model is oriented absolutely. An adapted expansion and matching algorithm offers the possibility to scan the object surface automatically. The result is a three-dimensional point cloud; the scan resolution depends on image quality. With the integration of the iterative closest point algorithm (ICP), these partial point clouds are fitted into a total point cloud. In this way, 3D reference points are not necessary. With the help of the implemented triangulation algorithm a digital surface model (DSM) can be created. The texturing can be done automatically by using the images that were used for scanning the object surface. It is possible to texture the surface model directly or to generate orthophotos automatically. By using calibrated digital SLR cameras with a full-frame sensor a high accuracy can be reached. A big advantage is the possibility to control the accuracy and quality of the 3D object documentation with the resolution of the images. The
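
    The ICP step used above to merge partial point clouds can be sketched as follows (under simplifying assumptions, without the outlier rejection and convergence tests of a production pipeline): nearest neighbours are found with a k-d tree and the best rigid transform is solved with the SVD-based Kabsch method at each iteration.

```python
# Minimal ICP sketch: nearest-neighbour matching plus SVD (Kabsch) alignment.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(source, target, iterations=20):
    tree = cKDTree(target)
    src = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(src)          # nearest target point for each source point
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t
    return src

# Toy test: recover a known small rotation/translation of a random cloud.
rng = np.random.default_rng(2)
target = rng.uniform(-1, 1, size=(500, 3))
angle = np.deg2rad(10)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
source = target @ Rz.T + np.array([0.05, -0.02, 0.03])
print(np.abs(icp(source, target) - target).max())  # should shrink towards 0
```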

  16. An Algorithm for Fast Computation of 3D Zernike Moments for Volumetric Images

    Directory of Open Access Journals (Sweden)

    Khalid M. Hosny

    2012-01-01

    Full Text Available An algorithm was proposed for very fast and low-complexity computation of three-dimensional Zernike moments. The 3D Zernike moments were expressed in terms of exact 3D geometric moments, where the latter are computed exactly through the mathematical integration of the monomial terms over the digital image/object voxels. A new symmetry-based method was proposed to compute 3D Zernike moments with an 87% reduction in computational complexity. A fast 1D cascade algorithm was also employed to provide further complexity reduction. A comparison with existing methods was performed, where the numerical experiments and the complexity analysis confirmed the efficiency of the proposed method, especially with images and objects of large sizes.
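
    As background to the record above, the sketch below computes the 3-D geometric moments on which the Zernike computation builds, exploiting the separable tensor structure for speed; the mapping from geometric to Zernike moments and the symmetry and cascade optimizations of the paper are not reproduced.

```python
# Sketch of 3-D geometric moments m[p, q, r] = sum over voxels of
# z**p * y**q * x**r * f(z, y, x), computed axis by axis for efficiency.
import numpy as np

def geometric_moments_3d(volume, max_order):
    """Return m[p, q, r] for all p, q, r <= max_order."""
    nz, ny, nx = volume.shape
    # Map voxel indices to [-1, 1], as is usual before computing Zernike moments.
    z = np.linspace(-1, 1, nz); y = np.linspace(-1, 1, ny); x = np.linspace(-1, 1, nx)
    Pz = np.stack([z ** p for p in range(max_order + 1)])   # (order+1, nz)
    Py = np.stack([y ** q for q in range(max_order + 1)])
    Px = np.stack([x ** r for r in range(max_order + 1)])
    # Contract the volume against the three monomial bases one axis at a time.
    t = np.tensordot(Pz, volume, axes=([1], [0]))            # (p, ny, nx)
    t = np.tensordot(Py, t, axes=([1], [1]))                 # (q, p, nx)
    m = np.tensordot(Px, t, axes=([1], [2]))                 # (r, q, p)
    return np.transpose(m, (2, 1, 0))                        # index as m[p, q, r]

# Toy object: a centred ball; its first-order moments should be ~0 by symmetry.
zz, yy, xx = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
ball = (xx ** 2 + yy ** 2 + zz ** 2 <= 0.5 ** 2).astype(float)
m = geometric_moments_3d(ball, max_order=2)
print(m[0, 0, 0], m[1, 0, 0], m[0, 1, 0])
```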

  17. CT-image based conformal brachytherapy of breast cancer. The significance of semi-3-D and 3-D treatment planning

    International Nuclear Information System (INIS)

    Polgar, C.; Major, T.; Somogyi, A.; Takacsi-Nagy, Z.; Mangel, L.C.; Fodor, J.; Nemeth, G.; Forrai, G.; Sulyok, Z.

    2000-01-01

    In 103 patients with T1-2, N0-1 breast cancer the tumor bed was clipped during breast conserving surgery. Fifty-two of them received boost brachytherapy after 46 to 50 Gy teletherapy and 51 patients were treated with brachytherapy alone via flexible implant tubes. Single, double and triple plane implants were used in 6, 89 and 8 cases, respectively. The dose of boost brachytherapy and sole brachytherapy prescribed to dose reference points was 3 times 4.75 Gy and 7 times 5.2 Gy, respectively. The positions of dose reference points varied according to the level (2-D, semi-3-D and 3-D) of treatment planning performed. The treatment planning was based on the 3-D reconstruction of the surgical clips, implant tubes and skin points. In all cases the implantations were planned with a semi-3-D technique aided by a simulator. In 10 cases a recently developed CT-guided 3-D planning system was used. The semi-3-D and 3-D treatment plans were compared to hypothetical 2-D plans using dose-volume histograms and dose non-uniformity ratios. The values of mean central dose, mean skin dose, minimal clip dose, proportion of underdosed clips and mean target surface dose were evaluated. The accuracy of tumor bed localization and the conformity of planning target volume and treated volume were also analyzed in each technique. Results: With the help of conformal semi-3-D and 3-D brachytherapy planning we could define reference dose points, active source positions and dwell times individually. This technique decreased the mean skin dose by 22.2% and reduced the possibility of geographical miss. We could achieve the best conformity between the planning target volume and the treated volume with the CT-image based 3-D treatment planning, at the cost of worse dose homogeneity. The mean treated volume was reduced by 25.1% with semi-3-D planning; however, it was increased by 16.2% with 3-D planning, compared to the 2-D planning. (orig.)
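
    The dose-volume-histogram comparison used above can be illustrated with a minimal sketch that computes a cumulative DVH for a target mask on a dose grid; the exponential dose model, grid and mask below are synthetic placeholders, not clinical data.

```python
# Sketch of a cumulative dose-volume histogram (DVH) on a synthetic dose grid.
import numpy as np

def cumulative_dvh(dose, mask, dose_levels):
    """Fraction of the masked volume receiving at least each dose level."""
    d = dose[mask]
    return np.array([(d >= level).mean() for level in dose_levels])

# Toy dose grid: exponential fall-off around an implant axis, plus a target mask.
z, y, x = np.mgrid[:40, :80, :80].astype(float)
r = np.hypot(y - 40, x - 40)
dose = 5.2 * np.exp(-r / 12.0)                  # Gy, arbitrary model
target = r < 20

levels = np.linspace(0, 5.2, 27)
dvh = cumulative_dvh(dose, target, levels)
# e.g. coverage at the 4.75 Gy reference dose of the boost fractionation:
print("V(4.75 Gy) =", cumulative_dvh(dose, target, [4.75])[0])
```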

  18. Silhouette-based approach of 3D image reconstruction for automated image acquisition using robotic arm

    Science.gov (United States)

    Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.

    2017-06-01

    This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. The silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles around the object. In addition, an analysis of the effect of different numbers of sequential images on the accuracy of the 3D model reconstruction was also carried out with a fixed projection angle of the camera. The factors affecting the 3D reconstruction are discussed and the overall result of the analysis is summarized for the prototype imaging platform.
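
    A hedged sketch of the silhouette (visual hull) idea follows: a voxel is kept only if its projection falls inside the object silhouette in every view. Orthographic projections around a vertical axis stand in for the servo-driven camera of the platform, and all geometry is synthetic.

```python
# Sketch of silhouette-based voxel carving (visual hull) with orthographic views.
import numpy as np

def carve(voxel_centers, silhouettes, angles_deg, img_size, scale):
    keep = np.ones(len(voxel_centers), dtype=bool)
    cx = cy = img_size // 2
    for sil, ang in zip(silhouettes, angles_deg):
        a = np.deg2rad(ang)
        # Rotate the scene about the vertical axis, then drop the depth axis.
        u = voxel_centers[:, 0] * np.cos(a) + voxel_centers[:, 1] * np.sin(a)
        v = voxel_centers[:, 2]
        px = np.clip((u * scale + cx).astype(int), 0, img_size - 1)
        py = np.clip((v * scale + cy).astype(int), 0, img_size - 1)
        keep &= sil[py, px]                 # must lie inside this silhouette
    return voxel_centers[keep]

# Synthetic silhouettes of a sphere look identical from every angle.
img_size, scale = 128, 40.0
yy, xx = np.mgrid[:img_size, :img_size]
sphere_sil = (xx - 64) ** 2 + (yy - 64) ** 2 < (0.8 * scale) ** 2

grid = np.stack(np.meshgrid(*[np.linspace(-1, 1, 40)] * 3, indexing="ij"), -1).reshape(-1, 3)
hull = carve(grid, [sphere_sil] * 8, angles_deg=np.arange(0, 360, 45),
             img_size=img_size, scale=scale)
print(len(hull), "voxels survive carving out of", len(grid))
```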

  19. 3D Navigation and Integrated Hazard Display in Advanced Avionics: Workload, Performance, and Situation Awareness

    Science.gov (United States)

    Wickens, Christopher D.; Alexander, Amy L.

    2004-01-01

    We examined the ability for pilots to estimate traffic location in an Integrated Hazard Display, and how such estimations should be measured. Twelve pilots viewed static images of traffic scenarios and then estimated the outside world locations of queried traffic represented in one of three display types (2D coplanar, 3D exocentric, and split-screen) and in one of four conditions (display present/blank crossed with outside world present/blank). Overall, the 2D coplanar display best supported both vertical (compared to 3D) and lateral (compared to split-screen) traffic position estimation performance. Costs of the 3D display were associated with perceptual ambiguity. Costs of the split screen display were inferred to result from inappropriate attention allocation. Furthermore, although pilots were faster in estimating traffic locations when relying on memory, accuracy was greatest when the display was available.

  20. 3D Seismic Imaging over a Potential Collapse Structure

    Science.gov (United States)

    Gritto, Roland; O'Connell, Daniel; Elobaid Elnaiem, Ali; Mohamed, Fathelrahman; Sadooni, Fadhil

    2016-04-01

    The Middle-East has seen a recent boom in construction including the planning and development of complete new sub-sections of metropolitan areas. Before planning and construction can commence, however, the development areas need to be investigated to determine their suitability for the planned project. Subsurface parameters such as the type of material (soil/rock), thickness of top soil or rock layers, depth and elastic parameters of basement, for example, comprise important information needed before a decision concerning the suitability of the site for construction can be made. A similar problem arises in environmental impact studies, when subsurface parameters are needed to assess the geological heterogeneity of the subsurface. Environmental impact studies are typically required for each construction project, particularly for the scale of the aforementioned building boom in the Middle East. The current study was conducted in Qatar at the location of a future highway interchange to evaluate a suite of 3D seismic techniques in their effectiveness to interrogate the subsurface for the presence of karst-like collapse structures. The survey comprised an area of approximately 10,000 m2 and consisted of 550 source- and 192 receiver locations. The seismic source was an accelerated weight drop while the geophones consisted of 3-component 10 Hz velocity sensors. At present, we analyzed over 100,000 P-wave phase arrivals and performed high-resolution 3-D tomographic imaging of the shallow subsurface. Furthermore, dispersion analysis of recorded surface waves will be performed to obtain S-wave velocity profiles of the subsurface. Both results, in conjunction with density estimates, will be utilized to determine the elastic moduli of the subsurface rock layers.

  1. Enhanced 3D fluorescence live cell imaging on nanoplasmonic substrate

    International Nuclear Information System (INIS)

    Gartia, Manas Ranjan; Hsiao, Austin; Logan Liu, G; Sivaguru, Mayandi; Chen Yi

    2011-01-01

    We have created a randomly distributed nanocone substrate on silicon coated with silver for surface-plasmon-enhanced fluorescence detection and 3D cell imaging. Optical characterization of the nanocone substrate showed it can support several plasmonic modes (in the 300-800 nm wavelength range) that can be coupled to a fluorophore on the surface of the substrate, which gives rise to the enhanced fluorescence. Spectral analysis suggests that a nanocone substrate can create more excitons and shorter lifetime in the model fluorophore Rhodamine 6G (R6G) due to plasmon resonance energy transfer from the nanocone substrate to the nearby fluorophore. We observed three-dimensional fluorescence enhancement on our substrate shown from the confocal fluorescence imaging of chinese hamster ovary (CHO) cells grown on the substrate. The fluorescence intensity from the fluorophores bound on the cell membrane was amplified more than 100-fold as compared to that on a glass substrate. We believe that strong scattering within the nanostructured area coupled with random scattering inside the cell resulted in the observed three-dimensional enhancement in fluorescence with higher photostability on the substrate surface.

  2. Enhanced 3D fluorescence live cell imaging on nanoplasmonic substrate

    Energy Technology Data Exchange (ETDEWEB)

    Gartia, Manas Ranjan [Department of Nuclear, Plasma and Radiological Engineering, University of Illinois, Urbana, IL 61801 (United States); Hsiao, Austin; Logan Liu, G [Department of Bioengineering, University of Illinois, Urbana, IL 61801 (United States); Sivaguru, Mayandi [Institute for Genomic Biology, University of Illinois, Urbana, IL 61801 (United States); Chen Yi, E-mail: loganliu@illinois.edu [Department of Electrical and Computer Engineering, University of Illinois, Urbana, IL 61801 (United States)

    2011-09-07

    We have created a randomly distributed nanocone substrate on silicon coated with silver for surface-plasmon-enhanced fluorescence detection and 3D cell imaging. Optical characterization of the nanocone substrate showed it can support several plasmonic modes (in the 300-800 nm wavelength range) that can be coupled to a fluorophore on the surface of the substrate, which gives rise to the enhanced fluorescence. Spectral analysis suggests that a nanocone substrate can create more excitons and shorter lifetime in the model fluorophore Rhodamine 6G (R6G) due to plasmon resonance energy transfer from the nanocone substrate to the nearby fluorophore. We observed three-dimensional fluorescence enhancement on our substrate shown from the confocal fluorescence imaging of chinese hamster ovary (CHO) cells grown on the substrate. The fluorescence intensity from the fluorophores bound on the cell membrane was amplified more than 100-fold as compared to that on a glass substrate. We believe that strong scattering within the nanostructured area coupled with random scattering inside the cell resulted in the observed three-dimensional enhancement in fluorescence with higher photostability on the substrate surface.

  3. 3-D Imaging by Laser Radar and Applications in Preventing and Combating Crime and Terrorism

    National Research Council Canada - National Science Library

    Letalick, Dietmar; Ahlberg, Joergen; Andersson, Pierre; Chevalier, Tomas; Groenwall, Christina; Larsson, Hakan; Persson, Asa; Klasen, Lena

    2004-01-01

    This paper describes the ongoing research on 3-dimensional (3-D) imaging at FOI. Specifically, we address the new possibilities brought by laser radars, focusing on systems for high resolution 3-D imaging...

  4. Preparing diagnostic 3D images for image registration with planning CT images

    International Nuclear Information System (INIS)

    Tracton, Gregg S.; Miller, Elizabeth P.; Rosenman, Julian; Chang, Sha X.; Sailer, Scott; Boxwala, Azaz; Chaney, Edward L.

    1997-01-01

    Purpose: Pre-radiotherapy (pre-RT) tomographic images acquired for diagnostic purposes often contain important tumor and/or normal tissue information which is poorly defined or absent in planning CT images. Our two years of clinical experience have shown that computer-assisted 3D registration of pre-RT images with planning CT images often plays an indispensable role in accurate treatment volume definition. Often the only available format of the diagnostic images is film, from which the original 3D digital data must be reconstructed. In addition, any digital data, whether reconstructed or not, must be put into a form suitable for incorporation into the treatment planning system. The purpose of this investigation was to identify all problems that must be overcome before these data are suitable for clinical use. Materials and Methods: In the past two years we have 3D-reconstructed 300 diagnostic images from film and digital sources. As each problem was discovered we built a software tool to correct it. In time we collected a large set of such tools and found that they must be applied in a specific order to achieve the correct reconstruction. Finally, a toolkit (ediScan) was built that made all these tools available in the proper manner via a pleasant yet efficient mouse-based user interface. Results: Problems we discovered included different magnifications, shifted display centers, non-parallel image planes, image planes not perpendicular to the long axis of the table-top (shearing), irregularly spaced scans, non-contiguous scan volumes, multiple slices per film, different orientations for slice axes (e.g. left-right reversal), slices printed at window settings corresponding to tissues of interest for diagnostic purposes, and printing artifacts. We have learned that the specific steps to correct these problems must be applied in a specific order. Also, we found that fast feedback and large image capacity (at least 2000 x 2000 12-bit pixels) are essential for practical application
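
    Several of the geometric problems listed above (magnification errors, shifted centres, shearing) can in principle be undone by a single 3-D affine resampling, as in the sketch below; the matrix entries and the toy volume are illustrative assumptions and do not come from the ediScan toolkit.

```python
# Sketch of correcting a reconstructed diagnostic volume with one 3-D affine
# resampling (scaling, shear and centre shift). Values are placeholders.
import numpy as np
from scipy import ndimage

def correct_volume(volume, scale=(1.0, 1.0, 1.0), shear_xy=0.0, shift=(0.0, 0.0, 0.0)):
    """Resample 'volume' so that scaling, a table-top shear in the x-y plane
    and a display-centre shift are undone."""
    # Matrix maps output coordinates to input coordinates (scipy convention).
    M = np.diag(scale).astype(float)
    M[1, 0] = shear_xy            # slice-dependent in-plane shear
    return ndimage.affine_transform(volume, M, offset=shift, order=1)

# Toy volume with a bright cube, "acquired" with 5% magnification error,
# a slight shear and a 3-pixel centre shift.
vol = np.zeros((32, 64, 64))
vol[10:20, 20:40, 20:40] = 1.0
fixed = correct_volume(vol, scale=(1.0, 1.05, 1.05), shear_xy=0.02, shift=(0, 3, 3))
print(fixed.shape, fixed.max())
```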

  5. Optimized 3D Street Scene Reconstruction from Driving Recorder Images

    Directory of Open Access Journals (Sweden)

    Yongjun Zhang

    2015-07-01

    Full Text Available The paper presents an automatic region detection based method to reconstruct street scenes from driving recorder images. The driving recorder in this paper is a dashboard camera that collects images while the motor vehicle is moving. An enormous number of moving vehicles are included in the collected data because the typical recorders are often mounted in the front of moving vehicles and face the forward direction, which can make matching points on vehicles and guardrails unreliable. Because these image data are inexpensive, widely used, and offer extensive shooting coverage, utilizing them can reduce street scene reconstruction and updating costs; we therefore proposed a new method, called the Mask automatic detecting method, to improve the structure-from-motion reconstruction results. Note that we define vehicle and guardrail regions as “mask” in this paper since the features on them should be masked out to avoid poor matches. After removing these feature points with our new method, the camera poses and sparse 3D points are reconstructed from the remaining matches. Our comparative experiments with typical structure from motion (SfM) reconstruction pipelines, such as Photosynth and VisualSFM, demonstrated that the Mask decreased the root-mean-square error (RMSE) of the pairwise matching results, which led to more accurate recovery of the relative camera poses. Removing the Mask features also increased the accuracy of the point clouds by nearly 30%–40% and corrected the tendency of the typical methods to repeatedly reconstruct several buildings when there was only one target building.
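
    The masking idea of this record can be sketched by passing a binary mask to a feature detector so that points inside vehicle or guardrail regions never enter the pairwise matching. The sketch uses OpenCV's ORB detector and a placeholder rectangular mask; the paper derives the mask by automatic region detection and works within a full SfM pipeline.

```python
# Sketch: detect and match features only outside masked ("Mask") regions.
import cv2
import numpy as np

def masked_matches(img1, img2, mask1, mask2, n_features=2000):
    orb = cv2.ORB_create(nfeatures=n_features)
    k1, d1 = orb.detectAndCompute(img1, mask1)   # detection restricted by mask
    k2, d2 = orb.detectAndCompute(img2, mask2)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    return k1, k2, matcher.match(d1, d2)

# Synthetic frames and a mask that blanks out the lower image half, standing in
# for the detected vehicle/guardrail regions.
rng = np.random.default_rng(3)
frame1 = rng.integers(0, 255, (480, 640), dtype=np.uint8)
frame2 = np.roll(frame1, 5, axis=1)              # simulated forward motion
mask = np.full((480, 640), 255, dtype=np.uint8)
mask[240:, :] = 0                                # exclude the "vehicle" region

k1, k2, matches = masked_matches(frame1, frame2, mask, mask)
print(len(matches), "matches outside the masked regions")
```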

  6. Complex adaptation-based LDR image rendering for 3D image reconstruction

    Science.gov (United States)

    Lee, Sung-Hak; Kwon, Hyuk-Ju; Sohng, Kyu-Ik

    2014-07-01

    A low-dynamic tone-compression technique is developed for realistic image rendering that can make three-dimensional (3D) images similar to realistic scenes by overcoming brightness dimming in the 3D display mode. The 3D surround provides varying conditions for image quality, illuminant adaptation, contrast, gamma, color, sharpness, and so on. In general, gain/offset adjustment, gamma compensation, and histogram equalization have performed well in contrast compression; however, as a result of signal saturation and clipping effects, image details are removed and information is lost on bright and dark areas. Thus, an enhanced image mapping technique is proposed based on space-varying image compression. The performance of contrast compression is enhanced with complex adaptation in a 3D viewing surround combining global and local adaptation. Evaluating local image rendering in view of tone and color expression, noise reduction, and edge compensation confirms that the proposed 3D image-mapping model can compensate for the loss of image quality in the 3D mode.
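
    The combined global/local adaptation described above can be illustrated with a small sketch: a global log compression of luminance counters the brightness dimming of the 3-D mode, and a local (surround-based) contrast term restores detail flattened by the compression. The curve shapes and parameter values are arbitrary illustrations, not the proposed model.

```python
# Sketch of global log compression plus local (surround-based) contrast boost.
import numpy as np
from scipy import ndimage

def tone_compress(luminance, global_strength=4.0, local_gain=0.6, sigma=8.0):
    """luminance: float array scaled to [0, 1]."""
    # Global adaptation: a log curve lifts dark regions and compresses highlights.
    g = np.log1p(global_strength * luminance) / np.log1p(global_strength)
    # Local adaptation: add back detail relative to a blurred surround estimate,
    # approximating space-varying compression.
    surround = ndimage.gaussian_filter(g, sigma)
    out = g + local_gain * (g - surround)
    return np.clip(out, 0.0, 1.0)

# Toy luminance image: dark gradient with a small bright patch.
y = np.tile(np.linspace(0.02, 0.4, 256), (256, 1))
y[100:140, 100:140] = 0.9
print(tone_compress(y).mean(), y.mean())   # mean luminance is lifted
```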

  7. CT-image based conformal brachytherapy of breast cancer. The significance of semi-3-D and 3-D treatment planning

    Energy Technology Data Exchange (ETDEWEB)

    Polgar, C.; Major, T.; Somogyi, A.; Takacsi-Nagy, Z.; Mangel, L.C.; Fodor, J.; Nemeth, G. [Orszagos Onkologiai Intezet, Budapest (Hungary). Dept. of Radiotherapy; Forrai, G. [Haynal Imre Univ. of Health Sciences, Budapest (Hungary). Dept. of Radiology; Sulyok, Z. [Orszagos Onkologiai Intezet, Budapest (Hungary). Dept. of Surgery

    2000-03-01

    In 103 patients with T1-2, N0-1 breast cancer the tumor bed was clipped during breast conserving surgery. Fifty-two of them received boost brachytherapy after 46 to 50 Gy teletherapy and 51 patients were treated with brachytherapy alone via flexible implant tubes. Single, double and triple plane implants were used in 6, 89 and 8 cases, respectively. The dose of boost brachytherapy and sole brachytherapy prescribed to dose reference points was 3 times 4.75 Gy and 7 times 5.2 Gy, respectively. The positions of dose reference points varied according to the level (2-D, semi-3-D and 3-D) of treatment planning performed. The treatment planning was based on the 3-D reconstruction of the surgical clips, implant tubes and skin points. In all cases the implantations were planned with a semi-3-D technique aided by a simulator. In 10 cases a recently developed CT-guided 3-D planning system was used. The semi-3-D and 3-D treatment plans were compared to hypothetical 2-D plans using dose-volume histograms and dose non-uniformity ratios. The values of mean central dose, mean skin dose, minimal clip dose, proportion of underdosed clips and mean target surface dose were evaluated. The accuracy of tumor bed localization and the conformity of planning target volume and treated volume were also analyzed in each technique. Results: With the help of conformal semi-3-D and 3-D brachytherapy planning we could define reference dose points, active source positions and dwell times individually. This technique decreased the mean skin dose by 22.2% and reduced the possibility of geographical miss. We could achieve the best conformity between the planning target volume and the treated volume with the CT-image based 3-D treatment planning, at the cost of worse dose homogeneity. The mean treated volume was reduced by 25.1% with semi-3-D planning; however, it was increased by 16.2% with 3-D planning, compared to the 2-D planning. (orig.) [German] In 103 patients with breast carcinoma of stages T1

  8. Femtosecond Laser Direct Write Integration of Multi-Protein Patterns and 3D Microstructures into 3D Glass Microfluidic Devices

    Directory of Open Access Journals (Sweden)

    Daniela Serien

    2018-01-01

    Full Text Available Microfluidic devices and biochips offer miniaturized laboratories for the separation, reaction, and analysis of biochemical materials with high sensitivity and low reagent consumption. The integration of functional or biomimetic elements further functionalizes microfluidic devices for more complex biological studies. The recently proposed ship-in-a-bottle integration based on laser direct writing allows the construction of microcomponents made of photosensitive polymer inside closed microfluidic structures. Here, we expand this technology to integrate proteinaceous two-dimensional (2D) and three-dimensional (3D) microstructures with the aid of photo-induced cross-linking into glass microchannels. The concept is demonstrated with bovine serum albumin and enhanced green fluorescent protein, each mixed with the photoinitiator (Sodium 4-[2-(4-Morpholino)benzoyl-2-dimethylamino]butylbenzenesulfonate). Unlike the polymer integration, fabrication over the entire channel cross-section is challenging. Two proteins are integrated into the same channel to demonstrate multi-protein patterning. Using a 50% w/w glycerol solvent instead of 100% water achieves almost the same fabrication resolution for in-channel fabrication as for on-surface fabrication due to the improved refractive index matching, enabling the fabrication of 3D microstructures. A glycerol-water solvent also reduces the risk of drying samples. We believe this technology can integrate diverse proteins to contribute to the versatility of microfluidics.

  9. 3D imaging of nanomaterials by discrete tomography.

    Science.gov (United States)

    Batenburg, K J; Bals, S; Sijbers, J; Kübel, C; Midgley, P A; Hernandez, J C; Kaiser, U; Encina, E R; Coronado, E A; Van Tendeloo, G

    2009-05-01

    The field of discrete tomography focuses on the reconstruction of samples that consist of only a few different materials. Ideally, a three-dimensional (3D) reconstruction of such a sample should contain only one grey level for each of the compositions in the sample. By exploiting this property in the reconstruction algorithm, either the quality of the reconstruction can be improved significantly, or the number of required projection images can be reduced. The discrete reconstruction typically contains fewer artifacts and does not have to be segmented, as it already contains one grey level for each composition. Recently, a new algorithm, called discrete algebraic reconstruction technique (DART), has been proposed that can be used effectively on experimental electron tomography datasets. In this paper, we propose discrete tomography as a general reconstruction method for electron tomography in materials science. We describe the basic principles of DART and show that it can be applied successfully to three different types of samples, consisting of embedded ErSi(2) nanocrystals, a carbon nanotube grown from a catalyst particle and a single gold nanoparticle, respectively.

  10. 3D imaging of nanomaterials by discrete tomography

    International Nuclear Information System (INIS)

    Batenburg, K.J.; Bals, S.; Sijbers, J.; Kuebel, C.; Midgley, P.A.; Hernandez, J.C.; Kaiser, U.; Encina, E.R.; Coronado, E.A.; Van Tendeloo, G.

    2009-01-01

    The field of discrete tomography focuses on the reconstruction of samples that consist of only a few different materials. Ideally, a three-dimensional (3D) reconstruction of such a sample should contain only one grey level for each of the compositions in the sample. By exploiting this property in the reconstruction algorithm, either the quality of the reconstruction can be improved significantly, or the number of required projection images can be reduced. The discrete reconstruction typically contains fewer artifacts and does not have to be segmented, as it already contains one grey level for each composition. Recently, a new algorithm, called discrete algebraic reconstruction technique (DART), has been proposed that can be used effectively on experimental electron tomography datasets. In this paper, we propose discrete tomography as a general reconstruction method for electron tomography in materials science. We describe the basic principles of DART and show that it can be applied successfully to three different types of samples, consisting of embedded ErSi 2 nanocrystals, a carbon nanotube grown from a catalyst particle and a single gold nanoparticle, respectively.

  11. Model-based normalization for iterative 3D PET image

    International Nuclear Information System (INIS)

    Bai, B.; Li, Q.; Asma, E.; Leahy, R.M.; Holdsworth, C.H.; Chatziioannou, A.; Tai, Y.C.

    2002-01-01

    We describe a method for normalization in 3D PET for use with maximum a posteriori (MAP) or other iterative model-based image reconstruction methods. This approach is an extension of previous factored normalization methods in which we include separate factors for detector sensitivity, geometric response, block effects and deadtime. Since our MAP reconstruction approach already models some of the geometric factors in the forward projection, the normalization factors must be modified to account only for effects not already included in the model. We describe a maximum likelihood approach to joint estimation of the count-rate independent normalization factors, which we apply to data from a uniform cylindrical source. We then compute block-wise and block-profile deadtime correction factors using singles and coincidence data, respectively, from a multiframe cylindrical source. We have applied this method for reconstruction of data from the Concorde microPET P4 scanner. Quantitative evaluation of this method using well-counter measurements of activity in a multicompartment phantom compares favourably with normalization based directly on cylindrical source measurements. (author)

  12. Orthodontic treatment plan changed by 3D images

    International Nuclear Information System (INIS)

    Yordanova, G.; Stanimirov, P.

    2014-01-01

    Clinical application of CBCT is most often indicated in the dental phenomena of impacted teeth, hyperodontia, transposition, ankylosis or root resorption and other pathologies in the maxillofacial area. The goal we set ourselves is to show how the information from 3D images changes the protocol of the orthodontic treatment. As material, we present six of our clinical cases and the change in the treatment plan that was made after analyzing the information carried in the three planes of CBCT. These cases are casuistic in orthodontic practice and require an individual approach during their analysis and the decisions taken. Our discussion concerns the exposure of the impacted teeth, where we need to evaluate their vertical depth and mesiodistal relations to the bone structures. In patients with hyperodontia, the assessment of which of the teeth to extract and which to align into the dental arch is of utmost importance. The conclusion we draw is that this diagnostic information is essential for decisions about the treatment plan. Exact imaging will lead to a better treatment plan and more predictable results. (authors) Key words: CBCT. IMPACTED CANINES. HYPERODONTIA. TRANSPOSITION

  13. Image-Based 3d Reconstruction and Analysis for Orthodontia

    Science.gov (United States)

    Knyaz, V. A.

    2012-08-01

    Among the main tasks of orthodontia are the analysis of dental arches and treatment planning to provide a correct position for every tooth. The treatment plan is based on measurement of tooth parameters and on designing the ideal dental arch curve which the teeth are to form after treatment. The most common technique for moving teeth uses standard brackets placed on the teeth and a wire of given shape clamped by these brackets to produce the forces necessary to move every tooth in a given direction. The disadvantages of the standard bracket technique are the low accuracy of tooth dimension measurements and problems with applying a standard approach to the wide variety of complex orthodontic cases. An image-based technique for orthodontic planning, treatment and documentation aimed at overcoming these disadvantages is proposed. The proposed approach provides accurate measurements of the tooth parameters needed for adequate planning, designing the correct tooth positions and monitoring the treatment process. The developed technique applies photogrammetric means for dental arch 3D model generation, bracket position determination and tooth movement analysis.

  14. Perceptual attributes of crosstalk in 3D images

    NARCIS (Netherlands)

    Seuntiëns, P.J.H.; Meesters, L.M.J.; IJsselsteijn, W.A.

    2005-01-01

    Nowadays, crosstalk is probably one of the most annoying distortions in 3D displays. So far, display designers still have a relative lack of knowledge about the relevant subjective attributes of crosstalk and how they are combined in an overall 3D viewing experience model. The aim of the current

  15. Deformable M-Reps for 3D Medical Image Segmentation

    Science.gov (United States)

    Pizer, Stephen M.; Fletcher, P. Thomas; Joshi, Sarang; Thall, Andrew; Chen, James Z.; Fridman, Yonatan; Fritsch, Daniel S.; Gash, Graham; Glotzer, John M.; Jiroutek, Michael R.; Lu, Conglin; Muller, Keith E.; Tracton, Gregg; Yushkevich, Paul; Chaney, Edward L.

    2013-01-01

    M-reps (formerly called DSLs) are a multiscale medial means for modeling and rendering 3D solid geometry. They are particularly well suited to model anatomic objects and in particular to capture prior geometric information effectively in deformable models segmentation approaches. The representation is based on figural models, which define objects at coarse scale by a hierarchy of figures – each figure generally a slab representing a solid region and its boundary simultaneously. This paper focuses on the use of single figure models to segment objects of relatively simple structure. A single figure is a sheet of medial atoms, which is interpolated from the model formed by a net, i.e., a mesh or chain, of medial atoms (hence the name m-reps), each atom modeling a solid region via not only a position and a width but also a local figural frame giving figural directions and an object angle between opposing, corresponding positions on the boundary implied by the m-rep. The special capability of an m-rep is to provide spatial and orientational correspondence between an object in two different states of deformation. This ability is central to effective measurement of both geometric typicality and geometry to image match, the two terms of the objective function optimized in segmentation by deformable models. The other ability of m-reps central to effective segmentation is their ability to support segmentation at multiple levels of scale, with successively finer precision. Objects modeled by single figures are segmented first by a similarity transform augmented by object elongation, then by adjustment of each medial atom, and finally by displacing a dense sampling of the m-rep implied boundary. While these models and approaches also exist in 2D, we focus on 3D objects. The segmentation of the kidney from CT and the hippocampus from MRI serve as the major examples in this paper. The accuracy of segmentation as compared to manual, slice-by-slice segmentation is reported.

  16. Embedded sensing: integrating sensors in 3-D printed structures

    Directory of Open Access Journals (Sweden)

    A. Dijkshoorn

    2018-03-01

    Full Text Available Current additive manufacturing allows for the implementation of electrically interrogated 3-D printed sensors. In this contribution various technologies, sensing principles and applications are discussed. We will give both an overview of some of the sensors presented in literature as well as some of our own recent work on 3-D printed sensors. The 3-D printing methods discussed include fused deposition modelling (FDM), using multi-material printing, and poly-jetting. Materials discussed are mainly thermoplastics and include thermoplastic polyurethane (TPU), both un-doped as well as doped with carbon black, polylactic acid (PLA), and conductive inks. The sensors discussed are based on biopotential sensing, capacitive sensing and resistive sensing with applications in surface electromyography (sEMG), and mechanical and tactile sensing. As these sensors are based on plastics they are in general flexible and therefore open new possibilities for sensing in soft structures, e.g. as used in soft robotics. At the same time they show many of the characteristics of plastics like hysteresis, drift and non-linearity. We will argue that 3-D printing of embedded sensors opens up exciting new possibilities but also that these sensors require us to rethink how to exploit non-ideal sensors.

  17. Embedded sensing : Integrating sensors in 3-D printed structures

    NARCIS (Netherlands)

    Dijkshoorn, Alexander; Werkman, Patrick; Welleweerd, Marcel; Wolterink, Gerhard Jan Willem; Eijking, Bram; Delamare, John; Sanders, Remco; Krijnen, Gijs J.M.

    2018-01-01

    Current additive manufacturing allows for the implementation of electrically interrogated 3-D printed sensors. In this contribution various technologies, sensing principles and applications are discussed. We will give both an overview of some of the sensors presented in literature as well as some of

  18. INTEGRATED SFM TECHNIQUES USING DATA SET FROM GOOGLE EARTH 3D MODEL AND FROM STREET LEVEL

    Directory of Open Access Journals (Sweden)

    L. Inzerillo

    2017-08-01

    Full Text Available Structure from motion (SfM) represents a widespread photogrammetric method that uses photogrammetric rules to carry out a 3D model from a photo data set collection. Some complex ancient buildings, such as cathedrals, theatres or castles, need the data set realized from street level to be integrated with a UAV one in order to obtain the 3D roof reconstruction. Nevertheless, the use of UAVs is strongly limited by government rules. In recent years, Google Earth (GE) has been enriched with 3D models of sites on the Earth. For this reason, it seemed convenient to test the potential offered by GE for extracting a data set that replaces the UAV function and completes the aerial building data set, using screen images of high-resolution 3D models. Users can take unlimited “aerial photos” of a scene while flying around in GE at any viewing angle and altitude. The challenge is to verify the metric reliability of the SfM model carried out with an integrated data set (the one from street level and the one from GE) aimed at replacing UAV use in an urban context. This model is called the integrated GE SfM model (i-GESfM). In this paper a case study is presented: the Cathedral of Palermo.

  19. Lensfree diffractive tomography for the imaging of 3D cell cultures

    Science.gov (United States)

    Berdeu, Anthony; Momey, Fabien; Dinten, Jean-Marc; Gidrol, Xavier; Picollet-D'hahan, Nathalie; Allier, Cédric

    2017-02-01

    New microscopes are needed to help reach the full potential of 3D organoid culture studies by gathering large quantitative and systematic data over extended periods of time while preserving the integrity of the living sample. In order to reconstruct large volumes while preserving the ability to catch every single cell, we propose new imaging platforms based on lens-free microscopy, a technique that addresses these needs in the context of 2D cell culture, providing label-free and non-phototoxic acquisition of large datasets. We built lens-free diffractive tomography setups performing multi-angle acquisitions of 3D organoid cultures embedded in Matrigel and developed dedicated 3D holographic reconstruction algorithms based on the Fourier diffraction theorem. Nonetheless, holographic setups do not record the phase of the incident wave front, and biological samples in a Petri dish strongly limit the angular coverage. These limitations introduce numerous artefacts in the sample reconstruction. We developed several methods to overcome them, such as multi-wavelength imaging or iterative phase retrieval. The most promising technique currently developed is based on a regularised inverse problem approach applied directly to the 3D volume to be reconstructed. 3D reconstructions were performed on several complex samples, such as 3D networks or spheroids embedded in capsules, with large reconstructed volumes up to 25 mm³ while still being able to identify single cells. To our knowledge, this is the first time that such an inverse problem approach has been implemented in the context of lens-free diffractive tomography, enabling the reconstruction of large, fully 3D volumes of unstained biological samples.
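
    As an illustration of the iterative phase retrieval mentioned above (a generic sketch under simplifying assumptions, not the authors' regularised inverse-problem reconstruction), a Gerchberg-Saxton-style loop between the detector and sample planes can be written with angular-spectrum propagation; the absorption-only object constraint and all function and parameter names are assumptions:

        import numpy as np

        def angular_spectrum(field, wavelength, dx, z):
            # Propagate a complex field over a distance z (free-space transfer function).
            ny, nx = field.shape
            fx = np.fft.fftfreq(nx, d=dx)
            fy = np.fft.fftfreq(ny, d=dx)
            FX, FY = np.meshgrid(fx, fy)
            arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
            kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
            H = np.where(arg > 0.0, np.exp(1j * kz * z), 0.0)  # drop evanescent waves
            return np.fft.ifft2(np.fft.fft2(field) * H)

        def phase_retrieval(hologram_amplitude, wavelength, dx, z, n_iter=50):
            # Alternate between detector and sample planes, enforcing the measured amplitude
            # at the detector and a weak absorption-only constraint at the sample.
            field_det = hologram_amplitude.astype(complex)
            for _ in range(n_iter):
                field_obj = angular_spectrum(field_det, wavelength, dx, -z)
                field_obj = np.minimum(np.abs(field_obj), 1.0) * np.exp(1j * np.angle(field_obj))
                field_det = angular_spectrum(field_obj, wavelength, dx, z)
                field_det = hologram_amplitude * np.exp(1j * np.angle(field_det))
            return field_obj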

  20. Rainbow Particle Imaging Velocimetry for Dense 3D Fluid Velocity Imaging

    KAUST Repository

    Xiong, Jinhui

    2017-04-11

    Despite significant recent progress, dense, time-resolved imaging of complex, non-stationary 3D flow velocities remains an elusive goal. In this work we tackle this problem by extending an established 2D method, Particle Imaging Velocimetry, to three dimensions by encoding depth into color. The encoding is achieved by illuminating the flow volume with a continuum of light planes (a “rainbow”), such that each depth corresponds to a specific wavelength of light. A diffractive component in the camera optics ensures that all planes are in focus simultaneously. For reconstruction, we derive an image formation model for recovering stationary 3D particle positions. 3D velocity estimation is achieved with a variant of 3D optical flow that accounts for both physical constraints as well as the rainbow image formation model. We evaluate our method with both simulations and an experimental prototype setup.
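
    A minimal sketch of the depth-from-colour idea (hue mapped linearly to depth) is given below; the linear mapping, the hue range and the function name are assumptions, since the actual system relies on a calibrated wavelength-to-depth relation and a full image formation model:

        import colorsys
        import numpy as np

        def hue_to_depth(rgb, depth_min, depth_max, hue_min=0.0, hue_max=0.7):
            # rgb: (..., 3) array with values in [0, 1]; returns one depth per pixel/particle.
            rgb = np.asarray(rgb, dtype=float)
            flat = rgb.reshape(-1, 3)
            hue = np.array([colorsys.rgb_to_hsv(*px)[0] for px in flat])
            t = np.clip((hue - hue_min) / (hue_max - hue_min), 0.0, 1.0)
            return (depth_min + t * (depth_max - depth_min)).reshape(rgb.shape[:-1])

        # Example: a greenish particle lands roughly in the middle of the illuminated volume.
        print(hue_to_depth([[0.1, 0.9, 0.2]], depth_min=0.0, depth_max=10.0))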

  1. Creation of computerized 3D MRI-integrated atlases of the human basal ganglia and thalamus

    Directory of Open Access Journals (Sweden)

    Abbas F. Sadikot

    2011-09-01

    Full Text Available Functional brain imaging and neurosurgery in subcortical areas often requires visualization of brain nuclei beyond the resolution of current Magnetic Resonance Imaging (MRI) methods. We present techniques used to create: (1) a lower-resolution 3D atlas, based on the Schaltenbrand and Wahren print atlas, which was integrated into a stereotactic neurosurgery planning and visualization platform (VIPER); and (2) a higher-resolution 3D atlas derived from a single set of manually segmented histological slices containing nuclei of the basal ganglia, thalamus, basal forebrain and medial temporal lobe. Both atlases were integrated into a canonical MRI (Colin27) from a young male participant by manually identifying homologous landmarks. The lower resolution atlas was then warped to fit the MRI based on the identified landmarks. A pseudo-MRI representation of the high-resolution atlas was created, and a nonlinear transformation was calculated in order to match the atlas to the template MRI. The atlas can then be warped to match the anatomy of Parkinson’s disease surgical candidates by using 3D automated nonlinear deformation methods. By way of functional validation of the atlas, the location of the sensory thalamus was correlated with stereotactic intraoperative physiological data. The positions of subthalamic electrodes in patients with Parkinson’s disease were also evaluated in the atlas-integrated MRI space. Finally, probabilistic maps of subthalamic stimulation electrodes were developed, in order to allow group analysis of the location of contacts associated with the best motor outcomes. We have therefore developed, and are continuing to validate, a high-resolution computerized MRI-integrated 3D histological atlas, which is useful in functional neurosurgery, and for functional and anatomical studies of the human basal ganglia, thalamus and basal forebrain.
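
    The landmark-based integration step described above can be illustrated by a least-squares affine fit between homologous landmark sets; this is a simplified stand-in for the nonlinear warps actually used, and the function name and 4x4 convention are assumptions:

        import numpy as np

        def fit_affine_from_landmarks(atlas_pts, mri_pts):
            # atlas_pts, mri_pts: (N, 3) arrays of homologous landmarks (N >= 4).
            # Returns a 4x4 homogeneous transform mapping atlas space to MRI space.
            atlas_pts = np.asarray(atlas_pts, dtype=float)
            mri_pts = np.asarray(mri_pts, dtype=float)
            n = atlas_pts.shape[0]
            A = np.hstack([atlas_pts, np.ones((n, 1))])      # (N, 4)
            X, *_ = np.linalg.lstsq(A, mri_pts, rcond=None)  # (4, 3) least-squares solution
            T = np.eye(4)
            T[:3, :4] = X.T
            return T

        # Usage: warp any atlas coordinate p_atlas into the template MRI space:
        # p_mri = (T @ np.append(p_atlas, 1.0))[:3]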

  2. 3D visualization of integrated ground penetrating radar data and EM-61 data to determine buried objects and their characteristics

    International Nuclear Information System (INIS)

    Kadioğlu, Selma; Daniels, Jeffrey J

    2008-01-01

    This paper is based on an interactive three-dimensional (3D) visualization of two-dimensional (2D) ground penetrating radar (GPR) data and their integration with electromagnetic induction (EMI) using EM-61 data in a 3D volume. This method was used to locate and identify near-surface buried old industrial remains with shape, depth and type (metallic/non-metallic) in a brownfield site. The aim of the study is to illustrate a new approach to integrating two data sets in a 3D image for monitoring and interpretation of buried remains, and this paper methodically indicates the appropriate amplitude–colour and opacity function constructions to highlight buried remains in a transparent 3D view. The results showed that the interactive interpretation of the integrated 3D visualization was done using generated transparent 3D sub-blocks of the GPR data set that highlighted individual anomalies in true locations. Colour assignment and the formulation of opacity for the data sets were the keys to the integrated 3D visualization and interpretation. This new visualization provided an optimum visual comparison and an interpretation of the complex data sets to identify and differentiate the metallic and non-metallic remains and to control the true interpretation on exact locations with depth. Therefore, the integrated 3D visualization of two data sets allowed more successful identification of the buried remains.
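
    The amplitude-colour and opacity construction can be illustrated with a simple transfer function that makes weak returns transparent; the colour ramp and threshold value below are placeholder assumptions, not the settings used by the authors:

        import numpy as np

        def amplitude_to_rgba(amplitude, threshold=0.3):
            # Map normalised GPR/EMI amplitudes in [0, 1] to RGBA for transparent 3D rendering.
            # Voxels below `threshold` become fully transparent so only strong reflectors
            # (candidate buried objects) stay visible in the 3D block.
            a = np.clip(np.asarray(amplitude, dtype=float), 0.0, 1.0)
            rgba = np.zeros(a.shape + (4,))
            rgba[..., 0] = a                       # red ramps up with amplitude
            rgba[..., 2] = 1.0 - a                 # blue for weak returns
            rgba[..., 3] = np.where(a < threshold, 0.0, (a - threshold) / (1.0 - threshold))
            return rgba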

  3. Automated segmentation and geometrical modeling of the tricuspid aortic valve in 3D echocardiographic images.

    Science.gov (United States)

    Pouch, Alison M; Wang, Hongzhi; Takabe, Manabu; Jackson, Benjamin M; Sehgal, Chandra M; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2013-01-01

    The aortic valve has been described with variable anatomical definitions, and the consistency of 2D manual measurement of valve dimensions in medical image data has been questionable. Given the importance of image-based morphological assessment in the diagnosis and surgical treatment of aortic valve disease, there is considerable need to develop a standardized framework for 3D valve segmentation and shape representation. Towards this goal, this work integrates template-based medial modeling and multi-atlas label fusion techniques to automatically delineate and quantitatively describe aortic leaflet geometry in 3D echocardiographic (3DE) images, a challenging task that has been explored only to a limited extent. The method makes use of expert knowledge of aortic leaflet image appearance, generates segmentations with consistent topology, and establishes a shape-based coordinate system on the aortic leaflets that enables standardized automated measurements. In this study, the algorithm is evaluated on 11 3DE images of normal human aortic leaflets acquired at mid systole. The clinical relevance of the method is its ability to capture leaflet geometry in 3DE image data with minimal user interaction while producing consistent measurements of 3D aortic leaflet geometry.
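
    As a simplified illustration of the multi-atlas label fusion component, a per-voxel majority vote over atlas-propagated label maps is sketched below; real label fusion typically weights votes by local image similarity, and the function name is an assumption:

        import numpy as np

        def majority_vote_fusion(candidate_segmentations):
            # candidate_segmentations: list of integer label volumes of identical shape,
            # one per registered atlas. Returns the per-voxel most frequent label.
            stack = np.stack(candidate_segmentations, axis=0)          # (n_atlases, ...)
            labels = np.unique(stack)
            votes = np.stack([(stack == lab).sum(axis=0) for lab in labels], axis=0)
            return labels[np.argmax(votes, axis=0)]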

  4. GammaModeler 3-D gamma-ray imaging technology

    International Nuclear Information System (INIS)

    2000-01-01

    The 3-D GammaModeler™ system was used to survey a portion of the facility and provide 3-D visual and radiation representation of contaminated equipment located within the facility. The 3-D GammaModeler™ system software was used to deconvolve extended sources into a series of point sources, locate the positions of these sources in space and calculate the 30 cm dose rates for each of these sources. Localization of the sources in three dimensions provides information on source locations interior to the visual objects and provides a better estimate of the source intensities. The three-dimensional representation of the objects can be made transparent in order to visualize sources located within the objects. Positional knowledge of all the sources can be used to calculate a map of the radiation in the canyon. The use of 3-D visual and gamma-ray information supports improved planning and decision-making, and aids in communications with regulators and stakeholders.
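
    Once extended sources have been deconvolved into point sources, a dose-rate estimate at a given distance follows from an inverse-square summation. The sketch below assumes a generic dose-rate constant and unit system and is not the GammaModeler implementation:

        import numpy as np

        def dose_rate_at_point(sources, point, gamma_constant):
            # sources: list of (activity, (x, y, z)) tuples; units must be consistent
            # with `gamma_constant`, which depends on the nuclide (assumed input).
            point = np.asarray(point, dtype=float)
            total = 0.0
            for activity, pos in sources:
                r2 = np.sum((point - np.asarray(pos, dtype=float)) ** 2)
                total += gamma_constant * activity / r2   # inverse-square contribution
            return total

        # e.g. dose rate 30 cm from a single localised source along x:
        # dose_rate_at_point([(1.0e9, (0, 0, 0))], (0.3, 0, 0), gamma_constant=1.0e-13)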

  5. Image Reconstruction Based Modeling of 3D Textile Composite (Postprint)

    National Research Council Canada - National Science Library

    Zhou, Eric; Mollenhauer, David; Iarve, Endel

    2007-01-01

    ... joints, near-net shape processing, etc. To fully understand the mechanical behavior of 3-D textile composites, it is essential to perform analyses to predict effective material properties and damage initiation and growth...

  6. Integration of 3D geological modeling and gravity surveys for geothermal prospection in an Alpine region

    Science.gov (United States)

    Guglielmetti, L.; Comina, C.; Abdelfettah, Y.; Schill, E.; Mandrone, G.

    2013-11-01

    Thermal sources are common manifestations of geothermal energy resources in Alpine regions. The up-flow of the fluid is well-known to be often linked to cross-cutting fault zones providing a significant volume of fractures. Since conventional exploration methods are challenging in such areas of high topography and complicated logistics, 3D geological modeling based on structural investigation becomes a useful tool for assessing the overall geology of the investigated sites. Geological modeling alone is, however, less effective if not integrated with deep subsurface investigations that could provide a first order information on geological boundaries and an imaging of geological structures. With this aim, in the present paper the combined use of 3D geological modeling and gravity surveys for geothermal prospection of a hydrothermal area in the western Alps was carried out on two sites located in the Argentera Massif (NW Italy). The geothermal activity of the area is revealed by thermal anomalies with surface evidences, such as hot springs, at temperatures up to 70 °C. Integration of gravity measurements and 3D modeling investigates the potential of this approach in the context of geothermal exploration in Alpine regions where a very complex geological and structural setting is expected. The approach used in the present work is based on the comparison between the observed gravity and the gravity effect of the 3D geological models, in order to enhance local effects related to the geothermal system. It is shown that a correct integration of 3D modeling and detailed geophysical survey could allow a better characterization of geological structures involved in geothermal fluids circulation. Particularly, gravity inversions have successfully delineated the continuity in depth of low density structures, such as faults and fractured bands observed at the surface, and have been of great help in improving the overall geological model.
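
    The comparison between observed gravity and the gravity effect of a model can be illustrated with the textbook forward response of a buried sphere of anomalous density; this is a deliberately simple stand-in for the full 3D forward modelling and inversion used in the paper:

        import numpy as np

        G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

        def gz_buried_sphere(x_obs, x0, depth, radius, density_contrast):
            # Vertical gravity anomaly (m/s^2) along a surface profile x_obs for a sphere
            # centred at horizontal position x0 and depth `depth` (metres),
            # with density contrast in kg/m^3.
            x_obs = np.asarray(x_obs, dtype=float)
            mass = density_contrast * 4.0 / 3.0 * np.pi * radius ** 3
            r = np.sqrt((x_obs - x0) ** 2 + depth ** 2)
            return G * mass * depth / r ** 3

        # Example: low-density (fractured) sphere, 500 m radius, 1 km deep
        # profile = np.linspace(-5000, 5000, 201)
        # anomaly = gz_buried_sphere(profile, 0.0, 1000.0, 500.0, -300.0)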

  7. 3-D Imaging Systems for Agricultural Applications—A Review

    Directory of Open Access Journals (Sweden)

    Manuel Vázquez-Arellano

    2016-04-01

    Full Text Available Increasing the efficiency of resources through automation of agriculture requires more information about the production process, as well as process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state-of-the-art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have to provide information about environmental structures based on the recent progress in optical 3-D sensors. The structure of this research consists of an overview of the different optical 3-D vision techniques, based on the basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  8. An FPGA Implementation of a Robot Control System with an Integrated 3D Vision System

    Directory of Open Access Journals (Sweden)

    Yi-Ting Chen

    2015-05-01

    Full Text Available Robot decision making and motion control are commonly based on visual information in various applications. Position-based visual servo is a technique for vision-based robot control, which operates in the 3D workspace, uses real-time image processing to perform tasks of feature extraction, and returns the pose of the object for positioning control. In order to handle the computational burden of the vision sensor feedback, we design an FPGA-based motion-vision integrated system that employs dedicated hardware circuits for vision processing and motion control functions. This research conducts a preliminary study to explore the integration of 3D vision and robot motion control system design based on a single field programmable gate array (FPGA) chip. The implemented motion-vision embedded system performs the following functions: filtering, image statistics, binary morphology, binary object analysis, object 3D position calculation, robot inverse kinematics, velocity profile generation, feedback counting, and multi-axis position feedback control.
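
    The velocity profile generation block can be sketched in software as a trapezoidal profile generator; the FPGA version would be a hardware pipeline, and the parameter names and fixed control tick here are assumptions:

        import numpy as np

        def trapezoidal_profile(distance, v_max, a_max, dt=0.001):
            # Velocity samples (one per control tick dt) for a single-axis move of length
            # `distance`. If the move is too short to reach v_max, the profile degenerates
            # into a triangle.
            d_acc = v_max ** 2 / (2.0 * a_max)            # distance needed to reach v_max
            if 2.0 * d_acc > distance:                    # triangular profile
                v_peak = np.sqrt(distance * a_max)
                t_acc = v_peak / a_max
                t_flat = 0.0
            else:                                         # full trapezoid
                v_peak = v_max
                t_acc = v_max / a_max
                t_flat = (distance - 2.0 * d_acc) / v_max
            t_total = 2.0 * t_acc + t_flat
            t = np.arange(0.0, t_total, dt)
            v = np.minimum.reduce([a_max * t,                      # acceleration ramp
                                   np.full_like(t, v_peak),        # cruise
                                   a_max * (t_total - t)])         # deceleration ramp
            return np.clip(v, 0.0, None)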

  9. The value of the 3D CT imaging in diagnosis of lumbar spondylolysis

    International Nuclear Information System (INIS)

    Krupski, W.; Paslawski, M.; Zlomaniec, J.; Fatyga, M.; Majcher, P.

    2003-01-01

    A frequent cause of low back pain is lumbar spondylolysis and spondylolisthesis. The purpose of the study was to assess the value of three-dimensional CT imaging in the diagnosis of lumbar spondylolysis. The material comprised 22 patients complaining of low back pain, in whom lateral radiograms, axial CT scans, MPR and 3D reconstructions were performed. The presence of spondylolysis, spondylolisthesis, stenosis of the spinal canal and stenosis of the intervertebral foramina was assessed. The differences in diagnostic value between the analysed imaging modalities in revealing spondylolysis, spondylolisthesis and narrowing of the intervertebral foramina were statistically highly significant. The highest sensitivity in recognition of these pathologies was observed for 3D reconstruction. The 3D reconstructions were also useful in the assessment of spinal canal stenosis and in revealing degenerative changes, but the increased number of diagnosed pathologies was not statistically significant compared with axial CT sections. Spondylolysis was diagnosed in 22 patients based on 3D reconstructions, in 14 patients on MPR reconstructions, in 18 patients on axial sections and only in 8 cases on lateral radiograms. Spondylolisthesis was visible on lateral radiograms in 21 patients, on axial scans in 12 patients, and in 22 cases on both MPR and 3D reconstruction. Stenosis of the spinal canal was found on lateral radiograms in 2 patients, on MPR reconstruction in 4 cases, and in 7 patients on 3D reconstruction. Intervertebral foramen stenosis was present in 5 patients based on MPR reconstruction and in 17 on spatial images. Spatial 3D CT reconstructions are superior to lateral radiograms, axial CT sections and MPR reconstruction in revealing spondylolysis, spondylolisthesis and stenosis of the intervertebral foramina. They are useful in the assessment of spinal canal narrowing and the evaluation of degenerative changes. In our opinion 3D CT reconstruction projected from the inside of the

  10. A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images

    OpenAIRE

    Sturm , Peter; Maybank , Steve

    1999-01-01

    International audience; We present an approach for 3D reconstruction of objects from a single image. Obviously, constraints on the 3D structure are needed to perform this task. Our approach is based on user-provided coplanarity, perpendicularity and parallelism constraints. These are used to calibrate the image and perform 3D reconstruction. The method is described in detail and results are provided.

  11. A hyperspectral fluorescence system for 3D in vivo optical imaging

    International Nuclear Information System (INIS)

    Zavattini, Guido; Vecchi, Stefania; Mitchell, Gregory; Weisser, Ulli; Leahy, Richard M; Pichler, Bernd J; Smith, Desmond J; Cherry, Simon R

    2006-01-01

    In vivo optical instruments designed for small animal imaging generally measure the integrated light intensity across a broad band of wavelengths, or make measurements at a small number of selected wavelengths, and primarily use any spectral information to characterize and remove autofluorescence. We have developed a flexible hyperspectral imaging instrument to explore the use of spectral information to determine the 3D source location for in vivo fluorescence imaging applications. We hypothesize that the spectral distribution of the emitted fluorescence signal can be used to provide additional information to 3D reconstruction algorithms being developed for optical tomography. To test this hypothesis, we have designed and built an in vivo hyperspectral imaging system, which can acquire data from 400 to 1000 nm with 3 nm spectral resolution and which is flexible enough to allow the testing of a wide range of illumination and detection geometries. It also has the capability to generate a surface contour map of the animal for input into the reconstruction process. In this paper, we present the design of the system, demonstrate the depth dependence of the spectral signal in phantoms and show the ability to reconstruct 3D source locations using the spectral data in a simple phantom. We also characterize the basic performance of the imaging system

  12. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    Science.gov (United States)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

    This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real time using 3D digital image correlation, with two examples for medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the computations of the integral pixel search. Experiments were carried out and the results indicated that the new method improved the computational efficiency by about 4-10 times in comparison with the traditional DIC method. The system was aimed at orthognathic surgery navigation in order to track the maxilla segment after a LeFort I osteotomy. Experiments showed that the noise for a static point was at the level of 10^-3 mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin surface shape evaluation of a hand during finger stretching exercises, which indicated a great potential for tracking muscle and skin movements.
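
    Recovering the 6-degree-of-freedom pose change of a rigid segment from its tracked 3D marker positions is commonly done with the Kabsch/SVD method sketched below; this is a generic formulation, not the authors' accelerated DIC code:

        import numpy as np

        def rigid_transform_3d(P, Q):
            # Best-fit rotation R and translation t mapping marker set P onto Q,
            # i.e. Q ~= P @ R.T + t, for (N, 3) arrays of corresponding markers.
            P = np.asarray(P, dtype=float)
            Q = np.asarray(Q, dtype=float)
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            t = cQ - R @ cP
            return R, t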

  13. Multimodal Registration and Fusion for 3D Thermal Imaging

    Directory of Open Access Journals (Sweden)

    Moulay A. Akhloufi

    2015-01-01

    Full Text Available 3D vision is an area of computer vision that has attracted a lot of research interest and has been widely studied. In recent years we have witnessed an increasing interest from the industrial community. This interest is driven by the recent advances in 3D technologies, which enable high precision measurements at an affordable cost. With 3D vision techniques we can conduct advanced inspections of manufactured parts and metrology analysis. However, we are not able to detect subsurface defects. This kind of detection is achieved by other techniques, like infrared thermography. In this work, we present a new registration framework for 3D and thermal infrared multimodal fusion. The resulting fused data can be used for advanced 3D inspection in Nondestructive Testing and Evaluation (NDT&E) applications. The fusion permits simultaneous visible surface and subsurface inspections to be conducted in the same process. Experimental tests were conducted with different materials. The obtained results are promising and show how these new techniques can be used efficiently in a combined NDT&E-Metrology analysis of manufactured parts, in areas such as aerospace and automotive.

  14. 3D visualisation of the middle ear and adjacent structures using reconstructed multi-slice CT datasets, correlating 3D images and virtual endoscopy to the 2D cross-sectional images

    International Nuclear Information System (INIS)

    Rodt, T.; Ratiu, P.; Kacher, D.F.; Anderson, M.; Jolesz, F.A.; Kikinis, R.; Becker, H.; Bartling, S.

    2002-01-01

    The 3D imaging of the middle ear facilitates better understanding of the patient's anatomy. Cross-sectional slices, however, often allow a more accurate evaluation of anatomical structures, as some detail may be lost through post-processing. In order to demonstrate the advantages of combining both approaches, we performed computed tomography (CT) imaging in two normal and 15 different pathological cases, and the 3D models were correlated to the cross-sectional CT slices. Reconstructed CT datasets were acquired by multi-slice CT. Post-processing was performed using the in-house software "3D Slicer", applying thresholding and manual segmentation. 3D models of the individual anatomical structures were generated and displayed in different colours. The display of relevant anatomical and pathological structures was evaluated in the greyscale 2D slices, 3D images, and the 2D slices showing the segmented 2D anatomy in different colours for each structure. Correlating 2D slices to the 3D models and virtual endoscopy helps to combine the advantages of each method. As generating 3D models can be extremely time-consuming, this approach can be a clinically applicable way of gaining a 3D understanding of the patient's anatomy by using models as a reference. Furthermore, it can help radiologists and otolaryngologists evaluate the 2D slices by adding the correct 3D information that would otherwise have to be mentally integrated. The method can be applied to radiological diagnosis, surgical planning and, especially, to teaching. (orig.)

  15. Contributions in compression of 3D medical images and 2D images; Contributions en compression d'images medicales 3D et d'images naturelles 2D

    Energy Technology Data Exchange (ETDEWEB)

    Gaudeau, Y

    2006-12-15

    The huge amounts of volumetric data generated by current medical imaging techniques, in the context of an increasing demand for long-term archiving solutions, as well as the rapid development of distant radiology, make the use of compression inevitable. Indeed, while the medical community has sided until now with lossless compression, most applications suffer from compression ratios which are too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. Thus, we propose a new lossy coding scheme based on the 3D (three-dimensional) Wavelet Transform and Dead Zone Lattice Vector Quantization 3D (DZLVQ) for medical images. Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which enables correlations between neighbouring elementary volumes to be taken into account. At high compression ratios, we show that it can outperform visually and numerically the best existing methods. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on the CAD (Computer-Aided Decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists in the complexity reduction of our compression scheme. The first allocation, dedicated to 2D DZLVQ, uses an exponential of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)
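
    The dead-zone idea at the heart of the quantiser can be illustrated with a scalar dead-zone quantiser; the thesis's DZLVQ operates on lattice vectors of wavelet coefficients, so the sketch below (with assumed step and dead-zone parameters) only shows the principle:

        import numpy as np

        def dead_zone_quantize(coeffs, step, dead_zone_scale=2.0):
            # Coefficients whose magnitude falls inside the enlarged zero bin are set to
            # zero, which is where most of the bit-rate saving comes from.
            c = np.asarray(coeffs, dtype=float)
            half_dz = dead_zone_scale * step / 2.0
            q = np.sign(c) * np.floor((np.abs(c) - half_dz) / step + 1.0)
            q[np.abs(c) < half_dz] = 0.0
            return q

        def dead_zone_dequantize(q, step, dead_zone_scale=2.0):
            # Mid-point reconstruction of each non-zero quantisation bin.
            half_dz = dead_zone_scale * step / 2.0
            return np.sign(q) * (half_dz + (np.abs(q) - 0.5) * step) * (q != 0)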

  16. 3D Printing-Based Integrated Water Quality Sensing System

    Directory of Open Access Journals (Sweden)

    Muinul Banna

    2017-06-01

    Full Text Available The online and accurate monitoring of drinking water supply networks is critically in demand to rapidly detect the accidental or deliberate contamination of drinking water. At present, miniaturized water quality monitoring sensors developed in the laboratories are usually tested under ambient pressure and steady-state flow conditions; however, in Water Distribution Systems (WDS), both the pressure and the flowrate fluctuate. In this paper, an interface is designed and fabricated using additive manufacturing or 3D printing technology—material extrusion (trade name: fused deposition modeling, FDM) and material jetting—to provide a conduit for miniaturized sensors for continuous online water quality monitoring. The interface is designed to meet two main criteria: low pressure at the inlet of the sensors and a low flowrate to minimize the water bled (i.e., leakage), despite varying pressure from the WDS. To meet the above criteria, a two-dimensional computational fluid dynamics model was used to optimize the geometry of the channel. The 3D printed interface, with the embedded miniaturized pH and conductivity sensors, was then tested at different temperatures and flowrates. The results show that the response of the pH sensor is independent of the flowrate and temperature. As for the conductivity sensor, the flowrate and temperature affect only the readings at a very low conductivity (4 µS/cm) and high flowrates (30 mL/min), and at a very high conductivity (460 µS/cm), respectively.

  17. Quantitative 3-D imaging topogrammetry for telemedicine applications

    Science.gov (United States)

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to the serious considerations of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival data base of 'normal' shapes. The ability to generate 'topogrames' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D data base, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' to work as a surgeon's tireless assistants becomes imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with

  18. Integration method of 3D MR spectroscopy into treatment planning system for glioblastoma IMRT dose painting with integrated simultaneous boost

    International Nuclear Information System (INIS)

    Ken, Soléakhéna; Cassol, Emmanuelle; Delannes, Martine; Celsis, Pierre; Cohen-Jonathan, Elizabeth Moyal; Laprie, Anne; Vieillevigne, Laure; Franceries, Xavier; Simon, Luc; Supper, Caroline; Lotterie, Jean-Albert; Filleron, Thomas; Lubrano, Vincent; Berry, Isabelle

    2013-01-01

    To integrate 3D MR spectroscopy imaging (MRSI) in the treatment planning system (TPS) for glioblastoma dose painting to guide simultaneous integrated boost (SIB) in intensity-modulated radiation therapy (IMRT). For sixteen glioblastoma patients, we have simulated three types of dosimetry plans: one conventional 60-Gy plan in 3D conformal radiotherapy (3D-CRT), one 60-Gy plan in IMRT and one 72-Gy plan in SIB-IMRT. All sixteen MRSI metabolic maps were integrated into the TPS, using normalization with color-space conversion and threshold-based segmentation. The fusion between the metabolic maps and the planning CT scans was assessed. Dosimetry comparisons were performed between the different plans of 60-Gy 3D-CRT, 60-Gy IMRT and 72-Gy SIB-IMRT, the last plan being targeted on MRSI abnormalities and contrast enhancement (CE). Fusion assessment was performed for 160 transformations. It resulted in maximum differences <1.00 mm for translation parameters and ≤1.15° for rotation. Dosimetry plans of 72-Gy SIB-IMRT and 60-Gy IMRT showed a significantly decreased maximum dose to the brainstem (44.00 and 44.30 vs. 57.01 Gy) and decreased high dose-volumes to normal brain (19 and 20 vs. 23% and 7 and 7 vs. 12%) compared to 60-Gy 3D-CRT (p < 0.05). Delivering standard doses to the conventional target and higher doses to new target volumes characterized by MRSI and CE is now possible and does not increase dose to organs at risk. MRSI and CE abnormalities are now integrated for glioblastoma SIB-IMRT, concomitant with temozolomide, in an ongoing multi-institutional phase-III clinical trial. Our method of integrating MR spectroscopy maps into the TPS is robust and reliable; integration into neuronavigation systems with this method could also improve glioblastoma resection or guide biopsies.
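
    The normalization and threshold-based segmentation of the metabolic maps can be sketched as follows; the ratio threshold, array layout and function name are placeholder assumptions rather than the clinical protocol values:

        import numpy as np

        def segment_metabolic_map(metabolite_ratio, threshold=2.0):
            # metabolite_ratio: 3D array of, e.g., metabolite ratios resampled to the
            # planning CT grid. Voxels above `threshold` are flagged as the dose-painting
            # target; 2.0 is only a placeholder value.
            m = np.asarray(metabolite_ratio, dtype=float)
            # normalise to [0, 1] for display / colour-space conversion steps
            norm = (m - np.nanmin(m)) / (np.nanmax(m) - np.nanmin(m) + 1e-12)
            target_mask = m >= threshold
            return norm, target_mask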

  19. 3D ultrasound imaging : Fast and cost-effective morphometry of musculoskeletal tissue

    NARCIS (Netherlands)

    Weide, Guido; Van Der Zwaard, Stephan; Huijing, Peter A.; Jaspers, Richard T.; Harlaar, Jaap

    2017-01-01

    The developmental goal of 3D ultrasound imaging (3DUS) is to engineer a modality to perform 3D morphological ultrasound analysis of human muscles. 3DUS images are constructed from calibrated freehand 2D B-mode ultrasound images, which are positioned into a voxel array. Ultrasound (US) imaging allows

  20. 3D visualization of medical images for personalized learning of human anatomy

    NARCIS (Netherlands)

    Laurence Alpay; Jelle Scheurleer; Harmen Bijwaard

    2015-01-01

    to be held in Lisbon/Portugal on October 15-17, 2015 Medical imaging nowadays often yields high definition 3D images (from CT, PET, MRI, etc.). Usually these images need to be evaluated on 2D monitors. In the transition from 3D to 2D the image becomes more difficult to interpret as a whole. To aid

  1. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    Energy Technology Data Exchange (ETDEWEB)

    Wong, S.T.C. [Univ. of California, San Francisco, CA (United States)

    1997-02-01

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now stemming into surgical practice. In this presentation, we discuss only computer-display-based approaches of volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  2. Post-processing methods of rendering and visualizing 3-D reconstructed tomographic images

    International Nuclear Information System (INIS)

    Wong, S.T.C.

    1997-01-01

    The purpose of this presentation is to discuss the computer processing techniques of tomographic images, after they have been generated by imaging scanners, for volume visualization. Volume visualization is concerned with the representation, manipulation, and rendering of volumetric data. Since the first digital images were produced from computed tomography (CT) scanners in the mid 1970s, applications of visualization in medicine have expanded dramatically. Today, three-dimensional (3D) medical visualization has expanded from using CT data, the first inherently digital source of 3D medical data, to using data from various medical imaging modalities, including magnetic resonance scanners, positron emission scanners, digital ultrasound, electronic and confocal microscopy, and other medical imaging modalities. We have advanced from rendering anatomy to aid diagnosis and visualize complex anatomic structures to planning and assisting surgery and radiation treatment. New, more accurate and cost-effective procedures for clinical services and biomedical research have become possible by integrating computer graphics technology with medical images. This trend is particularly noticeable in the current market-driven health care environment. For example, interventional imaging, image-guided surgery, and stereotactic and visualization techniques are now stemming into surgical practice. In this presentation, we discuss only computer-display-based approaches of volumetric medical visualization. That is, we assume that the display device available is two-dimensional (2D) in nature and all analysis of multidimensional image data is to be carried out via the 2D screen of the device. There are technologies such as holography and virtual reality that do provide a "true 3D screen". To confine the scope, this presentation will not discuss such approaches.

  3. 3D Imaging Technology’s Narrative Appropriation in Cinema

    NARCIS (Netherlands)

    Kiss, Miklós; van den Oever, Annie; Fossati, Giovanna

    2016-01-01

    This chapter traces the cinematic history of stereoscopy by focusing on the contemporary dispute about the values of 3D technology, which are seen as either mere visual attraction or as a technique that perfects the cinematic illusion through increasing perceptual immersion. By taking a neutral

  4. Technology Development for 3-D Wide Swath Imaging Supporting ACE

    Science.gov (United States)

    Racette, Paul; Heymsfield, Gerry; Li, Lihua; Mclinden, Matthew; Park, Richard; Cooley, Michael; Stenger, Pete; Hand, Thomas

    2014-01-01

    The National Academy of Sciences Decadal Survey (DS) Aerosol-Cloud-Ecosystems Mission (ACE) aims to advance our ability to observe and predict changes to the Earth's hydrological cycle and energy balance in response to climate forcing, especially those changes associated with the effects of aerosol on clouds and precipitation. ACE is focused on obtaining measurements to reduce the uncertainties in current climate models arising from the lack in understanding of aerosol-cloud interactions. As part of the mission instrument suite, a dual-frequency radar comprised of a fixed-beam 94 gigahertz (W-band) radar and a wide-swath 35 gigahertz (Ka-band) imaging radar has been recommended by the ACE Science Working Group. In our 2010 Instrument Incubator Program project, we've developed a radar architecture that addresses the challenge associated with achieving the measurement objectives through an innovative, shared aperture antenna that allows dual-frequency radar operation while achieving wide-swath (100 kilometers) imaging at Ka-band. The antenna system incorporates two key technologies: a) a novel dual-band reflector/reflectarray and b) a Ka-band Active Electronically Scanned Array (AESA) feed module. The dual-band antenna comprises a primary cylindrical reflector/reflectarray surface illuminated by a point-focus W-band feed (compatible with a quasi-optical beam waveguide feed, such as that employed on CloudSat); the Ka-band AESA line feed provides wide-swath across-track scanning. The benefits of this shared-aperture approach include significant reductions in ACE satellite payload size, weight, and cost, as compared to a two-aperture approach. Four objectives were addressed in our project. The first entailed developing the tools for the analysis and design of reflectarray antennas, assessment of candidate reflectarray elements, and validation using test coupons. The second objective was to develop a full-scale aperture design utilizing the reflectarray surface and to

  5. Aerial 3D display by use of a 3D-shaped screen with aerial imaging by retro-reflection (AIRR)

    Science.gov (United States)

    Kurokawa, Nao; Ito, Shusei; Yamamoto, Hirotsugu

    2017-06-01

    The purpose of this paper is to realize an aerial 3D display. We design an optical system that employs a projector below a retro-reflector and a 3D-shaped screen. A floating 3D image is formed with aerial imaging by retro-reflection (AIRR). Our proposed system is composed of a 3D-shaped screen, a projector, a quarter-wave retarder, a retro-reflector, and a reflective polarizer. Because AIRR forms aerial images that are plane-symmetric to the light sources with respect to the reflective polarizer, the shape of the 3D screen is inverted from the desired aerial 3D image. In order to expand the viewing angle, the 3D-shaped screen is surrounded by a retro-reflector. In order to separate the aerial image from light reflected on the retro-reflector surface, the retro-reflector is tilted by 30 degrees. A projector is located below the retro-reflector at the same height as the 3D-shaped screen. The optical axis of the projector is orthogonal to the 3D-shaped screen. Scattered light on the 3D-shaped screen forms the aerial 3D image. In order to demonstrate the proposed optical design, a corner-cube-shaped screen is used for the 3D-shaped screen. Thus, the aerial 3D image is a cube that is floating above the reflective polarizer. For example, an aerial green cube is formed by projecting a calculated image on the 3D-shaped screen. The green cube image is digitally inverted in depth by our developed software. Thus, we have succeeded in forming an aerial 3D image with our designed optical system.

  6. Rainbow particle imaging velocimetry for dense 3D fluid velocity imaging

    KAUST Repository

    Xiong, Jinhui

    2017-07-21

    Despite significant recent progress, dense, time-resolved imaging of complex, non-stationary 3D flow velocities remains an elusive goal. In this work we tackle this problem by extending an established 2D method, Particle Imaging Velocimetry, to three dimensions by encoding depth into color. The encoding is achieved by illuminating the flow volume with a continuum of light planes (a

  7. 3D fingerprint imaging system based on full-field fringe projection profilometry

    Science.gov (United States)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on obtained 2D features of the fingerprint. However, the fingerprint is a 3D biological characteristic. The mapping from 3D to 2D loses 1D information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. From another viewpoint, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, the hardware design of the 3D imaging system, 3D calibration of the system, and software development. Experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
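
    The phase analysis behind fringe projection profilometry can be illustrated with the standard N-step phase-shifting estimator below; phase unwrapping and the optimum three-fringe-number selection used in the paper are separate steps not shown here, and the function name is an assumption:

        import numpy as np

        def wrapped_phase(images):
            # images: array of N fringe images I_k = A + B*cos(phi + 2*pi*k/N).
            # The N-step estimator is phi = atan2(-sum_k I_k sin(2*pi*k/N),
            #                                      sum_k I_k cos(2*pi*k/N)).
            I = np.asarray(images, dtype=float)
            N = I.shape[0]
            k = np.arange(N).reshape(-1, *([1] * (I.ndim - 1)))
            s = np.sum(I * np.sin(2 * np.pi * k / N), axis=0)
            c = np.sum(I * np.cos(2 * np.pi * k / N), axis=0)
            return np.arctan2(-s, c)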

  8. Integrating Data Clustering and Visualization for the Analysis of 3D Gene Expression Data

    Energy Technology Data Exchange (ETDEWEB)

    Data Analysis and Visualization (IDAV) and the Department of Computer Science, University of California, Davis, One Shields Avenue, Davis CA 95616, USA; International Research Training Group "Visualization of Large and Unstructured Data Sets", University of Kaiserslautern, Germany; Computational Research Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720, USA; Genomics Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley CA 94720, USA; Life Sciences Division, Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley CA 94720, USA; Computer Science Division, University of California, Berkeley, CA, USA; Computer Science Department, University of California, Irvine, CA, USA; All authors are with the Berkeley Drosophila Transcription Network Project, Lawrence Berkeley National Laboratory; Rubel, Oliver; Weber, Gunther H.; Huang, Min-Yu; Bethel, E. Wes; Biggin, Mark D.; Fowlkes, Charless C.; Hendriks, Cris L. Luengo; Keranen, Soile V. E.; Eisen, Michael B.; Knowles, David W.; Malik, Jitendra; Hagen, Hans; Hamann, Bernd

    2008-05-12

    The recent development of methods for extracting precise measurements of spatial gene expression patterns from three-dimensional (3D) image data opens the way for new analyses of the complex gene regulatory networks controlling animal development. We present an integrated visualization and analysis framework that supports user-guided data clustering to aid exploration of these new complex datasets. The interplay of data visualization and clustering-based data classification leads to improved visualization and enables a more detailed analysis than previously possible. We discuss (i) integration of data clustering and visualization into one framework; (ii) application of data clustering to 3D gene expression data; (iii) evaluation of the number of clusters k in the context of 3D gene expression clustering; and (iv) improvement of overall analysis quality via dedicated post-processing of clustering results based on visualization. We discuss the use of this framework to objectively define spatial pattern boundaries and temporal profiles of genes and to analyze how mRNA patterns are controlled by their regulatory transcription factors.
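
    The clustering side of such a framework can be illustrated with plain k-means over per-cell expression vectors; this is a generic sketch, since the paper's framework couples clustering with interactive visualization and cluster-number evaluation, which are not shown:

        import numpy as np

        def kmeans(X, k, n_iter=100, seed=0):
            # X: (n_cells, n_genes) expression matrix. Returns (labels, centroids).
            rng = np.random.default_rng(seed)
            centroids = X[rng.choice(X.shape[0], size=k, replace=False)]
            for _ in range(n_iter):
                d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
                labels = np.argmin(d, axis=1)
                new_centroids = np.array([
                    X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
                    for j in range(k)
                ])
                if np.allclose(new_centroids, centroids):
                    break
                centroids = new_centroids
            return labels, centroids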

  9. An analysis of 3D particle path integration algorithms

    International Nuclear Information System (INIS)

    Darmofal, D.L.; Haimes, R.

    1996-01-01

    Several techniques for the numerical integration of particle paths in steady and unsteady vector (velocity) fields are analyzed. Most of the analysis applies to unsteady vector fields, however, some results apply to steady vector field integration. Multistep, multistage, and some hybrid schemes are considered. It is shown that due to initialization errors, many unsteady particle path integration schemes are limited to third-order accuracy in time. Multistage schemes require at least three times more internal data storage than multistep schemes of equal order. However, for timesteps within the stability bounds, multistage schemes are generally more accurate. A linearized analysis shows that the stability of these integration algorithms are determined by the eigenvalues of the local velocity tensor. Thus, the accuracy and stability of the methods are interpreted with concepts typically used in critical point theory. This paper shows how integration schemes can lead to erroneous classification of critical points when the timestep is finite and fixed. For steady velocity fields, we demonstrate that timesteps outside of the relative stability region can lead to similar integration errors. From this analysis, guidelines for accurate timestep sizing are suggested for both steady and unsteady flows. In particular, using simulation data for the unsteady flow around a tapered cylinder, we show that accurate particle path integration requires timesteps which are at most on the order of the physical timescale of the flow
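
    For concreteness, a classical fourth-order Runge-Kutta particle path integrator over a time-dependent velocity field is sketched below; it is one example of the multistage schemes covered by the analysis, with the step size and interface chosen purely for illustration:

        import numpy as np

        def integrate_particle_path(velocity, x0, t0, t1, dt):
            # velocity: callable v(x, t) -> np.ndarray of shape (3,).
            # Returns positions sampled every dt along the particle path.
            x = np.asarray(x0, dtype=float)
            path = [x.copy()]
            for t in np.arange(t0, t1, dt):
                k1 = velocity(x, t)
                k2 = velocity(x + 0.5 * dt * k1, t + 0.5 * dt)
                k3 = velocity(x + 0.5 * dt * k2, t + 0.5 * dt)
                k4 = velocity(x + dt * k3, t + dt)
                x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
                path.append(x.copy())
            return np.array(path)

        # Example: solid-body rotation about the z-axis
        # swirl = lambda x, t: np.array([-x[1], x[0], 0.0])
        # path = integrate_particle_path(swirl, [1.0, 0.0, 0.0], 0.0, 6.28, 0.01)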

  10. 2D vs. 3D imaging in laparoscopic surgery-results of a prospective randomized trial.

    Science.gov (United States)

    Buia, Alexander; Stockhausen, Florian; Filmann, Natalie; Hanisch, Ernst

    2017-12-01

    3D imaging is an upcoming technology in laparoscopic surgery, and recent studies have shown that the modern 3D technique is superior in an experimental setting. However, the first randomized controlled clinical trial in this context dates back to 1998 and showed no significant difference between 2D and 3D visualization using the first-generation 3D technique, which is now more than 15 years old. Positive effects of 3D imaging on surgical performance measured in an experimental setting led us to initiate a randomized controlled pragmatic clinical trial to validate our findings in daily clinical routine. Standard laparoscopic operations (cholecystectomy, appendectomy) were preoperatively randomized to a 2D or 3D imaging system. We used a surgical comfort scale (Likert scale) and the Raw NASA Workload TLX for the subjective assessment of 2D and 3D imaging; the duration of surgery was also measured. The results for 3D imaging were statistically significantly better than for 2D imaging concerning the parameters "own felt safety" and "task efficiency"; the difficulty level of the procedures in the 2D and 3D groups did not differ. Overall, the Raw NASA Workload TLX showed no significant difference between the groups. 3D imaging could be a possible advantage in laparoscopic surgery. The results of our clinical trial show increased personal felt safety and efficiency of the surgeon using a 3D imaging system. Over all procedures, the findings assessed using Likert scales in terms of own felt safety and task efficiency were statistically significant in favour of 3D imaging. The individually perceived workload assessed with the Raw NASA TLX shows no difference. Although these findings are subjective impressions of the performing surgeons without a clear benefit for 3D technology in clinical outcome, we think that these results show that 3D laparoscopy can have a positive impact while performing laparoscopic procedures.

  11. 3D Inkjet Printed Helical Antenna with Integrated Lens

    KAUST Repository

    Farooqui, Muhammad Fahad; Shamim, Atif

    2016-01-01

    The gain of an antenna can be enhanced through the integration of a lens, although this technique has traditionally been restricted to planar antennas due to fabrication limitations of standard manufacturing processes. Here, through a unique

  12. CISUS: an integrated 3D ultrasound system for IGT using a modular tracking API

    Science.gov (United States)

    Boctor, Emad M.; Viswanathan, Anand; Pieper, Steve; Choti, Michael A.; Taylor, Russell H.; Kikinis, Ron; Fichtinger, Gabor

    2004-05-01

    Ultrasound has become popular in clinical/surgical applications, both as the primary image guidance modality and also in conjunction with other modalities like CT or MRI. Three-dimensional ultrasound (3DUS) systems have also demonstrated usefulness in image-guided therapy (IGT). At the same time, however, the current lack of open-source and open-architecture multi-modal medical visualization systems prevents 3DUS from fulfilling its potential. Several stand-alone 3DUS systems, like Stradx or In-Vivo, exist today. Although these systems have been found to be useful in real clinical settings, it is difficult to augment their functionality and integrate them into versatile IGT systems. To address these limitations, a robotic/freehand 3DUS open environment (CISUS) is being integrated into the 3D Slicer, an open-source research tool developed for medical image analysis and surgical planning. In addition, the system capitalizes on generic application programming interfaces (APIs) for tracking devices and robotic control. The resulting platform-independent open-source system may serve as a valuable tool to the image-guided surgery community. Other researchers could straightforwardly integrate the generic CISUS system along with other functionalities (i.e. dual view visualization, registration, real-time tracking, segmentation, etc) to rapidly create their medical/surgical applications. Our current driving clinical application is robotically assisted and freehand 3DUS-guided liver ablation, which is being fully integrated under the CISUS-3D Slicer. Initial functionality and pre-clinical feasibility are demonstrated on phantom and ex-vivo animal models.
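
    The role of a modular tracking API can be illustrated with a minimal abstract interface plus a dummy implementation; the class and method names below are illustrative assumptions and not the actual CISUS API:

        from abc import ABC, abstractmethod
        from typing import Tuple
        import numpy as np

        class Tracker(ABC):
            # Minimal tracking-device interface in the spirit of a modular tracking API.

            @abstractmethod
            def connect(self) -> None:
                """Open the connection to the hardware (serial, network, ...)."""

            @abstractmethod
            def get_pose(self, tool_id: str) -> Tuple[np.ndarray, np.ndarray]:
                """Return (R, t): 3x3 rotation and 3-vector translation of a tracked tool."""

        class DummyTracker(Tracker):
            # Stand-in implementation useful for testing visualization pipelines offline.

            def connect(self) -> None:
                pass

            def get_pose(self, tool_id: str):
                return np.eye(3), np.zeros(3)

        # A 3DUS compounding loop would poll tracker.get_pose() for the probe each time a
        # 2D B-mode frame arrives and insert the frame into the voxel volume accordingly.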

  13. CAD-based intelligent robot system integrated with 3D scanning for shoe roughing and cementing

    Directory of Open Access Journals (Sweden)

    Chiu Cheng-Chang

    2017-01-01

    Full Text Available Roughing and cementing are essential to the process of bonding shoe uppers and the corresponding soles; however, for shoes with complicated designs, such as sport shoes, roughing and cementing have greatly relied on manual operation. Recently, the shoe industry has been progressing to 3D design, so a 3D model of the shoe upper and sole is created before launching into mass production. Taking advantage of the 3D model, this study developed a plug-in program on the Rhino 3D CAD platform that performs the complicated roughing and cementing route planning, integrates real-time 3D scanning information to compensate the planned route, and then converts the route to the working trajectory of a robot arm to implement roughing and cementing. The proposed 3D CAD-based intelligent robot arm system integrated with 3D scanning for shoe roughing and cementing is realized and proved to be feasible.

  14. Pseudo-3D Imaging With The DICOM-8

    Science.gov (United States)

    Shalev, S.; Arenson, J.; Kettner, B.

    1985-09-01

    We have developed the DICOM-8 digital imaging computer for video image acquisition, processing and display. It is a low-cost mobile system based on a Z80 microcomputer which controls access to two 512 x 512 x 8-bit image planes through a real-time video arithmetic unit. Image presentation capabilities include orthographic images, isometric plots with hidden-line suppression, real-time mask subtraction, binocular red/green stereo, and volumetric imaging with both geometrical and density windows under interactive operator control. Examples are shown for multiplane series of CT images.

  15. Handheld real-time volumetric 3-D gamma-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Haefner, Andrew, E-mail: ahaefner@lbl.gov [Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Barnowski, Ross [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720 (United States); Luke, Paul; Amman, Mark [Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Vetter, Kai [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720 (United States); Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States)

    2017-06-11

    This paper presents the concept of real-time fusion of gamma-ray imaging and visual scene data for a hand-held mobile Compton imaging system in 3-D. The ability to obtain and integrate both gamma-ray and scene data from a mobile platform enables improved capabilities in the localization and mapping of radioactive materials. This not only enhances the ability to localize these materials, but also provides important contextual information about the scene which, once acquired, can be reviewed and further analyzed. To demonstrate these concepts, the high-efficiency multimode imager (HEMI) is used in a hand-portable implementation in combination with a Microsoft Kinect sensor. This sensor, in conjunction with open-source software, provides the ability to create a 3-D model of the scene and to track the position and orientation of HEMI in real-time. By combining the gamma-ray data and visual data, accurate 3-D maps of gamma-ray sources are produced in real-time. This approach is extended to map the location of radioactive materials within objects with unknown geometry.

  16. Estimating 3D tilt from local image cues in natural scenes

    OpenAIRE

    Burge, Johannes; McCann, Brian C.; Geisler, Wilson S.

    2016-01-01

    Estimating three-dimensional (3D) surface orientation (slant and tilt) is an important first step toward estimating 3D shape. Here, we examine how three local image cues from the same location (disparity gradient, luminance gradient, and dominant texture orientation) should be combined to estimate 3D tilt in natural scenes. We collected a database of natural stereoscopic images with precisely co-registered range images that provide the ground-truth distance at each pixel location. We then ana...

  17. Status and perspectives of pixel sensors based on 3D vertical integration

    Energy Technology Data Exchange (ETDEWEB)

    Re, Valerio [Università di Bergamo, Dipartimento di Ingegneria, Viale Marconi, 5, 24044 Dalmine (Italy); INFN, Sezione di Pavia, Via Bassi, 6, 27100 Pavia (Italy)

    2014-11-21

    This paper reviews the most recent developments of 3D integration in the field of silicon pixel sensors and readout integrated circuits. This technology may address the needs of future high energy physics and photon science experiments by increasing the electronic functional density in small pixel readout cells and by stacking various device layers based on different technologies, each optimized for a different function. Current efforts are aimed at improving the performance of both hybrid pixel detectors and of CMOS sensors. The status of these activities is discussed here, taking into account experimental results on 3D devices developed in the frame of the 3D-IC consortium. The paper also provides an overview of the ideas that are being currently devised for novel 3D vertically integrated pixel sensors. - Highlights: • 3D integration is a promising technology for pixel sensors in high energy physics. • Experimental results on two-layer 3D CMOS pixel sensors are presented. • The outcome of the first run from the 3D-IC consortium is discussed. • The AIDA network is studying via-last 3D integration of heterogeneous layers. • New ideas based on 3D vertically integrated pixels are being developed for HEP.

  18. Status and perspectives of pixel sensors based on 3D vertical integration

    International Nuclear Information System (INIS)

    Re, Valerio

    2014-01-01

    This paper reviews the most recent developments of 3D integration in the field of silicon pixel sensors and readout integrated circuits. This technology may address the needs of future high energy physics and photon science experiments by increasing the electronic functional density in small pixel readout cells and by stacking various device layers based on different technologies, each optimized for a different function. Current efforts are aimed at improving the performance of both hybrid pixel detectors and of CMOS sensors. The status of these activities is discussed here, taking into account experimental results on 3D devices developed in the frame of the 3D-IC consortium. The paper also provides an overview of the ideas that are being currently devised for novel 3D vertically integrated pixel sensors. - Highlights: • 3D integration is a promising technology for pixel sensors in high energy physics. • Experimental results on two-layer 3D CMOS pixel sensors are presented. • The outcome of the first run from the 3D-IC consortium is discussed. • The AIDA network is studying via-last 3D integration of heterogeneous layers. • New ideas based on 3D vertically integrated pixels are being developed for HEP

  19. 3D laser imaging for ODOT interstate network at true 1-mm resolution.

    Science.gov (United States)

    2014-12-01

    With the development of 3D laser imaging technology, the latest iteration of PaveVision3D Ultra can obtain true 1 mm resolution 3D data at full-lane coverage in all three directions at highway speeds up to 60 MPH. This project provides rapid survey ...

  20. Accuracy of volume measurement using 3D ultrasound and development of CT-3D US image fusion algorithm for prostate cancer radiotherapy

    International Nuclear Information System (INIS)

    Baek, Jihye; Huh, Jangyoung; Hyun An, So; Oh, Yoonjin; Kim, Myungsoo; Kim, DongYoung; Chung, Kwangzoo; Cho, Sungho; Lee, Rena

    2013-01-01

    Purpose: To evaluate the accuracy of volume measurement using three-dimensional ultrasound (3D US), and to verify the feasibility of replacing CT-MR fusion images with CT-3D US in radiotherapy treatment planning. Methods: Phantoms consisting of water, contrast agent, and agarose were manufactured. Their volumes were measured using 3D US, CT, and MR devices. CT-3D US and MR-3D US image fusion software was developed using the Insight Toolkit library in order to acquire three-dimensional fusion images. The quality of the image fusion was evaluated using a metric value and the fusion images. Results: Volume measurement using 3D US shows a 2.8 ± 1.5% error, compared with a 4.4 ± 3.0% error for CT and a 3.1 ± 2.0% error for MR. These results imply that volume measurement using the 3D US devices has an accuracy similar to that of CT and MR. Three-dimensional image fusion of CT-3D US and MR-3D US was successfully performed using phantom images. Moreover, MR-3D US image fusion was performed using human bladder images. Conclusions: 3D US could be used for the volume measurement of human bladders and prostates. CT-3D US image fusion could be used to monitor the target position in each fraction of external beam radiation therapy. Moreover, the feasibility of replacing CT-MR image fusion with CT-3D US in radiotherapy treatment planning was verified.

  1. 3D Integration of MEMS and IC: Design, technology and simulations

    OpenAIRE

    Schjølberg-Henriksen, Kari

    2009-01-01

    Topics covered: 3D integration opportunities and trends; e-CUBES tire pressure monitoring system (TPMS); package design including thermo-mechanical modeling; technology development; sensor packaging concept; gold stud bump bonding; device characterization and testing; summary and outlook.

  2. 3-D IMAGING, ANALYSIS AND MODELLING OF POROUS CEREAL PRODUCTS USING X-RAY MICROTOMOGRAPHY

    Directory of Open Access Journals (Sweden)

    Gerard Van Dalen

    2011-05-01

    Efficient design of multi-component food products containing dry and wet components, such as biscuits with a moist fruit filling, is of growing interest for the food industry. Technology is needed to prevent or reduce water migration from the moist filling to the dry porous cereal material. This can be done by using moisture barrier systems. Knowledge of the microstructure and its relation to water mobility is necessary to develop stable products. This paper describes a study that uses X-ray microtomography (μCT) for the characterisation and visualisation of the 3-D structure of crackers with different porosity, coated biscuit shells and soup inclusions. μCT was used for imaging the inner cellular structure of the cereal matrix or to analyse the integrity of moisture barriers applied on the cereal product. 3-D image analysis methods were developed to obtain quantitative information about the cellular matrix which can be used as input for simulation models of moisture migration. The developed 3-D image analysis method maps the open cellular structure onto a network (graph) representation in which the nodes correspond to the pores and the vertices to the pore-to-pore interconnections. The pores (nodes) have properties such as volume, surface area and location, whereas the vertices have properties such as direct (open) connection area and indirect (separated by a single lamella) area. To check the segmentation and network description, a model for pore-to-pore resistance was used. The obtained results demonstrate the potential of μCT and 3-D image analysis for extracting structural information which can be used in models for moisture penetration in a cellular bakery product.
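
    A minimal sketch of the pore-network (graph) representation described above, using hypothetical pore and interface values; the attribute names and the simple resistance weighting are illustrative assumptions, not the paper's exact definitions.

        import networkx as nx

        G = nx.Graph()
        # Nodes are pores with volume, surface area and centroid (hypothetical values;
        # in practice they come from the labelled 3-D muCT volume).
        G.add_node(1, volume_um3=1.8e6, surface_um2=9.5e4, centroid=(120, 84, 33))
        G.add_node(2, volume_um3=7.2e5, surface_um2=4.1e4, centroid=(131, 90, 35))
        G.add_node(3, volume_um3=2.5e5, surface_um2=2.0e4, centroid=(118, 99, 40))

        # Edges carry the direct (open) and indirect (lamella-separated) interface areas.
        G.add_edge(1, 2, open_area_um2=3.0e3, lamella_area_um2=0.0)
        G.add_edge(2, 3, open_area_um2=0.0, lamella_area_um2=1.2e3)

        # A simple resistance-style weight, here inversely proportional to the open area,
        # can then be used to rank pore-to-pore moisture transport paths.
        for u, v, d in G.edges(data=True):
            d["resistance"] = 1.0 / max(d["open_area_um2"], 1e-9)

        print(nx.shortest_path(G, 1, 3, weight="resistance"))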

  3. [Accuracy of morphological simulation for orthognathic surgery. Assessment of a 3D image fusion software].

    Science.gov (United States)

    Terzic, A; Schouman, T; Scolozzi, P

    2013-08-06

    CT/CBCT data allow for 3D reconstruction of the skeletal and untextured soft tissue volume. 3D stereophotogrammetry technology has strongly improved the quality of facial soft tissue surface texture. The combination of these two technologies allows for an accurate and complete reconstruction. The 3D virtual head may be used for orthognathic surgical planning, virtual surgery, and morphological simulation obtained with software dedicated to the fusion of 3D photogrammetric and radiological images. The imaging material includes a multi-slice CT scanner or broad-field CBCT scanner and a 3D photogrammetric camera. The operative image processing protocol includes the following steps: 1) pre- and postoperative CT/CBCT scan and 3D photogrammetric image acquisition; 2) 3D image segmentation and fusion of the untextured CT/CBCT skin with the preoperative textured facial soft tissue surface of the 3D photogrammetric scan; 3) image fusion of the pre- and postoperative CT/CBCT data sets, virtual osteotomies, and 3D photogrammetric soft tissue virtual simulation; 4) fusion of the virtually simulated 3D photogrammetric and real postoperative images, and assessment of accuracy using a color-coded scale to measure the differences between the two surfaces. Copyright © 2013. Published by Elsevier Masson SAS.

  4. N=2 3d-matrix integral with Myers term

    International Nuclear Information System (INIS)

    Tomino, Dan

    2004-01-01

    An exact matrix integral is evaluated for a 2x2 3-dimensional matrix model with a Myers term. We derive weak and strong coupling expansions of the effective action. We also calculate the expectation values of the quadratic and cubic operators. Implications for non-commutative gauge theory on the fuzzy sphere are discussed. (author)
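
    For orientation, a schematic bosonic action of this type is sketched below in LaTeX; the normalization, the coupling in front of the Myers (cubic) term, and the omission of any mass or fermionic terms are assumptions for illustration, not taken from the paper.

        Z = \int \prod_{i=1}^{3} dX_i \,
            \exp\!\Big[-N\,\mathrm{Tr}\Big(-\tfrac{1}{4}\,[X_i,X_j][X_i,X_j]
            + \tfrac{2i\alpha}{3}\,\epsilon_{ijk}\,X_i X_j X_k\Big)\Big],

    where the X_i are 2x2 Hermitian matrices and the cubic (Myers) term is what favours fuzzy-sphere configurations.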

  5. Lagrangian structures, integrability and chaos for 3D dynamical equations

    International Nuclear Information System (INIS)

    Bustamante, Miguel D; Hojman, Sergio A

    2003-01-01

    In this paper, we consider the general setting for constructing action principles for three-dimensional first-order autonomous equations. We present results for some integrable and non-integrable cases of the Lotka-Volterra equation, and show Lagrangian descriptions which are valid for systems satisfying the Shil'nikov criteria on the existence of strange attractors, though chaotic behaviour has not been verified up to now. The Euler-Lagrange equations we obtain for these systems usually present 'time reparametrization' invariance, though other kinds of invariance may be found according to the kernel of the associated symplectic 2-form. The formulation of a Hamiltonian structure (Poisson brackets and Hamiltonians) for these systems from the Lagrangian viewpoint leads to a method for finding new constants of the motion starting from known ones. This method is applied to some systems found in the literature that are known to possess one constant of the motion, in order to find the second and thus show their integrability. In particular, we show that the so-called ABC system is completely integrable if it possesses one constant of the motion

  6. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system.

    Science.gov (United States)

    Baumann, Michael; Mozer, Pierre; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer-guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Because the patient is not immobilized, the prostate is mobile, and probe movements are constrained only by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space and thus renders the optimization efficient. The method was tested on 237 prostate volumes acquired from 14 different patients for 3D to 3D and 3D to orthogonal 2D slice registration. The 3D-3D version of the algorithm converged correctly in 96.7% of all cases in 6.5 s with an accuracy of 1.41 mm (r.m.s.) and 3.84 mm (max). The 3D-to-slices method yielded a success rate of 88.9% in 2.3 s with an accuracy of 1.37 mm (r.m.s.) and 4.3 mm (max).
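
    A minimal coarse-to-fine sketch of the global-plus-local strategy described above, assuming normalized cross-correlation as the similarity measure and a deliberately reduced three-parameter rigid model standing in for the probe movement model; the paper's attribute-vector similarity and full registration pipeline are not reproduced here.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.ndimage import affine_transform

        def neg_ncc(a, b):
            # Negative normalized cross-correlation (to be minimized).
            a = (a - a.mean()) / (a.std() + 1e-9)
            b = (b - b.mean()) / (b.std() + 1e-9)
            return -float(np.mean(a * b))

        def cost(params, fixed, moving):
            # Reduced rigid model: one rotation about the probe axis (array axis 0)
            # plus two in-plane translations, loosely mimicking a probe movement model.
            angle, ty, tz = params
            c, s = np.cos(angle), np.sin(angle)
            R = np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
            warped = affine_transform(moving, R, offset=[0.0, ty, tz], order=1)
            return neg_ncc(fixed, warped)

        def register(fixed, moving):
            # Stage 1: coarse global search on the reduced parameter set.
            grid = [(a, ty, tz)
                    for a in np.linspace(-0.5, 0.5, 7)
                    for ty in (-10, -5, 0, 5, 10)
                    for tz in (-10, -5, 0, 5, 10)]
            best = min(grid, key=lambda p: cost(p, fixed, moving))
            # Stage 2: local refinement around the best coarse candidate.
            return minimize(cost, best, args=(fixed, moving), method="Powell").x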

  7. IMAGE-BASED VIRTUAL TOURS AND 3D MODELING OF PAST AND CURRENT AGES FOR THE ENHANCEMENT OF ARCHAEOLOGICAL PARKS: THE VISUALVERSILIA 3D PROJECT

    Directory of Open Access Journals (Sweden)

    C. Castagnetti

    2017-05-01

    The research project VisualVersilia 3D aims at offering a new way to promote the territory and its heritage by matching the traditional reading of documents with the potential of modern communication technologies for cultural tourism. Recently, research on the use of new technologies applied to cultural heritage has turned its attention mainly to technologies for reconstructing and narrating the complexity of the territory and its heritage, including 3D scanning, 3D printing and augmented reality. Some museums and archaeological sites already exploit the potential of digital tools to preserve and spread their heritage, but interactive services involving tourists in an immersive and more modern experience are still rare. The innovation of the project consists in the development of a methodology for documenting current and past historical ages and integrating their 3D visualizations with rendering capable of returning an immersive virtual reality for a successful enhancement of the heritage. The project implements the methodology in the archaeological complex of Massaciuccoli, one of the best preserved Roman sites of the Versilia area (Tuscany, Italy). The activities of the project briefly consist in developing: 1. the virtual tour of the site in its current configuration on the basis of spherical images, enhanced by texts, graphics and audio guides in order to enable both an immersive and a remote tourist experience; 2. the 3D reconstruction of the evidence and buildings in their current condition for documentation and conservation purposes, on the basis of a complete metric survey carried out through laser scanning; 3. 3D virtual reconstructions through the main historical periods, on the basis of historical investigation and the analysis of the data acquired.

  8. Image-Based Virtual Tours and 3d Modeling of Past and Current Ages for the Enhancement of Archaeological Parks: the Visualversilia 3d Project

    Science.gov (United States)

    Castagnetti, C.; Giannini, M.; Rivola, R.

    2017-05-01

    The research project VisualVersilia 3D aims at offering a new way to promote the territory and its heritage by matching the traditional reading of documents with the potential of modern communication technologies for cultural tourism. Recently, research on the use of new technologies applied to cultural heritage has turned its attention mainly to technologies for reconstructing and narrating the complexity of the territory and its heritage, including 3D scanning, 3D printing and augmented reality. Some museums and archaeological sites already exploit the potential of digital tools to preserve and spread their heritage, but interactive services involving tourists in an immersive and more modern experience are still rare. The innovation of the project consists in the development of a methodology for documenting current and past historical ages and integrating their 3D visualizations with rendering capable of returning an immersive virtual reality for a successful enhancement of the heritage. The project implements the methodology in the archaeological complex of Massaciuccoli, one of the best preserved Roman sites of the Versilia area (Tuscany, Italy). The activities of the project briefly consist in developing: 1. the virtual tour of the site in its current configuration on the basis of spherical images, enhanced by texts, graphics and audio guides in order to enable both an immersive and a remote tourist experience; 2. the 3D reconstruction of the evidence and buildings in their current condition for documentation and conservation purposes, on the basis of a complete metric survey carried out through laser scanning; 3. 3D virtual reconstructions through the main historical periods, on the basis of historical investigation and the analysis of the data acquired.

  9. New series of 3D lattice integrable models

    International Nuclear Information System (INIS)

    Mangazeev, V.V.; Sergeev, S.M.; Stroganov, Yu.G.

    1993-01-01

    In this paper we present a new series of 3-dimensional integrable lattice models with N colors. The weight functions of the models satisfy modified tetrahedron equations with N states and give a commuting family of two-layer transfer-matrices. The dependence on the spectral parameters corresponds to the static limit of the modified tetrahedron equations and weights are parameterized in terms of elliptic functions. The models contain two free parameters: elliptic modulus and additional parameter η. 12 refs

  10. Automatic extraction of soft tissues from 3D MRI head images using model driven analysis

    International Nuclear Information System (INIS)

    Jiang, Hao; Yamamoto, Shinji; Imao, Masanao.

    1995-01-01

    This paper presents an automatic extraction system (called TOPS-3D: Top-Down Parallel Pattern Recognition System for 3D Images) for soft tissues from 3D MRI head images using a model-driven analysis algorithm. Following the construction of the system TOPS that we developed previously, two concepts have been considered in the design of TOPS-3D. One is a hierarchical reasoning structure that uses model information at the higher level, and the other is a parallel image processing structure used to extract plural candidate regions for a target entity. The new points of TOPS-3D are as follows. (1) TOPS-3D is a three-dimensional image analysis system including 3D model construction and 3D image processing techniques. (2) A technique is proposed to increase the connectivity between knowledge processing at the higher level and image processing at the lower level. The technique is realized by applying the opening operation of mathematical morphology, in which a structural model function defined at the higher level by knowledge representation is used directly as the filter function of the opening operation in the lower-level image processing. The system TOPS-3D applied to 3D MRI head images consists of three levels: the first and second levels form the reasoning part, and the third level is the image processing part. In the experiments, we applied 5 samples of 3D MRI head images of size 128 x 128 x 128 pixels to TOPS-3D to extract the regions of soft tissues such as the cerebrum, cerebellum and brain stem. The experimental results show that the system is robust to variations in the input data owing to the use of model information, and that the position and shape of the soft tissues are extracted in agreement with the anatomical structure. (author)
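
    A minimal sketch of the higher-level-model-to-lower-level-filter idea described in point (2) above, assuming a hypothetical ellipsoidal structuring element standing in for an anatomical model; the actual model functions used by TOPS-3D are not reproduced here.

        import numpy as np
        from scipy import ndimage

        def model_driven_opening(volume, radii=(3, 5, 5)):
            # Build an ellipsoidal structuring element whose shape is dictated by the
            # higher-level (knowledge) model; here the radii are illustrative only.
            zz, yy, xx = np.ogrid[-radii[0]:radii[0] + 1,
                                  -radii[1]:radii[1] + 1,
                                  -radii[2]:radii[2] + 1]
            ellipsoid = ((zz / radii[0]) ** 2 + (yy / radii[1]) ** 2 +
                         (xx / radii[2]) ** 2) <= 1.0
            # Grey-scale opening suppresses bright structures thinner than the model
            # element, leaving candidate regions compatible with the model.
            return ndimage.grey_opening(volume, footprint=ellipsoid)

        candidates = model_driven_opening(np.random.rand(64, 64, 64))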

  11. Infrared imaging of the polymer 3D-printing process

    Science.gov (United States)

    Dinwiddie, Ralph B.; Kunc, Vlastimil; Lindal, John M.; Post, Brian; Smith, Rachel J.; Love, Lonnie; Duty, Chad E.

    2014-05-01

    Both mid-wave and long-wave IR cameras are used to measure various temperature profiles in thermoplastic parts as they are printed. Two significantly different 3D-printers are used in this study. The first is a small-scale, commercially available Solidoodle 3 printer, which prints parts with layer thicknesses on the order of 125 μm. The second printer is the "Big Area Additive Manufacturing" (BAAM) 3D-printer developed at Oak Ridge National Laboratory. The BAAM prints parts with a layer thickness of 4.06 mm. Of particular interest is the temperature of the previously deposited layer as the new hot layer is about to be extruded onto it. The two layers are expected to have a stronger bond if the temperature of the substrate layer is above the glass transition temperature. This paper describes the measurement technique and results for a study of temperature decay and substrate layer temperature for ABS thermoplastic with and without the addition of chopped carbon fibers.

  12. Imaging system for creating 3D block-face cryo-images of whole mice

    Science.gov (United States)

    Roy, Debashish; Breen, Michael; Salvado, Olivier; Heinzel, Meredith; McKinley, Eliot; Wilson, David

    2006-03-01

    We developed a cryomicrotome/imaging system that provides high resolution, high sensitivity block-face images of whole mice or excised organs, and applied it to a variety of biological applications. With this cryo-imaging system, we sectioned cryo-preserved tissues at 2-40 μm thickness and acquired high resolution brightfield and fluorescence images with microscopic in-plane resolution (as good as 1.2 μm). Brightfield images of normal and pathological anatomy show exquisite detail, especially in the abdominal cavity. Multi-planar reformatting and 3D renderings allow one to interrogate 3D structures. In this report, we present brightfield images of mouse anatomy, as well as 3D renderings of organs. For the BPK mouse model of polycystic kidney disease, we compared brightfield cryo-images and kidney volumes to MRI. The color images provided greater contrast and resolution of cysts as compared to in vivo MRI. We note that color cryo-images are closer to what a researcher sees in dissection, making it easier to interpret the image data. The combination of field of view, depth of field, ultra-high resolution and color/fluorescence contrast enables cryo-image volumes to provide details that cannot be found through in vivo imaging or other ex vivo optical imaging approaches. We believe that this novel imaging system will have applications that include identification of mouse phenotypes, characterization of diseases like blood vessel disease, kidney disease, and cancer, assessment of drug and gene therapy delivery and efficacy, and validation of other imaging modalities.

  13. Optimization of PET image quality by means of 3D data acquisition and iterative image reconstruction

    International Nuclear Information System (INIS)

    Doll, J.; Zaers, J.; Trojan, H.; Bellemann, M.E.; Adam, L.E.; Haberkorn, U.; Brix, G.

    1998-01-01

    The experiments were performed on the latest-generation whole-body PET system ECAT EXACT HR+. For 2D data acquisition, a collimator of thin tungsten septa was positioned in the field-of-view. Prior to image reconstruction, the measured 3D data were sorted into 2D sinograms by using the Fourier rebinning (FORE) algorithm developed by M. Defrise. The standard filtered backprojection (FBP) method and an optimized ML/EM algorithm with overrelaxation for accelerated convergence were employed for image reconstruction. The spatial resolution of both methods as well as the convergence and noise properties of the ML/EM algorithm were studied in phantom measurements. Furthermore, patient data were acquired in both the 2D and the 3D mode and reconstructed with both techniques. At the same spatial resolution, the ML/EM-reconstructed images showed fewer and less prominent artefacts than the FBP-reconstructed images. This improvement in detail conspicuity was achieved for data acquired in the 2D mode as well as in the 3D mode. The best image quality was obtained by iterative 2D reconstruction of 3D data sets which were previously rebinned into 2D sinograms with the help of the FORE algorithm. The phantom measurements revealed that 50 iteration steps with the optimized ML/EM algorithm were sufficient to keep the relative quantitation error below 5%. (orig./MG)
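
    For readers unfamiliar with the reconstruction chain, the sketch below shows a plain ML-EM update for one rebinned 2-D sinogram; it assumes a generic system matrix A and omits the over-relaxation acceleration described above, so it is an illustrative stand-in rather than the authors' implementation.

        import numpy as np

        def mlem(sino, A, n_iter=50):
            # sino : measured 2-D sinogram, flattened to a vector of length n_bins
            # A    : system matrix of shape (n_bins, n_pixels), e.g. a sparse projector
            x = np.ones(A.shape[1])                 # uniform initial image
            sens = A.T @ np.ones(A.shape[0])        # sensitivity (back-projection of ones)
            for _ in range(n_iter):
                fwd = A @ x                         # forward projection of current estimate
                ratio = sino / np.maximum(fwd, 1e-9)
                x *= (A.T @ ratio) / np.maximum(sens, 1e-9)
            return x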

  14. A small animal image guided irradiation system study using 3D dosimeters

    International Nuclear Information System (INIS)

    Qian, Xin; Wuu, Cheng-Shie; Admovics, John

    2015-01-01

    In a high resolution image-guided small animal irradiation platform, a cone beam computed tomography (CBCT) scanner is integrated with an irradiation unit for precise targeting. Precise quality assurance is essential for both the imaging and the irradiation components. Conventional commissioning techniques with films face major challenges due to alignment uncertainty and labour-intensive film preparation and scanning. In addition, due to the novel design of this platform, the mouse stage rotation for CBCT imaging is perpendicular to the gantry rotation for irradiation. Because these two rotations are associated with different mechanical systems, a discrepancy between the rotation isocenters exists. In order to deliver x-rays precisely, it is essential to verify the coincidence of the imaging and irradiation isocenters. A 3D PRESAGE dosimeter can provide an excellent tool for checking dosimetry and verifying the coincidence of the irradiation and imaging coordinates in one system. Dosimetric measurements were performed to obtain beam profiles and percent depth dose (PDD). The isocentricity and coincidence of the mouse stage and gantry rotations were evaluated with starshots acquired using PRESAGE dosimeters. A single PRESAGE dosimeter can provide 3-D information on both geometric and dosimetric uncertainty, which is crucial for translational studies

  15. High speed display algorithm for 3D medical images using Multi Layer Range Image

    International Nuclear Information System (INIS)

    Ban, Hideyuki; Suzuki, Ryuuichi

    1993-01-01

    We propose a high-speed algorithm that displays 3D voxel images obtained from medical imaging systems such as MRI. The algorithm converts voxel image data into 6 Multi-Layer Range Image (MLRI) data sets, an augmentation of the range image data. To avoid calculations for invisible voxels, the algorithm selects at most 3 of the 6 MLRI data sets in accordance with the view direction. The proposed algorithm displays 256 x 256 x 256 voxel data within 0.6 seconds on a 22 MIPS workstation without special hardware such as a graphics engine. Real-time display will be possible on a 100 MIPS class workstation with our algorithm. (author)
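
    A minimal sketch of the view-dependent selection step described above, assuming the six MLRI layers correspond to the +/- directions of the three axes; the layer generation and rendering stages are not reproduced.

        import numpy as np

        # The six MLRI layers correspond to the +/- directions of the three axes.
        FACE_NORMALS = {
            "+x": np.array([1, 0, 0]), "-x": np.array([-1, 0, 0]),
            "+y": np.array([0, 1, 0]), "-y": np.array([0, -1, 0]),
            "+z": np.array([0, 0, 1]), "-z": np.array([0, 0, -1]),
        }

        def visible_layers(view_dir):
            # view_dir points from the object towards the viewer. A layer is used only
            # if the dot product with its outward normal is positive, giving at most 3 of 6.
            v = np.asarray(view_dir, dtype=float)
            v /= np.linalg.norm(v)
            return [name for name, n in FACE_NORMALS.items() if np.dot(n, v) > 0]

        print(visible_layers([1.0, 0.5, -0.2]))   # e.g. ['+x', '+y', '-z']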

  16. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    Science.gov (United States)

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in the Stereo Lithography file format, and the 3dMD model was exported in the Virtual Reality Modeling Language file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors are more than 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allows the 3D virtual head to be an accurate, realistic, and widespread tool, and is of great benefit to virtual face modeling.

  17. D3D augmented reality imaging system: proof of concept in mammography.

    Science.gov (United States)

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called "depth 3-dimensional (D3D) augmented reality". A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice.

  18. Phase aided 3D imaging and modeling: dedicated systems and case studies

    Science.gov (United States)

    Yin, Yongkai; He, Dong; Liu, Zeyi; Liu, Xiaoli; Peng, Xiang

    2014-05-01

    Dedicated prototype systems for 3D imaging and modeling (3DIM) are presented. The 3D imaging systems are based on the principle of phase-aided active stereo, which have been developed in our laboratory over the past few years. The reported 3D imaging prototypes range from single 3D sensor to a kind of optical measurement network composed of multiple node 3D-sensors. To enable these 3D imaging systems, we briefly discuss the corresponding calibration techniques for both single sensor and multi-sensor optical measurement network, allowing good performance of the 3DIM prototype systems in terms of measurement accuracy and repeatability. Furthermore, two case studies including the generation of high quality color model of movable cultural heritage and photo booth from body scanning are presented to demonstrate our approach.
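
    As a pointer to the kind of computation behind phase-aided active stereo, the sketch below recovers the wrapped phase from N equally shifted fringe images; it is a generic N-step phase-shifting estimator and does not reproduce the calibration, unwrapping, or multi-sensor network stages of the prototypes described above.

        import numpy as np

        def wrapped_phase(images):
            # images: N fringe patterns I_n = A + B*cos(phi + 2*pi*n/N), shape (N, H, W).
            I = np.asarray(images, dtype=float)
            n = np.arange(I.shape[0])
            deltas = 2.0 * np.pi * n / I.shape[0]
            num = np.tensordot(np.sin(deltas), I, axes=1)   # sum_n I_n sin(delta_n)
            den = np.tensordot(np.cos(deltas), I, axes=1)   # sum_n I_n cos(delta_n)
            return np.arctan2(-num, den)                    # wrapped phase in (-pi, pi]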

  19. New solutions and applications of 3D computer tomography image processing

    Science.gov (United States)

    Effenberger, Ira; Kroll, Julia; Verl, Alexander

    2008-02-01

    As industry today aims at fast, high-quality product development and manufacturing processes, modern and efficient quality inspection is essential. Compared to conventional measurement technologies, industrial computer tomography (CT) is a non-destructive technology for 3D image data acquisition which helps to overcome their disadvantages by offering the possibility to scan complex parts with all outer and inner geometric features. In this paper new and optimized methods for 3D image processing, including innovative ways of surface reconstruction and automatic geometric feature detection of complex components, are presented, in particular our work on smart online data processing and data handling methods with integrated intelligent online mesh reduction. This guarantees the processing of huge, high-resolution data sets. In addition, new approaches for surface reconstruction and segmentation based on statistical methods are demonstrated. On the extracted 3D point cloud or surface triangulation, automated and precise algorithms for geometric inspection are deployed. All algorithms are applied to different real data sets generated by computer tomography in order to demonstrate the capabilities of the new tools. Since CT is an emerging technology for non-destructive testing and inspection, more and more industrial application fields will use and profit from this technology.

  20. Highly functional tunnelling devices integrated in 3D

    DEFF Research Database (Denmark)

    Wernersson, Lars-Erik; Lind, Erik; Lindström, Peter

    2003-01-01

    We present a new technology for integrating tunnelling devices in three dimensions. These devices are fabricated by the combination of the growth of semiconductor heterostructures with the controlled introduction of metallic elements into an epitaxial layer by an overgrowth technique. First, we use ... a new type of tunnelling transistor, namely a resonant-tunnelling permeable base transistor. A simple model based on a piece-wise linear approximation is used in Cadence to describe the current-voltage characteristics of the transistor. This model is further introduced into a small signal equivalent ... simultaneously on both tunnelling structures, and the obtained characteristics are the result of the interplay between the two tunnelling structures and the gate. An equivalent circuit model is developed and we show how this interaction influences the current-voltage characteristics. The gate may be used ...

  1. NEW INSTRUMENTS FOR SURVEY: ON LINE SOFTWARES FOR 3D RECONTRUCTION FROM IMAGES

    Directory of Open Access Journals (Sweden)

    E. Fratus de Balestrini

    2012-09-01

    3D scanning technologies have undergone significant development and have been widely used in the documentation of cultural, architectural and archaeological heritage. Modern methods of three-dimensional acquisition and modeling allow an object to be represented through a digital model that combines the visual potential of images (normally used for documentation) with the accuracy of the survey, becoming at the same time a support for visualization and for the metric evaluation of any artefact of historical or artistic interest, and opening up new possibilities for cultural heritage fruition, cataloguing and study. Despite this development, because of the small catchment area and the sophisticated technology of 3D laser scanners, the cost of these instruments is very high and beyond the reach of most operators in the field of cultural heritage. This is the reason why low-cost, or even free, technologies have appeared, allowing anyone to approach the issues of acquisition and 3D modeling and providing tools to create three-dimensional models in a simple and economical way. The research conducted by the Laboratory of Photogrammetry of the University IUAV of Venice, of which we present here some results, is intended to figure out whether, with Arc3D, it is possible to obtain results that are somehow comparable, in terms of overall quality, to those of the laser scanner, and/or whether it is possible to integrate them. A series of tests was carried out on certain types of objects: models made with Arc3D from raster images were compared with those obtained using point clouds from the laser scanner. We also analyzed the conditions for an optimal use of Arc3D: environmental conditions (lighting), acquisition tools (digital cameras), and the type and size of objects. After performing the tests described above, we analyzed the models generated by Arc3D to check what other graphic representations can be obtained from them: orthophotos and drawings

  2. High Frame Rate Synthetic Aperture 3D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Holbek, Simon; Stuart, Matthias Bo

    2016-01-01

    , current volumetric ultrasonic flow methods are limited to one velocity component or restricted to a reduced field of view (FOV), e.g. fixed imaging planes, in exchange for higher temporal resolutions. To solve these problems, a previously proposed accurate 2-D high frame rate vector flow imaging (VFI...

  3. QuickPALM: 3D real-time photoactivation nanoscopy image processing in ImageJ

    CSIR Research Space (South Africa)

    Henriques, R

    2010-05-01

    QuickPALM in conjunction with the acquisition of control features provides a complete solution for the acquisition, reconstruction and visualization of 3D PALM or STORM images, achieving resolutions of ~40 nm in real time. This software package...

  4. The Mathematical Foundations of 3D Compton Scatter Emission Imaging

    Directory of Open Access Journals (Sweden)

    T. T. Truong

    2007-01-01

    The mathematical principles of tomographic imaging using detected (unscattered) X- or gamma-rays are based on the two-dimensional Radon transform and many of its variants. In this paper, we show that two new generalizations, called conical Radon transforms, are related to three-dimensional imaging processes based on detected Compton-scattered radiation. The first class of conical Radon transform has been introduced recently to support the imaging principles of collimated detector systems. The second class is new, is closely related to the Compton camera imaging principles, and is invertible under special conditions. As they are poised to play a major role in future designs of biomedical imaging systems, we present an account of their most important properties which may be relevant for active researchers in the field.

  5. Advanced 3-D Ultrasound Imaging: 3-D Synthetic Aperture Imaging using Fully Addressed and Row-Column Addressed 2-D Transducer Arrays

    DEFF Research Database (Denmark)

    Bouzari, Hamed

    the important diagnostic information in a noninvasive manner. Diagnostic and therapeutic decisions often require accurate estimates of e.g., organ, cyst, or tumor volumes. 3-D ultrasound imaging can provide these measurements without relying on the geometrical assumptions and operator-dependent skills involved...... is one of the factors for the widespread use of ultrasound imaging. The high price tag on the high quality 3-D scanners is limiting their market share. Row-column addressing of 2-D transducer arrays is a low cost alternative to fully addressed 2-D arrays, for 3-D ultrasound imaging. Using row....... Based on a set of acoustical measurements the center frequency, bandwidth, surface pressure, sensitivity, and acoustical cross-talks were evaluated and discussed. The imaging quality assessments were carried out based on Field II simulations as well as phantom measurements. Moreover, an analysis...

  6. Acoustic 3D modeling by the method of integral equations

    Science.gov (United States)

    Malovichko, M.; Khokhlov, N.; Yavich, N.; Zhdanov, M.

    2018-02-01

    This paper presents a parallel algorithm for frequency-domain acoustic modeling by the method of integral equations (IE). The algorithm is applied to seismic simulation. The IE method reduces the size of the problem but leads to a dense system matrix. A tolerable memory consumption and numerical complexity were achieved by applying an iterative solver, accompanied by an effective matrix-vector multiplication operation based on the fast Fourier transform (FFT). We demonstrate that the IE system matrix is better conditioned than that of the finite-difference (FD) method, and discuss its relation to a specially preconditioned FD matrix. We considered several methods of matrix-vector multiplication for the free-space and layered host models. The developed algorithm and computer code were benchmarked against the FD time-domain solution. It was demonstrated that the method can accurately calculate the seismic field for models with sharp material boundaries and a point source and receiver located close to the free surface. We used OpenMP to speed up the matrix-vector multiplication, while MPI was used to speed up the solution of the system of equations, and also for parallelizing across multiple sources. Practical examples and efficiency tests are presented as well.
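
    A 1-D sketch of the FFT-accelerated matrix-vector product at the heart of an iterative IE solver, assuming a toy translation-invariant kernel and a made-up coupling constant; the actual 3-D acoustic Green's function, the layered-host handling, and the MPI/OpenMP parallelization described above are not reproduced.

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        n = 256
        offsets = np.arange(n)
        # Toy translation-invariant kernel standing in for the free-space Green's function.
        g = np.exp(1j * 0.3 * offsets) / (offsets + 1.0)
        # Circulant embedding of g(|i-j|), length 2n, so one FFT product applies the
        # full dense convolution-type matrix in O(n log n).
        c = np.concatenate([g, [0.0], g[:0:-1]])
        K = np.fft.fft(c)

        def matvec(u):
            U = np.fft.fft(u, 2 * n)
            conv = np.fft.ifft(K * U)[:n]
            return u - 0.25 * conv          # operator of the form (I - c*G) acting on the field

        A = LinearOperator((n, n), matvec=matvec, dtype=complex)
        source = np.zeros(n, dtype=complex); source[n // 2] = 1.0
        u, info = gmres(A, source)
        print("GMRES converged" if info == 0 else f"info = {info}")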

  7. 4-mm-diameter three-dimensional imaging endoscope with steerable camera for minimally invasive surgery (3-D-MARVEL).

    Science.gov (United States)

    Bae, Sam Y; Korniski, Ronald J; Shearn, Michael; Manohara, Harish M; Shahinian, Hrayr

    2017-01-01

    High-resolution three-dimensional (3-D) imaging (stereo imaging) by endoscopes in minimally invasive surgery, especially in space-constrained applications such as brain surgery, is one of the most desired capabilities. Such capability exists at overall diameters larger than 4 mm. We report the development of a stereo imaging endoscope of 4-mm maximum diameter, called the Multiangle, Rear-Viewing Endoscopic Tool (MARVEL), that uses a single-lens system with complementary multi-bandpass filter (CMBF) technology to achieve 3-D imaging. In addition, the system is endowed with the capability to pan from side to side over an angle of [Formula: see text], which is another unique aspect of MARVEL for this class of endoscopes. The design and construction of a single-lens CMBF aperture camera with integrated illumination to generate 3-D images, and the actuation mechanism built into it, are summarized.

  8. OMEGAPIX 3D integrated circuit prototype dedicated to the ATLAS upgrade Super LHC pixel project

    CERN Document Server

    Thienpont, D; de La Taille, C; Seguin-Moreau, N; Martin-Chassard, G; Guo b, Y

    2009-01-01

    In late 2008, an international consortium for the development of vertically integrated (3D) readout electronics was created to explore the features available from this technology. In this paper, the OMEGAPIX circuit is presented. It is the first front-end ASIC prototype designed at LAL in a 3D technology, and it was submitted in May 2009. First, a short reminder of 3D technology is given. Then the IC design is explained: analogue tier, digital tier and testability.

  9. Analytic 3D image reconstruction using all detected events

    International Nuclear Information System (INIS)

    Kinahan, P.E.; Rogers, J.G.

    1988-11-01

    We present the results of testing a previously presented algorithm for three-dimensional image reconstruction that uses all gamma-ray coincidence events detected by a PET volume-imaging scanner. By using two iterations of an analytic filter-backprojection method, the algorithm is not constrained by the requirement of a spatially invariant detector point spread function, which limits normal analytic techniques. Removing this constraint allows the incorporation of all detected events, regardless of orientation, which improves the statistical quality of the final reconstructed image

  10. Three-Dimensional Integrated Circuit (3D IC) Key Technology: Through-Silicon Via (TSV).

    Science.gov (United States)

    Shen, Wen-Wei; Chen, Kuan-Neng

    2017-12-01

    3D integration with through-silicon via (TSV) is a promising candidate to perform system-level integration with smaller package size, higher interconnection density, and better performance. TSV fabrication is the key technology to permit communications between various strata of the 3D integration system. TSV fabrication steps, such as etching, isolation, metallization processes, and related failure modes, as well as other characterizations are discussed in this invited review paper.

  11. Multiresolution 3-D reconstruction from side-scan sonar images.

    Science.gov (United States)

    Coiras, Enrique; Petillot, Yvan; Lane, David M

    2007-02-01

    In this paper, a new method for the estimation of seabed elevation maps from side-scan sonar images is presented. The side-scan image formation process is represented by a Lambertian diffuse model, which is then inverted by a multiresolution optimization procedure inspired by expectation-maximization to account for the characteristics of the imaged seafloor region. On convergence of the model, approximations for seabed reflectivity, side-scan beam pattern, and seabed altitude are obtained. The performance of the system is evaluated against a real structure of known dimensions. Reconstruction results for images acquired by different sonar sensors are presented. Applications to augmented reality for the simulation of targets in sonar imagery are also discussed.
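
    A 1-D least-squares sketch of inverting the Lambertian shading model described above, with a hypothetical flat beam pattern and unit reflectivity; the paper's multiresolution, expectation-maximization-inspired procedure and its simultaneous estimation of reflectivity and beam pattern are not reproduced.

        import numpy as np
        from scipy.optimize import minimize

        def forward(elev, ground_range, sonar_height=10.0, reflectivity=1.0):
            # Lambertian model for one scanline: intensity ~ reflectivity * cos(incidence).
            dz = np.gradient(elev, ground_range)            # local seabed slope
            normals = np.stack([-dz, np.ones_like(dz)])     # (range, up) surface normals
            normals /= np.linalg.norm(normals, axis=0)
            rays = np.stack([ground_range, elev - sonar_height])  # sonar-to-seabed rays
            rays /= np.linalg.norm(rays, axis=0)
            cos_inc = np.clip(-(rays * normals).sum(axis=0), 0.0, 1.0)
            return reflectivity * cos_inc

        def invert(observed, ground_range):
            # Fit an elevation profile so the modelled scanline matches the observation.
            def cost(elev):
                return np.sum((forward(elev, ground_range) - observed) ** 2)
            return minimize(cost, np.zeros_like(observed), method="L-BFGS-B").x

        r = np.linspace(5.0, 50.0, 64)
        truth = 0.5 * np.sin(r / 5.0)
        est = invert(forward(truth, r), r)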

  12. Cytology 3D structure formation based on optical microscopy images

    Science.gov (United States)

    Pronichev, A. N.; Polyakov, E. V.; Shabalova, I. P.; Djangirova, T. V.; Zaitsev, S. M.

    2017-01-01

    The article is devoted to optimization of the imaging parameters of biological preparations in optical microscopy using a multispectral camera in the visible range of electromagnetic radiation. A model for the image formation of virtual preparations was proposed. The optimum number of layers for the depth scan of the object and for a holistic perception of its switching was determined from the results of the experiment.

  13. Cytology 3D structure formation based on optical microscopy images

    International Nuclear Information System (INIS)

    Pronichev, A N; Polyakov, E V; Zaitsev, S M; Shabalova, I P; Djangirova, T V

    2017-01-01

    The article is devoted to optimization of the imaging parameters of biological preparations in optical microscopy using a multispectral camera in the visible range of electromagnetic radiation. A model for the image formation of virtual preparations was proposed. The optimum number of layers for the depth scan of the object and for a holistic perception of its switching was determined from the results of the experiment. (paper)

  14. Pragmatic fully 3D image reconstruction for the MiCES mouse imaging PET scanner

    International Nuclear Information System (INIS)

    Lee, Kisung; Kinahan, Paul E; Fessler, Jeffrey A; Miyaoka, Robert S; Janes, Marie; Lewellen, Tom K

    2004-01-01

    We present a pragmatic approach to image reconstruction for data from the micro crystal elements system (MiCES) fully 3D mouse imaging positron emission tomography (PET) scanner under construction at the University of Washington. Our approach is modelled on fully 3D image reconstruction used in clinical PET scanners, which is based on Fourier rebinning (FORE) followed by 2D iterative image reconstruction using ordered-subsets expectation-maximization (OSEM). The use of iterative methods allows modelling of physical effects (e.g., statistical noise, detector blurring, attenuation, etc), while FORE accelerates the reconstruction process by reducing the fully 3D data to a stacked set of independent 2D sinograms. Previous investigations have indicated that non-stationary detector point-spread response effects, which are typically ignored for clinical imaging, significantly impact image quality for the MiCES scanner geometry. To model the effect of non-stationary detector blurring (DB) in the FORE+OSEM(DB) algorithm, we have added a factorized system matrix to the ASPIRE reconstruction library. Initial results indicate that the proposed approach produces an improvement in resolution without an undue increase in noise and without a significant increase in the computational burden. The impact on task performance, however, remains to be evaluated
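
    The following sketch illustrates the FORE+OSEM(DB) idea on a single rebinned 2-D sinogram: ordered subsets of projection angles and a detector-blur operator folded into a factorized system model. The per-angle projectors and the stationary Gaussian blur are assumptions for illustration; the paper's detector response is non-stationary and its ASPIRE-based implementation differs.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def osem_db(sino, A, n_subsets=4, n_iter=2, blur_sigma=1.0):
            # sino : sinogram of shape (n_angles, n_bins)
            # A    : list of per-angle projectors, A[k] of shape (n_bins, n_pixels)
            n_angles, n_bins = sino.shape
            n_pix = A[0].shape[1]
            x = np.ones(n_pix)
            subsets = [range(s, n_angles, n_subsets) for s in range(n_subsets)]

            def blur(p):
                # Stationary Gaussian stand-in for the detector blur B (B = B^T here).
                return gaussian_filter1d(p, blur_sigma)

            for _ in range(n_iter):
                for sub in subsets:
                    num = np.zeros(n_pix)
                    sens = np.zeros(n_pix)
                    for k in sub:
                        fwd = blur(A[k] @ x)                          # blurred forward projection
                        num += A[k].T @ blur(sino[k] / np.maximum(fwd, 1e-9))
                        sens += A[k].T @ blur(np.ones(n_bins))
                    x *= num / np.maximum(sens, 1e-9)
            return x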

  15. MR imaging in epilepsy with use of 3D MP-RAGE

    International Nuclear Information System (INIS)

    Tanaka, Akio; Ohno, Sigeru; Sei, Tetsuro; Kanazawa, Susumu; Yasui, Koutaro; Kuroda, Masahiro; Hiraki, Yoshio; Oka, Eiji

    1996-01-01

    The patients were 40 males and 33 females; their ages ranged from 1 month to 39 years (mean: 15.7 years). The patients underwent MR imaging, including spin-echo T1-weighted, turbo spin-echo proton density/T2-weighted, and 3D magnetization-prepared rapid gradient-echo (3D MP-RAGE) images. These examinations disclosed 39 focal abnormalities. On visual evaluation, the boundary of abnormal gray matter in the neuronal migration disorder (NMD) cases was most clearly shown on 3D MP-RAGE images as compared to the other images. This is considered to be due to the higher spatial resolution and better contrast of the 3D MP-RAGE images compared with the other techniques. The relative contrast difference between abnormal gray matter and the adjacent white matter was also assessed. The results revealed that the contrast differences on the 3D MP-RAGE images were larger than those on the other images; this was statistically significant. Although the sensitivity of 3D MP-RAGE for NMD was not specifically evaluated in this study, the possibility of this disorder, in cases suspected on other images, could be ruled out. Thus, it appears that the specificity with respect to NMD was at least increased with the use of 3D MP-RAGE. 3D MP-RAGE also enabled us to build three-dimensional surface models that were helpful in understanding the three-dimensional anatomy. Furthermore, 3D MP-RAGE was considered to be the best technique for evaluating hippocampal atrophy in patients with MTS. On the other hand, the sensitivity to signal change of the hippocampus was higher on T2-weighted images. In addition, the demonstration of cortical tubers of tuberous sclerosis in neurocutaneous syndrome was superior on T2-weighted images compared with 3D MP-RAGE images. (K.H.)

  16. Analysis of information for cerebrovascular disorders obtained by 3D MR imaging

    International Nuclear Information System (INIS)

    Yoshikawa, Kohki; Yoshioka, Naoki; Watanabe, Fumio; Shiono, Takahiro; Sugishita, Morihiro; Umino, Kazunori.

    1995-01-01

    Recently, it has become easy to analyze information obtained by 3D MR imaging owing to remarkable progress in fast MR imaging techniques and analysis tools. Six patients suffering from aphasia (4 cerebral infarctions and 2 haemorrhages) underwent 3D MR imaging (3D FLASH - TR/TE/flip angle: 20-50 msec/6-10 msec/20-30 degrees), and the volume information was analyzed by multiple projection reconstruction (MPR), surface-rendering 3D reconstruction, and volume-rendering 3D reconstruction using Volume Design PRO (Medical Design Co., Ltd.). Four of them were clinically diagnosed as having Broca's aphasia, and their lesions could be detected around the cortices of the left inferior frontal gyrus. The other 2 patients were diagnosed as having Wernicke's aphasia, and the lesions could be detected around the cortices of the left supramarginal gyrus. This technique for 3D volume analysis would provide quite exact locational information about cerebral cortical lesions. (author)

  17. Analysis of information for cerebrovascular disorders obtained by 3D MR imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yoshikawa, Kohki [Tokyo Univ. (Japan). Inst. of Medical Science; Yoshioka, Naoki; Watanabe, Fumio; Shiono, Takahiro; Sugishita, Morihiro; Umino, Kazunori

    1995-12-01

    Recently, it has become easy to analyze information obtained by 3D MR imaging owing to remarkable progress in fast MR imaging techniques and analysis tools. Six patients suffering from aphasia (4 cerebral infarctions and 2 haemorrhages) underwent 3D MR imaging (3D FLASH - TR/TE/flip angle: 20-50 msec/6-10 msec/20-30 degrees), and the volume information was analyzed by multiple projection reconstruction (MPR), surface-rendering 3D reconstruction, and volume-rendering 3D reconstruction using Volume Design PRO (Medical Design Co., Ltd.). Four of them were clinically diagnosed as having Broca's aphasia, and their lesions could be detected around the cortices of the left inferior frontal gyrus. The other 2 patients were diagnosed as having Wernicke's aphasia, and the lesions could be detected around the cortices of the left supramarginal gyrus. This technique for 3D volume analysis would provide quite exact locational information about cerebral cortical lesions. (author).

  18. Radar Imaging of Spheres in 3D using MUSIC

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, D H; Berryman, J G

    2003-01-21

    We have shown that multiple spheres can be imaged by linear and planar EM arrays using only one component of polarization. The imaging approach involves calculating the SVD of the scattering response matrix, selecting a subset of singular values that represents noise, and evaluating the MUSIC functional. The noise threshold applied to the spectrum of singular values for optimal performance is typically around 1%. The resulting signal subspace includes more than one singular value per sphere. The presence of reflections from the ground improves height localization, even for a linear array parallel to the ground. However, the interference between direct and reflected energy modulates the field, creating periodic nulls that can obscure targets in typical images. These nulls are largely eliminated by normalizing the MUSIC functional with the broadside beam pattern of the array. The resulting images show excellent localization for 1 and 2 spheres. The performance for the 3-sphere configurations is complicated by shadowing effects and the greater range of the 3rd sphere in case 2. Two of the three spheres are easily located by MUSIC but the third is difficult to distinguish from other local maxima of the complex imaging functional. Improvement is seen when the linear array is replaced with a planar array, which increases the effective aperture height. Further analysis of the singular values and their relationship to modes of scattering from the spheres, as well as better ways to exploit polarization, should improve performance. Work along these lines is currently being pursued by the authors.
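
    A minimal sketch of the MUSIC evaluation described above, assuming a generic free-space steering vector; the ground-reflection model and the broadside beam-pattern normalization used in the paper are omitted.

        import numpy as np

        def music_image(K, steering, n_noise):
            # K        : N x N multistatic response matrix at one frequency
            # steering : function mapping a trial point r -> length-N steering vector g(r)
            # n_noise  : number of singular vectors assigned to the noise subspace
            U, s, _ = np.linalg.svd(K)
            noise = U[:, K.shape[0] - n_noise:]          # noise-subspace basis
            def pseudospectrum(r):
                g = steering(r)
                g = g / np.linalg.norm(g)
                # Large where g(r) is (nearly) orthogonal to the noise subspace, i.e. at targets.
                return 1.0 / (np.linalg.norm(noise.conj().T @ g) ** 2 + 1e-12)
            return pseudospectrum

        def make_steering(elems, k0):
            # Hypothetical free-space steering vector for element positions `elems`
            # (array of shape (N, 3)) and wavenumber k0.
            def g(r):
                d = np.linalg.norm(elems - np.asarray(r), axis=1)
                return np.exp(1j * k0 * d) / d
            return g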

  19. Towards functional 3D T-ray imaging

    International Nuclear Information System (INIS)

    Ferguson, Bradley; Wang, Shaohong; Gray, Doug; Abbott, Derek; Zhang, X-C

    2002-01-01

    We review the recent development of T-ray computed tomography, a terahertz imaging technique that allows the reconstruction of the three-dimensional refractive index profile of weakly scattering objects. Terahertz pulse imaging is used to obtain images of the target at multiple projection angles and the filtered backprojection algorithm enables the reconstruction of the object's frequency-dependent refractive index. The application of this technique to a biological bone sample and a plastic test structure is demonstrated. The structure of each target is accurately resolved and the frequency-dependent refractive index is determined. The frequency-dependent information may potentially be used to extract functional information from the target, to uniquely identify different materials or to diagnose medical conditions

  20. A monthly quality assurance procedure for 3D surface imaging.

    Science.gov (United States)

    Wooten, H Omar; Klein, Eric E; Gokhroo, Garima; Santanam, Lakshmi

    2010-12-21

    A procedure for periodic quality assurance of a video surface imaging system is introduced. AlignRT is a video camera-based patient localization system that captures and compares images of a patient's topography to a DICOM-formatted external contour, then calculates shifts required to accurately reposition the patient. This technical note describes the tools and methods implemented in our department to verify correct and accurate operation of the AlignRT hardware and software components. The procedure described is performed monthly and complements a daily calibration of the system.

  1. Integrating 3D seismic curvature and curvature gradient attributes for fracture characterization: Methodologies and interpretational implications

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Dengliang

    2013-03-01

    In 3D seismic interpretation, curvature is a popular attribute that depicts the geometry of seismic reflectors and has been widely used to detect faults in the subsurface; however, it provides only part of the solutions to subsurface structure analysis. This study extends the curvature algorithm to a new curvature gradient algorithm, and integrates both algorithms for fracture detection using a 3D seismic test data set over Teapot Dome (Wyoming). In fractured reservoirs at Teapot Dome known to be formed by tectonic folding and faulting, curvature helps define the crestal portion of the reservoirs that is associated with strong seismic amplitude and high oil productivity. In contrast, curvature gradient helps better define the regional northwest-trending and the cross-regional northeast-trending lineaments that are associated with weak seismic amplitude and low oil productivity. In concert with previous reports from image logs, cores, and outcrops, the current study based on an integrated seismic curvature and curvature gradient analysis suggests that curvature might help define areas of enhanced potential to form tensile fractures, whereas curvature gradient might help define zones of enhanced potential to develop shear fractures. In certain fractured reservoirs such as at Teapot Dome where faulting and fault-related folding contribute dominantly to the formation and evolution of fractures, curvature and curvature gradient attributes can be potentially applied to differentiate fracture mode, to predict fracture intensity and orientation, to detect fracture volume and connectivity, and to model fracture networks.
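
    As a hedged, simplified stand-in for the volumetric attributes discussed above, the sketch below computes the mean curvature of a gridded horizon and the magnitude of its lateral gradient with finite differences; the synthetic horizon and grid spacing are illustrative assumptions, not the Teapot Dome data.

    import numpy as np

    def curvature_attributes(z, dx=1.0, dy=1.0):
        """Mean curvature of a gridded horizon z(y, x) and the magnitude of its gradient."""
        zy, zx = np.gradient(z, dy, dx)                  # first derivatives
        zyy, _ = np.gradient(zy, dy, dx)                 # second derivatives
        zxy, zxx = np.gradient(zx, dy, dx)

        denom = 2.0 * (1.0 + zx**2 + zy**2) ** 1.5
        kmean = ((1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy) / denom

        ky, kx = np.gradient(kmean, dy, dx)              # "curvature gradient" attribute
        return kmean, np.hypot(kx, ky)

    yy, xx = np.mgrid[0:200, 0:200]
    horizon = 0.002 * (xx - 100) ** 2 + 5.0 * np.sin(yy / 15.0)   # synthetic folded horizon
    kmean, kgrad = curvature_attributes(horizon, dx=25.0, dy=25.0)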

  2. Effects of intra-operative fluoroscopic 3D-imaging on peri-operative imaging strategy in calcaneal fracture surgery.

    Science.gov (United States)

    Beerekamp, M S H; Backes, M; Schep, N W L; Ubbink, D T; Luitse, J S; Schepers, T; Goslings, J C

    2017-12-01

    Previous studies demonstrated that intra-operative fluoroscopic 3D-imaging (3D-imaging) in calcaneal fracture surgery is promising to prevent revision surgery and save costs. However, these studies limited their focus to corrections performed after 3D-imaging, thereby neglecting corrections after intra-operative fluoroscopic 2D-imaging (2D-imaging). The aim of this study was to assess the effects of additional 3D-imaging on intra-operative corrections, peri-operative imaging used, and patient-relevant outcomes compared to 2D-imaging alone. In this before-after study, data of adult patients who underwent open reduction and internal fixation (ORIF) of a calcaneal fracture between 2000 and 2014 in our level-I trauma center were collected. 3D-imaging (BV Pulsera with 3D-RX, Philips Healthcare, Best, The Netherlands) was available as of 2007 at the surgeons' discretion. Patient and fracture characteristics, peri-operative imaging, intra-operative corrections and patient-relevant outcomes were collected from the hospital databases. Patients in whom additional 3D-imaging was applied were compared to those undergoing 2D-imaging alone. A total of 231 patients were included, of whom 107 (46%) were operated on with the use of 3D-imaging. No significant differences were found in baseline characteristics. The median duration of surgery was significantly longer when using 3D-imaging (2:08 vs. 1:54 h; p = 0.002). Corrections after additional 3D-imaging were performed in 53% of the patients. However, significantly fewer corrections were made after 2D-imaging when 3D-imaging was available (Risk difference (RD) -15%; 95% Confidence interval (CI) -29 to -2). Peri-operative imaging, besides intra-operative 3D-imaging, and patient-relevant outcomes were similar between groups. Intra-operative 3D-imaging provides additional information resulting in additional corrections. Moreover, 3D-imaging probably changed the surgeons' attitude to rely more on 3D-imaging, hence a 15%-decrease of
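
    For readers who want to reproduce the kind of effect estimate quoted above, the hedged sketch below computes a risk difference between two groups with a standard Wald 95% confidence interval; the counts are illustrative placeholders, not the study data.

    import math

    def risk_difference(events_a, n_a, events_b, n_b, z=1.96):
        """Risk difference (p_a - p_b) with a Wald 95% confidence interval."""
        p_a, p_b = events_a / n_a, events_b / n_b
        rd = p_a - p_b
        se = math.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)
        return rd, (rd - z * se, rd + z * se)

    # Illustrative counts only (not taken from the study)
    rd, ci = risk_difference(events_a=30, n_a=107, events_b=55, n_b=124)
    print(f"RD = {rd:.1%}, 95% CI = ({ci[0]:.1%}, {ci[1]:.1%})")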

  3. GOTHIC CHURCHES IN PARIS ST GERVAIS ET ST PROTAIS IMAGE MATCHING 3D RECONSTRUCTION TO UNDERSTAND THE VAULTS SYSTEM GEOMETRY

    Directory of Open Access Journals (Sweden)

    M. Capone

    2015-02-01

    benefits and the troubles. From a methodological point of view this is our workflow: - theoretical study about geometrical configuration of rib vault systems; - 3D model based on theoretical hypothesis about geometric definition of the vaults' form; - 3D model based on image matching 3D reconstruction methods; - comparison between 3D theoretical model and 3D model based on image matching;

  4. 3D mutifractal analysis of cerebral tomoscintigraphy images

    International Nuclear Information System (INIS)

    Lopes, R.; Dubois, P.; Dewalle, A.S.; Betrouni, N.; Steinling, M.; Maouche, S.

    2007-01-01

    In this study, we describe the preliminary results of a tool to assist diagnosis in the characterization of pathological cases of epilepsy using cerebral tomoscintigraphy images. The tool is based on the use of multifractal modelling to detect local changes of homogeneity. (orig.)

  5. 3D microscopic imaging and evaluation of tubular tissue architecture

    Czech Academy of Sciences Publication Activity Database

    Janáček, Jiří; Čapek, Martin; Michálek, Jan; Karen, Petr; Kubínová, Lucie

    2014-01-01

    Vol. 63, Suppl. 1 (2014), S49-S55 ISSN 0862-8408 R&D Projects: GA MŠk(CZ) LH13028; GA ČR(CZ) GA13-12412S Institutional support: RVO:67985823 Keywords: confocal microscopy * capillaries * brain * skeletal muscle * image analysis Subject RIV: EA - Cell Biology Impact factor: 1.293, year: 2014

  6. Revolving SEM images visualising 3D taxonomic characters

    DEFF Research Database (Denmark)

    Akkari, Nesrine; Cheung, David Koon-Bong; Enghoff, Henrik

    2013-01-01

    images taken consecutively while rotating the SEM stage 360°, which allows the structure in question to be seen from all angles of view in one plane. Seven new species of the genus Ommatoiulus collected in Tunisia are described: O. chambiensis, O. crassinigripes, O. kefi, O. khroumiriensis, O. xerophilus...

  7. State-of-The-Art and Applications of 3D Imaging Sensors in Industry, Cultural Heritage, Medicine, and Criminal Investigation.

    Science.gov (United States)

    Sansoni, Giovanna; Trebeschi, Marco; Docchio, Franco

    2009-01-01

    3D imaging sensors for the acquisition of three-dimensional (3D) shapes have created, in recent years, a considerable degree of interest for a number of applications. The miniaturization and integration of the optical and electronic components used to build them have played a crucial role in the achievement of compactness, robustness and flexibility of the sensors. Today, several 3D sensors are available on the market, even in combination with other sensors in a "sensor fusion" approach. Of equal importance to physical miniaturization is the portability of the measurements, via suitable interfaces, into software environments designed for their elaboration, e.g., CAD-CAM systems, virtual renderers, and rapid prototyping tools. In this paper, following an overview of the state of the art of 3D imaging sensors, a number of significant examples of their use are presented, with particular reference to industry, heritage, medicine, and criminal investigation applications.

  8. Comparison of post-contrast 3D-T1-MPRAGE, 3D-T1-SPACE and 3D-T2-FLAIR MR images in evaluation of meningeal abnormalities at 3-T MRI.

    Science.gov (United States)

    Jeevanandham, Balaji; Kalyanpur, Tejas; Gupta, Prashant; Cherian, Mathew

    2017-06-01

    This study was to assess the usefulness of newer three-dimensional (3D) T1 sampling perfection with application optimized contrast using different flip-angle evolutions (SPACE) and 3D T2 fluid-attenuated inversion recovery (FLAIR) sequences in evaluation of meningeal abnormalities. 78 patients who presented with high suspicion of meningeal abnormalities were evaluated using post-contrast 3D-T2-FLAIR, 3D-T1 magnetization-prepared rapid gradient-echo (MPRAGE) and 3D-T1-SPACE sequences. The images were evaluated independently by two radiologists for cortical gyral, sulcal space, basal cisterns and dural enhancement. The diagnoses were confirmed by further investigations including histopathology. Post-contrast 3D-T1-SPACE and 3D-T2-FLAIR images yielded significantly more information than MPRAGE images (p evaluation of meningeal abnormalities and when used in combination have the maximum sensitivity for leptomeningeal abnormalities. The negative-predictive value is nearly 100%, where no leptomeningeal abnormality was detected on these sequences. Advances in knowledge: Post-contrast 3D-T1-SPACE and 3D-T2-FLAIR images are more useful than 3D-T1-MPRAGE images in evaluation of meningeal abnormalities.

  9. Experiment for Integrating Dutch 3d Spatial Planning and Bim for Checking Building Permits

    Science.gov (United States)

    van Berlo, L.; Dijkmans, T.; Stoter, J.

    2013-09-01

    This paper presents a research project in The Netherlands in which several SMEs collaborated to create a 3D model of the National spatial planning information. This 2D information system described in the IMRO data standard holds implicit 3D information that can be used to generate an explicit 3D model. The project realized a proof of concept to generate a 3D spatial planning model. The team used the model to integrate it with several 3D Building Information Models (BIMs) described in the open data standard Industry Foundation Classes (IFC). Goal of the project was (1) to generate a 3D BIM model from spatial planning information to be used by the architect during the early design phase, and (2) allow 3D checking of building permits. The team used several technologies like CityGML, BIM clash detection and GeoBIM to explore the potential of this innovation. Within the project a showcase was created with a part of the spatial plan from the city of The Hague. Several BIM models were integrated in the 3D spatial plan of this area. A workflow has been described that demonstrates the benefits of collaboration between the spatial domain and the AEC industry in 3D. The research results in a showcase with conclusions and considerations for both national and international practice.

  10. An Integrated Simplification Approach for 3D Buildings with Sloped and Flat Roofs

    Directory of Open Access Journals (Sweden)

    Jinghan Xie

    2016-07-01

    Simplification of three-dimensional (3D) buildings is critical to improving the efficiency of visualizing urban environments while ensuring realistic urban scenes. Moreover, it underpins the construction of multi-scale 3D city models (3DCMs), which can be applied to study various urban issues. In this paper, we design a generic yet effective approach for simplifying 3D buildings. Instead of relying on both semantic information and geometric information, our approach is based solely on geometric information, as many 3D buildings still do not include semantic information. In addition, it provides an integrated means to treat 3D buildings with either sloped or flat roofs. The two case studies, one exploring the simplification of individual 3D buildings at varying levels of complexity and the other investigating the multi-scale simplification of a cityscape, show the effectiveness of our approach.

  11. Accuracy of 3D Imaging Software in Cephalometric Analysis

    Science.gov (United States)

    2013-06-21

    ... the accurate measurement of external apical root resorption (EARR). Lund, Grondahl and Grondahl (2010) reported that CBCT images of root length ... Cited works include: Lund H, Grondahl K, Grondahl HG (2010), Cone beam computed tomography for assessment of root length and marginal bone level during orthodontic treatment; a study of root angulation using panoramic and cone beam CT, Angle Orthodontist, 77(2), 206-213; and Periago DR, Scarfe WC, Moshiri M, Scheetz JP, Silveira AM.

  12. Imaging of human differentiated 3D neural aggregates using light sheet fluorescence microscopy

    Science.gov (United States)

    Gualda, Emilio J.; Simão, Daniel; Pinto, Catarina; Alves, Paula M.; Brito, Catarina

    2014-01-01

    The development of three-dimensional (3D) cell cultures represents a big step towards a better understanding of cell behavior and disease in a more natural-like environment, providing not only single but multiple cell type interactions in a complex 3D matrix that highly resembles physiological conditions. Light sheet fluorescence microscopy (LSFM) is becoming an excellent tool for fast imaging of such 3D biological structures. We demonstrate the potential of this technique for the imaging of human differentiated 3D neural aggregates in fixed and live samples, namely calcium imaging and cell death processes, showing the power of this imaging modality compared with traditional microscopy. The combination of light sheet microscopy and 3D neural cultures will open the door to more challenging experiments involving drug testing at large scale as well as a better understanding of relevant biological processes in a more realistic environment. PMID:25161607

  13. Remote laboratory for phase-aided 3D microscopic imaging and metrology

    Science.gov (United States)

    Wang, Meng; Yin, Yongkai; Liu, Zeyi; He, Wenqi; Li, Boqun; Peng, Xiang

    2014-05-01

    In this paper, the establishment of a remote laboratory for phase-aided 3D microscopic imaging and metrology is presented. The proposed remote laboratory consists of three major components: the network-based infrastructure for remote control and data management, the identity verification scheme for user authentication and management, and the local experimental system for phase-aided 3D microscopic imaging and metrology. Virtual network computing (VNC) is introduced to remotely control the 3D microscopic imaging system. Data storage and management are handled through the open source project eSciDoc. To ensure the security of the remote laboratory, fingerprints are used for authentication with an optical joint transform correlation (JTC) system. The phase-aided fringe projection 3D microscope (FP-3DM), which can be remotely controlled, is employed to achieve the 3D imaging and metrology of micro objects.
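
    As a hedged sketch of the phase-aided principle behind fringe projection (the FP-3DM's actual processing may differ), the fragment below recovers a wrapped phase map from four projected fringe images with the standard four-step phase-shifting formula and then unwraps it along the fringe direction.

    import numpy as np

    def four_step_phase(i0, i1, i2, i3):
        """Wrapped phase from four fringe images with phase shifts 0, pi/2, pi, 3*pi/2."""
        return np.arctan2(i3 - i1, i0 - i2)

    # Synthetic fringes for a tilted surface (illustrative only)
    x = np.linspace(0.0, 8.0 * np.pi, 512)
    true_phase = np.tile(x, (64, 1))
    shots = [1.0 + 0.5 * np.cos(true_phase + k * np.pi / 2) for k in range(4)]

    wrapped = four_step_phase(*shots)
    unwrapped = np.unwrap(wrapped, axis=1)      # simple 1D unwrapping along the fringes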

  14. 3D-Printed Disposable Wireless Sensors with Integrated Microelectronics for Large Area Environmental Monitoring

    KAUST Repository

    Farooqui, Muhammad Fahad; Karimi, Muhammad Akram; Salama, Khaled N.; Shamim, Atif

    2017-01-01

    disposable, compact, dispersible 3D-printed wireless sensor nodes with integrated microelectronics which can be dispersed in the environment and work in conjunction with few fixed nodes for large area monitoring applications. As a proof of concept

  15. Simulation study of a 3-D device integrating FinFET and UTBFET

    KAUST Repository

    Fahad, Hossain M.; Hu, Chenming; Hussain, Muhammad Mustafa

    2015-01-01

    By integrating 3-D nonplanar fins and 2-D ultrathin bodies, wavy FinFETs merge two formerly competing technologies on a silicon-on-insulator platform to deliver enhanced transistor performance compared with conventional trigate Fin

  16. Transient Thermal Analysis of 3-D Integrated Circuits Packages by the DGTD Method

    KAUST Repository

    Li, Ping; Dong, Yilin; Tang, Min; Mao, Junfa; Jiang, Li Jun; Bagci, Hakan

    2017-01-01

    Since accurate thermal analysis plays a critical role in the thermal design and management of the 3-D system-level integration, in this paper, a discontinuous Galerkin time-domain (DGTD) algorithm is proposed to achieve this purpose

  17. AN IMAGE-BASED TECHNIQUE FOR 3D BUILDING RECONSTRUCTION USING MULTI-VIEW UAV IMAGES

    Directory of Open Access Journals (Sweden)

    F. Alidoost

    2015-12-01

    Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects of complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior orientation parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building based on visual assessment.
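
    A hedged sketch of the dense matching step is given below using OpenCV's semi-global block matcher on a rectified stereo pair; the file names and matcher parameters are illustrative assumptions, and the paper's full photogrammetric pipeline (pose estimation, triangulation, meshing, texturing) is not reproduced here.

    import cv2
    import numpy as np

    # Illustrative file names; a rectified stereo pair is assumed
    left = cv2.imread("facade_left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("facade_right.png", cv2.IMREAD_GRAYSCALE)

    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,          # must be divisible by 16
        blockSize=5,
        P1=8 * 5 * 5,                # smoothness penalties (small / large disparity jumps)
        P2=32 * 5 * 5,
        uniquenessRatio=10,
    )

    # StereoSGBM returns fixed-point disparities scaled by 16
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0
    print("fraction of valid pixels:", float((disparity > 0).mean()))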

  18. Detection of tibial condylar fractures using 3D imaging with a mobile image amplifier (Siemens ISO-C-3D): Comparison with plain films and spiral CT

    International Nuclear Information System (INIS)

    Kotsianos, D.; Rock, C.; Wirth, S.; Linsenmaier, U.; Brandl, R.; Fischer, T.; Pfeifer, K.J.; Reiser, M.; Euler, E.; Mutschler, W.

    2002-01-01

    Purpose: To analyze a prototype mobile C-arm 3D image amplifier in the detection and classification of experimental tibial condylar fractures with multiplanar reconstructions (MPR). Method: Human knee specimens (n=22) with tibial condylar fractures were examined with a prototype C-arm (ISO-C-3D, Siemens AG), plain films (CR) and spiral CT (CT). The motorized C-arm provides fluoroscopic images during a 190° orbital rotation, computing a 119 mm data cube. From these 3D data sets, MPR images were obtained. All images were evaluated by four independent readers for the detection and assessment of fracture lines. All fractures were classified according to the Mueller AO classification. To confirm the results, the specimens were finally surgically dissected. Results: 97% of the tibial condylar fractures were easily seen and correctly classified according to the Mueller AO classification on MPR images from the ISO-C-3D. There is no significant difference between the ISO-C-3D and CT in the detection and correct classification of fractures, but the ISO-C-3D is significantly better than CR. (orig.) [de

  19. Multithreaded real-time 3D image processing software architecture and implementation

    Science.gov (United States)

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Towards this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. The player provides user-interactive 3D video playback. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located on the right image through the use of block matching. The difference in position between the corresponding regions in the left and right images is then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread which calls CUDA functions. Image upsampling and shifting are performed in response to user zoom and pan. The player also includes a CPU display thread, which uses OpenGL rendering (quad buffers); this thread also gathers user input for digital zoom and pan and sends it to the processing thread.
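
    A hedged numpy sketch of the convergence-point step described above is shown below: given disparities measured at reliable keypoints, the extrema of their histogram give the scene disparity range, and the views are shifted so that a chosen part of that range lands at zero disparity. Keypoint detection and block matching are omitted, and the target fraction is an illustrative assumption.

    import numpy as np

    def convergence_shift(disparities, target=0.5, bins=64):
        """Horizontal shift (pixels) that places part of the disparity range at screen depth.

        disparities : 1D array of disparities at reliable keypoints
        target      : 0 converges on the nearest content, 1 on the farthest
        """
        hist, edges = np.histogram(disparities, bins=bins)
        occupied = np.nonzero(hist)[0]
        d_min = edges[occupied[0]]                   # extrema of the disparity histogram
        d_max = edges[occupied[-1] + 1]
        return -(d_min + target * (d_max - d_min))   # shift applied between left/right views

    shift = convergence_shift(np.random.default_rng(0).normal(12.0, 3.0, 500))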

  20. 3D-CT imaging processing for qualitative and quantitative analysis of maxillofacial cysts and tumors

    International Nuclear Information System (INIS)

    Cavalcanti, Marcelo de Gusmao Paraiso; Antunes, Jose Leopoldo Ferreira

    2002-01-01

    The objective of this study was to evaluate spiral-computed tomography (3D-CT) images of 20 patients presenting with cysts and tumors in the maxillofacial complex, in order to compare the surface and volume techniques of image rendering. The qualitative and quantitative appraisal indicated that the volume technique allowed a more precise and accurate observation than the surface method. On the average, the measurements obtained by means of the 3D volume-rendering technique were 6.28% higher than those obtained by means of the surface method. The sensitivity of the 3D surface technique was lower than that of the 3D volume technique for all conditions stipulated in the diagnosis and evaluation of lesions. We concluded that the 3D-CT volume rendering technique was more reproducible and sensitive than the 3D-CT surface method, in the diagnosis, treatment planning and evaluation of maxillofacial lesions, especially those with intra-osseous involvement. (author)

  1. 3D-CT imaging processing for qualitative and quantitative analysis of maxillofacial cysts and tumors

    Energy Technology Data Exchange (ETDEWEB)

    Cavalcanti, Marcelo de Gusmao Paraiso [Sao Paulo Univ., SP (Brazil). Faculdade de Odontologia. Dept. de Radiologia; Antunes, Jose Leopoldo Ferreira [Sao Paulo Univ., SP (Brazil). Faculdade de Odotologia. Dept. de Odontologia Social

    2002-09-01

    The objective of this study was to evaluate spiral-computed tomography (3D-CT) images of 20 patients presenting with cysts and tumors in the maxillofacial complex, in order to compare the surface and volume techniques of image rendering. The qualitative and quantitative appraisal indicated that the volume technique allowed a more precise and accurate observation than the surface method. On the average, the measurements obtained by means of the 3D volume-rendering technique were 6.28% higher than those obtained by means of the surface method. The sensitivity of the 3D surface technique was lower than that of the 3D volume technique for all conditions stipulated in the diagnosis and evaluation of lesions. We concluded that the 3D-CT volume rendering technique was more reproducible and sensitive than the 3D-CT surface method, in the diagnosis, treatment planning and evaluation of maxillofacial lesions, especially those with intra-osseous involvement. (author)

  2. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    Science.gov (United States)

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter; hence, it is available to public users and convenient for accessing narrow areas. The acquired images consist of various sculptures and architectures in Wat-Pho, a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the workflow is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching and point cloud processing. For this initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained with the MICMAC (open source) software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, .obj, etc. To obtain efficient 3D models, post-processing techniques such as noise reduction, surface simplification and reconstruction are required for the final results. The reconstructed 3D models can be provided for public access via websites, DVDs or printed materials. Highly accurate 3D models can also be used as reference data for heritage objects that must be restored due to deterioration over their lifetime, natural disasters, etc.

  3. Anesthesiology training using 3D imaging and virtual reality

    Science.gov (United States)

    Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

    1996-04-01

    Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers; both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time and realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

  4. Value of 3-dimensional (3D) imaging in rheumatology

    International Nuclear Information System (INIS)

    Fredy, D.

    1990-01-01

    The whole-body scanner (Exel 2.400) of the Centre Hospitalier Sainte-Anne enables three-dimensional reconstruction, with visualization of the object in its real volume, in less than 10 minutes after taking 20 to 40 radiological sections. The exploration can be complete at all levels. Bone lesions can be shown perfectly, the study of osteoarticular or intraspinal abnormalities is facilitated, and any break in continuity can be detected. A soft-tissue program as well as a colour program enables clear and rapid visualization of organic lesions. Three-dimensional imaging can be of great value in rheumatology. [fr]

  5. 3D Tomographic Image Reconstruction using CUDA C

    International Nuclear Information System (INIS)

    Dominguez, J. S.; Assis, J. T.; Oliveira, L. F. de

    2011-01-01

    This paper presents the study and implementation of software for the three-dimensional reconstruction of images obtained with a tomographic system, using the capabilities of Graphics Processing Units (GPUs). Reconstruction by the filtered back-projection method was developed using CUDA C, for maximum utilization of the processing capabilities of GPUs in solving computational problems with large computational cost that are highly parallelizable. The potential of GPUs is discussed and their advantages for solving this kind of problem are shown. The results in terms of runtime will be compared with non-parallelized implementations and should show a great reduction in processing time. (Author)

  6. A multi-frequency electrical impedance tomography system for real-time 2D and 3D imaging

    Science.gov (United States)

    Yang, Yunjie; Jia, Jiabin

    2017-08-01

    This paper presents the design and evaluation of a configurable, fast multi-frequency Electrical Impedance Tomography (mfEIT) system for real-time 2D and 3D imaging, particularly for biomedical imaging. The system integrates 32 electrode interfaces and the current frequency ranges from 10 kHz to 1 MHz. The system incorporates the following novel features. First, a fully adjustable multi-frequency current source with a current monitoring function is designed. Second, a flexible switching scheme is developed for arbitrary sensing configurations and a semi-parallel data acquisition architecture is implemented for high-frame-rate data acquisition. Furthermore, multi-frequency digital quadrature demodulation is accomplished in a high-capacity Field Programmable Gate Array. Finally, 3D imaging software, Visual Tomography, is developed for real-time 2D and 3D image reconstruction, data analysis, and visualization. The mfEIT system is systematically tested and evaluated in terms of signal-to-noise ratio (SNR), frame rate, and 2D and 3D multi-frequency phantom imaging. The highest SNR is 82.82 dB on a 16-electrode sensor. The frame rate is up to 546 fps in serial mode and 1014 fps in semi-parallel mode. The evaluation results indicate that the presented mfEIT system is a powerful tool for real-time 2D and 3D imaging.
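
    A hedged sketch of the digital quadrature demodulation step is given below: the sampled boundary voltage is multiplied by in-phase and quadrature references at the excitation frequency and low-pass filtered by averaging to recover amplitude and phase. In the real instrument this runs inside the FPGA, and the sampling rates used here are illustrative assumptions.

    import numpy as np

    def quadrature_demodulate(samples, f_carrier, f_sample):
        """Amplitude and phase of a sampled voltage at one excitation frequency."""
        t = np.arange(samples.size) / f_sample
        i = 2.0 * np.mean(samples * np.cos(2 * np.pi * f_carrier * t))   # averaging = low-pass
        q = 2.0 * np.mean(samples * np.sin(2 * np.pi * f_carrier * t))
        return np.hypot(i, q), np.arctan2(-q, i)

    fs, fc = 10e6, 100e3                      # illustrative sample and excitation frequencies
    t = np.arange(2000) / fs                  # an integer number of carrier periods
    v = 0.2 * np.cos(2 * np.pi * fc * t + 0.3) + 0.01 * np.random.randn(t.size)
    amp, phase = quadrature_demodulate(v, fc, fs)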

  7. Synthetic Microwave Imaging Reflectometry diagnostic using 3D FDTD Simulations

    Science.gov (United States)

    Kruger, Scott; Jenkins, Thomas; Smithe, David; King, Jacob; NIMROD Team

    2017-10-01

    Microwave Imaging Reflectometry (MIR) has become a standard diagnostic for understanding tokamak edge perturbations, including the edge harmonic oscillations in QH-mode operation. These long-wavelength perturbations are larger than the normal turbulent fluctuation levels, and thus the usual analysis of synthetic signals becomes more difficult. To investigate this, we construct a synthetic MIR diagnostic for exploring density fluctuation amplitudes in the tokamak plasma edge by using the three-dimensional, full-wave FDTD code Vorpal. The source microwave beam for the diagnostic is generated and reflected at the cutoff surface, which is distorted by 2D density fluctuations in the edge plasma. Synthetic imaging optics at the detector can be used to understand the fluctuation and background density profiles. We apply the diagnostic to understand the fluctuations in edge plasma density during QH-mode activity in the DIII-D tokamak, as modeled by the NIMROD code. This work was funded under DOE Grant Number DE-FC02-08ER54972.

  8. Parallel 3-D image processing for nuclear emulsion

    International Nuclear Information System (INIS)

    Nakano, Toshiyuki

    2001-01-01

    The history of the nuclear plate is explained. The first nuclear plates in Europe, called pellicles, were covered with 600 μm of emulsion. In Japan, the Emulsion Cloud Chamber (ECC), using thin-emulsion (50 μm) nuclear plates, was developed in 1960. Then the semi-automatic analyzer (1971), the automatic analyzer (1980), and the Track Selector (TS), with memory storing 16 layer images of 512 x 512 x 16 pixels, were developed. Moreover, the NTS (New Track Selector), a faster analyzer, was produced in 1996 for the analysis of the results of the CHORUS experiment. Simultaneous readout of 16 layer images had already been carried out, but the UTS (Ultra Track Selector) made possible the progressive treatment of 16 layers of data and the determination of tracks at all angles. Direct detection of the tau neutrino (ντ) was studied by DONUT (FNAL E872) using the UTS and nuclear plates. The neutrino beam was produced by an 800 GeV proton beam hitting a fixed target. About 1100 neutrino reaction events were observed during six months of irradiation; 203 events were detected. 4 examples are shown in this paper. The OPERA experiment by SK is also explained. (S.Y.)

  9. Towards an Integrated Visualization Of Semantically Enriched 3D City Models: An Ontology of 3D Visualization Techniques

    OpenAIRE

    Métral, Claudine; Ghoula, Nizar; Falquet, Gilles

    2012-01-01

    3D city models - which represent in 3 dimensions the geometric elements of a city - are increasingly used for an intended wide range of applications. Such uses are made possible by using semantically enriched 3D city models and by presenting such enriched 3D city models in a way that allows decision-making processes to be carried out from the best choices among sets of objectives, and across issues and scales. In order to help in such a decision-making process we have defined a framework to f...

  10. 3D seismic imaging of the subsurface for underground construction and drilling

    International Nuclear Information System (INIS)

    Juhlin, Christopher

    2014-01-01

    3D seismic imaging of underground structure has been carried out in various parts of the world for various purposes. Examples shown below were introduced in the presentation. - CO2 storage in Ketzin, Germany; - Mine planning at the Millennium Uranium Deposit in Canada; - Planned Forsmark spent nuclear fuel repository in Sweden; - Exploring the Scandinavian Mountain Belt by Deep Drilling: the COSC drilling project in Sweden. The author explained that seismic methods provide the highest resolution images (5-10 m) of deeper (1-5 km) sub-surfaces in the sedimentary environment, but further improvement is required in crystalline rock environments, and the integration of geology, geophysics, and drilling will provide an optimal interpretation. (author)

  11. Dielectric Spectroscopic Detection of Early Failures in 3-D Integrated Circuits.

    Science.gov (United States)

    Obeng, Yaw; Okoro, C A; Ahn, Jung-Joon; You, Lin; Kopanski, Joseph J

    The commercial introduction of three-dimensional integrated circuits (3D-ICs) has been hindered by reliability challenges, such as stress-related failures, resistivity changes, and unexplained early failures. In this paper, we discuss a new RF-based metrology, based on dielectric spectroscopy, for detecting and characterizing electrically active defects in fully integrated 3D devices. These defects are traceable to the chemistry of the isolation dielectrics used in through-silicon via (TSV) construction. We show that these defects may be responsible for some of the unexplained early reliability failures observed in TSV-enabled 3D devices.

  12. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and related objects such as buildings, trees, vegetation, and man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for virtual 3D city model generation: in the first approach, researchers use sketch-based modeling; the second method is procedural-grammar-based modeling; and the third approach is close range photogrammetry based modeling. A literature study shows that, to date, there is no complete solution available to create a complete 3D city model by using images, and these image-based methods also have limitations. This paper gives a new approach towards image based virtual 3D city modeling by using close range photogrammetry. This approach is divided into three sections: first, the data acquisition process; second, 3D data processing; and third, the data combination process. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area. Image frames were created from the video data, and the minimum required and most suitable video image frames were selected for 3D processing. In the second section, a 3D model of the area was created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding to and merging with other pieces of the large area. Scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created. This 3D model was then transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee. This campus acts as a prototype for a city. Aerial photography is restricted in many countries
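
    A hedged sketch of the frame selection part of the data acquisition step is shown below: frames are extracted from one camera's video at a fixed stride for later photogrammetric processing. The file names and stride are illustrative assumptions, and frame-quality selection (blur, overlap) is not shown.

    import cv2

    def extract_frames(video_path, out_pattern, stride=15):
        """Save every `stride`-th frame of a video as a still image for photogrammetry."""
        cap = cv2.VideoCapture(video_path)
        index, saved = 0, 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % stride == 0:
                cv2.imwrite(out_pattern.format(saved), frame)
                saved += 1
            index += 1
        cap.release()
        return saved

    # Illustrative usage for one camera of the multi-camera rig
    n_frames = extract_frames("camera_0.mp4", "frames/cam0_{:04d}.jpg", stride=15)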

  13. DEM GENERATION FROM HIGH RESOLUTION SATELLITE IMAGES THROUGH A NEW 3D LEAST SQUARES MATCHING ALGORITHM

    Directory of Open Access Journals (Sweden)

    T. Kim

    2012-09-01

    Automated generation of digital elevation models (DEMs) from high resolution satellite images (HRSIs) has been an active research topic for many years. However, stereo matching of HRSIs, in particular based on image-space search, is still difficult due to occlusions and building facades within them. Object-space matching schemes, proposed to overcome these problems, are often very time consuming and sensitive to the dimensions of the voxels. In this paper, we tried a new least squares matching (LSM) algorithm that works in a 3D object space. The algorithm starts with an initial height value at one location of the object space. From this 3D point, the left and right image points are projected. The true height is calculated by iterative least squares estimation based on the grey level differences between the left and right patches centred on the projected left and right points. We tested the 3D LSM on the WorldView images over 'Terrassa Sud' provided by ISPRS WG I/4. We also compared the performance of the 3D LSM with correlation matching based on 2D image space and correlation matching based on 3D object space. The accuracy of the DEM from each method was analysed against the ground truth. Test results showed that 3D LSM offers more accurate DEMs than the conventional matching algorithms. Results also showed that 3D LSM is sensitive to the accuracy of the initial height value used to start the estimation. We therefore combined the 3D correlation matching (3D COM) and 3D LSM for accurate and robust DEM generation from HRSIs. The major contribution of this paper is that we proposed and validated that LSM can be applied in object space and that the combination of 3D correlation and 3D LSM can be a good solution for automated DEM generation from HRSIs.
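
    A much-simplified, hedged sketch of the height refinement described above is given below: for one object-space location, the height is adjusted by a one-dimensional Gauss-Newton iteration that minimizes the sum of squared grey-level differences between the left and right patches centred on the projected image points. The projection functions and patch sampler are hypothetical placeholders that a real implementation would supply from the satellite sensor model.

    import numpy as np

    def refine_height(h0, project_left, project_right, sample_patch, n_iter=10, eps=0.05):
        """1D least-squares refinement of the height at one object-space location.

        project_left / project_right : callables mapping a height to image (row, col)
                                       (placeholders for the sensor model; assumptions)
        sample_patch                 : callable (image_id, row, col) -> flattened patch
        """
        def residual(height):
            left = sample_patch("left", *project_left(height))
            right = sample_patch("right", *project_right(height))
            return (left - right).astype(float)           # grey-level differences

        h = float(h0)
        for _ in range(n_iter):
            res = residual(h)
            jac = (residual(h + eps) - res) / eps         # numerical derivative w.r.t. height
            denom = float(jac @ jac)
            if denom < 1e-12:
                break
            dh = -float(jac @ res) / denom                # Gauss-Newton step
            h += dh
            if abs(dh) < 1e-3:
                break
        return h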

  14. Potential Cost Savings for Use of 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization

    Science.gov (United States)

    2014-04-30

    Eleventh Annual Acquisition Research Symposium, Thursday Sessions, Volume II: Potential Cost Savings for Use of 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization (report dates covered: 2014). Panel chair: RADM David Lewis, USN, Program Executive Officer, SHIPS.

  15. The digital bee brain: integrating and managing neurons in a common 3D reference system

    Directory of Open Access Journals (Sweden)

    Jürgen Rybak

    2010-07-01

    The honeybee standard brain (HSB) serves as an interactive tool for relating the morphologies of bee brain neurons and provides a reference system for functional and bibliographical properties (http://www.neurobiologie.fu-berlin.de/beebrain/). The ultimate goal is to document not only the morphological network properties of neurons collected from separate brains, but also to establish a graphical user interface for a neuron-related database. Here, we review the current methods and protocols used to incorporate neuronal reconstructions into the HSB. Our registration protocol consists of two separate steps applied to imaging data from two-channel confocal microscopy scans: (1) the reconstruction of the neuron, facilitated by an automatic extraction of the neuron's skeleton based on threshold segmentation, and (2) the semi-automatic 3D segmentation of the neuropils and their registration with the HSB. The integration of neurons into the HSB is performed by applying the transformation computed in step (2) to the reconstructed neurons of step (1). The most critical issue of this protocol in terms of user interaction time - the segmentation process - is drastically improved by the use of a model-based segmentation process. Furthermore, the underlying statistical shape models (SSM) allow the visualization and analysis of characteristic variations in large sets of bee brain data. The anatomy of neural networks composed of multiple neurons registered into the HSB is visualized by depicting the 3D reconstructions together with semantic information, with the objective of integrating data from multiple sources (electrophysiology, imaging, immunocytochemistry, molecular biology). Ultimately, this will allow the user to specify cell types and retrieve their morphologies along with physiological characterizations.

  16. Triangular SPECT system for 3-D total organ volume imaging: Design concept and preliminary imaging results

    International Nuclear Information System (INIS)

    Lim, C.B.; Anderson, J.; Covic, J.

    1985-01-01

    SPECT systems based on 2-D detectors for projection data collection and filtered back-projection image reconstruction have the potential for true 3-D imaging, providing contiguous slice images in any orientation. Anger camera-based SPECT systems have the natural advantage of supporting planar imaging clinical procedures. However, current systems suffer from two drawbacks: poor utilization of emitted photons, and inadequate system design for SPECT. A SPECT system consisting of three rectangular cameras with radial translation would offer a variable cylindrical FOV of 25 cm to 40 cm diameter, allowing close detector access to the object. This system would provide optimized imaging for both brain and body organs in terms of sensitivity and resolution. For brain imaging, a tight detector triangle with fan-beam collimation, matching the detector UFOV to the head, allows full 2π utilization of emitted photons, resulting in a >4 times sensitivity increase over a single-detector system. Minification of intrinsic detector resolution in fan-beam collimation further improves system resolution. For body organ imaging, the three detectors with parallel-hole collimators, rotating in a non-circular orbit, provide both improved resolution and a three-fold sensitivity increase. Practical challenges lie in ensuring perfect image overlap from the three detectors without resolution degradation and artifact generation in order to benefit from the above improvements. An experimental system has been developed to test the above imaging concept, and we have successfully demonstrated the superior image quality of the overlapped images. The design concept is presented with preliminary imaging results.

  17. Intensity-based segmentation and visualization of cells in 3D microscopic images using the GPU

    Science.gov (United States)

    Kang, Mi-Sun; Lee, Jeong-Eom; Jeon, Woong-ki; Choi, Heung-Kook; Kim, Myoung-Hee

    2013-02-01

    3D microscopy images contain an astronomical amount of data, rendering 3D microscopy image processing time-consuming and laborious on a central processing unit (CPU). To solve these problems, many people crop a region of interest (ROI) of the input image to a small size. Although this reduces cost and time, there are drawbacks at the image processing level, e.g., the selected ROI strongly depends on the user and there is a loss of original image information. To mitigate these problems, we developed a 3D microscopy image processing tool on a graphics processing unit (GPU). Our tool provides efficient and varied automatic thresholding methods to achieve intensity-based segmentation of 3D microscopy images, and users can select the algorithm to be applied. Further, the image processing tool provides visualization of the segmented volume data and can set the scale, translation, etc. using a keyboard and mouse. However, the rapidly visualized 3D objects still need to be analyzed to obtain information for biologists. To analyze 3D microscopic images, we need quantitative data from the images. Therefore, we label the segmented 3D objects within all 3D microscopic images and obtain quantitative information on each labeled object, which can be used as classification features. A user can select the object to be analyzed; our tool allows the selected object to be displayed in a new window so that more details of the object can be observed. Finally, we validate the effectiveness of our tool by comparing CPU and GPU processing times under matched specification and configuration.
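
    As a hedged CPU-side sketch of the intensity-based segmentation and labeling steps (the paper's tool performs them on the GPU), the fragment below applies a global Otsu threshold to a synthetic 3D stack and then labels and measures the connected components with SciPy; the synthetic volume is an illustrative assumption.

    import numpy as np
    from scipy import ndimage
    from skimage.filters import threshold_otsu

    # Synthetic stack standing in for a 3D microscopy volume (illustrative only)
    rng = np.random.default_rng(0)
    volume = rng.normal(100.0, 10.0, (64, 256, 256)).astype(np.float32)
    volume[20:40, 100:140, 100:140] += 80.0            # one bright "object"

    # Intensity-based segmentation: global Otsu threshold, then 3D connected components
    mask = volume > threshold_otsu(volume)
    labels, n_objects = ndimage.label(mask)

    # Quantitative information per labeled object: voxel count and centroid
    ids = range(1, n_objects + 1)
    sizes = ndimage.sum(mask, labels, index=ids)
    centroids = ndimage.center_of_mass(mask, labels, index=ids)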

  18. Imaging multipole gravity anomaly sources by 3D probability tomography

    International Nuclear Information System (INIS)

    Alaia, Raffaele; Patella, Domenico; Mauriello, Paolo

    2009-01-01

    We present a generalized theory of probability tomography applied to the gravity method, assuming that any Bouguer anomaly data set can be caused by a discrete number of monopoles, dipoles, quadrupoles and octopoles. These elementary sources are used to characterize, in as much detail as possible and without any a priori assumption, the shape and position of the most probable minimum structure of the gravity sources compatible with the observed data set, by picking out the location of their centres and the peculiar points of their boundaries related to faces, edges and vertices. A few synthetic examples using simple geometries are discussed in order to demonstrate the notably enhanced resolution power of the new approach, compared with a previous formulation that used only monopoles and dipoles. A field example related to a gravity survey carried out in the volcanic area of Mount Etna (Sicily, Italy) is presented, aimed at imaging the geometry of the minimum gravity structure down to a depth of 8 km b.s.l.

  19. Algorithm of pulmonary emphysema extraction using low dose thoracic 3D CT images

    Science.gov (United States)

    Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Omatsu, H.; Tominaga, K.; Eguchi, K.; Moriyama, N.

    2006-03-01

    Recently, due to aging populations and smoking, the number of emphysema patients is increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. By applying the algorithm to 100 thoracic 3-D CT images and then to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
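
    A hedged numpy sketch of the low-attenuation-area measurement commonly used for this purpose is given below: lung voxels below about -950 HU are counted and reported as LAA%. The threshold, the synthetic volume and the all-ones lung mask are illustrative assumptions, not necessarily the authors' exact criteria.

    import numpy as np

    def laa_percentage(ct_hu, lung_mask, threshold_hu=-950.0):
        """Percentage of lung voxels below a low-attenuation threshold (LAA%)."""
        lung_voxels = ct_hu[lung_mask]
        return 100.0 * np.count_nonzero(lung_voxels < threshold_hu) / lung_voxels.size

    # Illustrative synthetic volume: parenchyma around -850 HU with an emphysematous pocket
    rng = np.random.default_rng(1)
    ct = rng.normal(-850.0, 40.0, (120, 256, 256))
    ct[40:60, 80:120, 80:120] = rng.normal(-970.0, 15.0, (20, 40, 40))
    lung = np.ones(ct.shape, dtype=bool)               # stand-in for a real lung mask
    print(f"LAA% = {laa_percentage(ct, lung):.2f}")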

  20. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    Science.gov (United States)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    Recently, due to aging populations and smoking, the number of emphysema patients is increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and quantitatively evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. By applying the algorithm to thoracic 3-D CT images and then to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.

  1. Soft tissue segmentation and 3D display from computerized tomography and magnetic resonance imaging

    International Nuclear Information System (INIS)

    Fan, R.T.; Trivedi, S.S.; Fellingham, L.L.; Gamboa-Aldeco, A.; Hedgcock, M.W.

    1987-01-01

    Volume calculation and 3D display of human anatomy facilitate a physician's diagnosis, treatment, and evaluation. Accurate segmentation of soft tissue structures is a prerequisite for such volume calculations and 3D displays, but segmentation by hand-outlining structures is often tedious and time-consuming. In this paper, methods based on analysis of statistics of image gray level are applied to segmentation of soft tissue in medical images, with the goal of making segmentation automatic or semi-automatic. The resulting segmented images, volume calculations, and 3D displays are analyzed and compared with results based on physician-drawn outlines as well as actual volume measurements

  2. A simple device for the stereoscopic display of 3D CT images

    International Nuclear Information System (INIS)

    Haveri, M.; Suramo, I.; Laehde, S.; Karhula, V.; Junila, J.

    1997-01-01

    We describe a simple device for creating true 3D views of image pairs obtained from 3D CT reconstruction. The device presents the images at a slightly different angle of view to the left and right eyes. This true 3D viewing technique was applied experimentally in the evaluation of complex acetabular fractures. Experiments were also made to determine the optimal angle between the images for each eye. The angle varied between 1° and 7° for different observers and also depended on the display field of view used. (orig.)

  3. Embedded, real-time UAV control for improved, image-based 3D scene reconstruction

    Science.gov (United States)

    Jean Liénard; Andre Vogs; Demetrios Gatziolis; Nikolay Strigul

    2016-01-01

    Unmanned Aerial Vehicles (UAVs) are already broadly employed for 3D modeling of large objects such as trees and monuments via photogrammetry. The usual workflow includes two distinct steps: image acquisition with the UAV and computationally demanding post-flight image processing. Insufficient feature overlap across images is a common shortcoming in post-flight image...

  4. Status and perspectives of pixel sensors based on 3D vertical integration

    CERN Document Server

    Re, V

    2014-01-01

    This paper reviews the most recent developments of 3D integration in the field of silicon pixel sensors and readout integrated circuits. This technology may address the needs of future high energy physics and photon science experiments by increasing the electronic functional density in small pixel readout cells and by stacking various device layers based on different technologies, each optimized for a different function. Current efforts are aimed at improving the performance of both hybrid pixel detectors and of CMOS sensors. The status of these activities is discussed here, taking into account experimental results on 3D devices developed in the frame of the 3D-IC consortium. The paper also provides an overview of the ideas that are being currently devised for novel 3D vertically integrated pixel sensors.

  5. Mutual information as a measure of image quality for 3D dynamic lung imaging with EIT.

    Science.gov (United States)

    Crabb, M G; Davidson, J L; Little, R; Wright, P; Morgan, A R; Miller, C A; Naish, J H; Parker, G J M; Kikinis, R; McCann, H; Lionheart, W R B

    2014-05-01

    We report on a pilot study of dynamic lung electrical impedance tomography (EIT) at the University of Manchester. Low-noise EIT data at 100 frames per second were obtained from healthy male subjects during controlled breathing, followed by magnetic resonance imaging (MRI) subsequently used for spatial validation of the EIT reconstruction. The torso surface in the MR image and electrode positions obtained using MRI fiducial markers informed the construction of a 3D finite element model extruded along the caudal-distal axis of the subject. Small changes in the boundary that occur during respiration were accounted for by incorporating the sensitivity with respect to boundary shape into a robust temporal difference reconstruction algorithm. EIT and MRI images were co-registered using the open source medical imaging software, 3D Slicer. A quantitative comparison of quality of different EIT reconstructions was achieved through calculation of the mutual information with a lung-segmented MR image. EIT reconstructions using a linear shape correction algorithm reduced boundary image artefacts, yielding better contrast of the lungs, and had 10% greater mutual information compared with a standard linear EIT reconstruction.
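
    A hedged sketch of the quality measure used above is given below: mutual information between two co-registered images, computed from their joint histogram. The bin count and the synthetic slices are illustrative assumptions.

    import numpy as np

    def mutual_information(image_a, image_b, bins=32):
        """Mutual information (in nats) between two co-registered images."""
        joint, _, _ = np.histogram2d(image_a.ravel(), image_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # Illustrative co-registered slices: a lung-segmented mask and a noisy reconstruction
    rng = np.random.default_rng(2)
    mr_lungs = (rng.random((64, 64)) > 0.5).astype(float)
    eit_slice = mr_lungs + 0.3 * rng.standard_normal((64, 64))
    print(mutual_information(eit_slice, mr_lungs))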

  6. MR Imaging of the Internal Auditory Canal and Inner Ear at 3T: Comparison between 3D Driven Equilibrium and 3D Balanced Fast Field Echo Sequences

    Energy Technology Data Exchange (ETDEWEB)

    Byun, Jun Soo; Kim, Hyung Jin; Yim, Yoo Jeong; Kim, Sung Tae; Jeon, Pyoung; Kim, Keon Ha [Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of); Kim, Sam Soo; Jeon, Yong Hwan; Lee, Ji Won [Kangwon National University College of Medicine, Chuncheon (Korea, Republic of)

    2008-06-15

    To compare the use of 3D driven equilibrium (DRIVE) imaging with 3D balanced fast field echo (bFFE) imaging in the assessment of the anatomic structures of the internal auditory canal (IAC) and inner ear at 3 Tesla (T). Thirty ears of 15 subjects (7 men and 8 women; age range, 22-71 years; average age, 50 years) without evidence of ear problems were examined on a whole-body 3T MR scanner with both 3D DRIVE and 3D bFFE sequences by using an 8-channel sensitivity encoding (SENSE) head coil. Two neuroradiologists reviewed both MR images with particular attention to the visibility of the anatomic structures, including four branches of the cranial nerves within the IAC, anatomic structures of the cochlea, vestibule, and three semicircular canals. Although both techniques provided images of relatively good quality, the 3D DRIVE sequence was somewhat superior to the 3D bFFE sequence. The discrepancies were more prominent for the basal turn of the cochlea, vestibule, and all semicircular canals, and were thought to be attributed to the presence of greater magnetic susceptibility artifacts inherent to gradient-echo techniques such as bFFE. Because of higher image quality and fewer susceptibility artifacts, we highly recommend the employment of 3D DRIVE imaging as the MR imaging choice for the IAC and inner ear.

  7. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    CERN Document Server

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...
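
    As a hedged, minimal illustration of the DIBR idea surveyed in the book (not its reference implementation), the sketch below forward-warps a reference view to a virtual viewpoint by shifting each pixel horizontally by a disparity proportional to inverse depth; occlusion ordering and hole filling, which a real renderer must handle, are omitted, and the camera constant is an illustrative assumption.

    import numpy as np

    def dibr_warp(color, depth, focal_baseline=40.0):
        """Forward-warp a reference view to a virtual view using per-pixel depth.

        color          : (H, W, 3) reference image
        depth          : (H, W) positive depth map (arbitrary units)
        focal_baseline : focal length times camera baseline (assumed constant)
        """
        h, w, _ = color.shape
        disparity = np.round(focal_baseline / depth).astype(int)   # disparity ~ 1 / depth
        virtual = np.zeros_like(color)
        cols = np.arange(w)
        for row in range(h):
            new_cols = cols - disparity[row]                       # horizontal pixel shift
            ok = (new_cols >= 0) & (new_cols < w)
            virtual[row, new_cols[ok]] = color[row, cols[ok]]      # unfilled pixels stay black
        return virtual

    rng = np.random.default_rng(3)
    img = rng.integers(0, 255, (120, 160, 3), dtype=np.uint8)
    z = np.tile(np.linspace(1.0, 4.0, 160), (120, 1))              # depth increases to the right
    novel_view = dibr_warp(img, z)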

  8. A framework for human spine imaging using a freehand 3D ultrasound system

    NARCIS (Netherlands)

    Purnama, Ketut E.; Wilkinson, Michael H.F.; Veldhuizen, Albert G.; van Ooijen, Peter M.A.; Lubbers, Jaap; Burgerhof, Johannes G.M.; Sardjono, Tri A.; Verkerke, Gijsbertus Jacob

    2010-01-01

    The use of 3D ultrasound imaging to follow the progression of scoliosis, i.e., a 3D deformation of the spine, is described. Unlike other current examination modalities, in particular those based on X-ray, its non-detrimental effect enables it to be used frequently to follow the progression of scoliosis.

  9. Contributions in compression of 3D medical images and 2D images

    International Nuclear Information System (INIS)

    Gaudeau, Y.

    2006-12-01

    The huge amounts of volumetric data generated by current medical imaging techniques in the context of an increasing demand for long term archiving solutions, as well as the rapid development of distant radiology, make the use of compression inevitable. Indeed, while the medical community has so far favoured lossless compression, most applications suffer from compression ratios which are too low with this kind of compression. In this context, compression with acceptable losses could be the most appropriate answer. So, we propose a new lossy coding scheme based on the 3D (3 dimensional) Wavelet Transform and 3D Dead Zone Lattice Vector Quantization (DZLVQ) for medical images. Our algorithm has been evaluated on several computerized tomography (CT) and magnetic resonance image volumes. The main contribution of this work is the design of a multidimensional dead zone which enables correlations between neighbouring elementary volumes to be taken into account. At high compression ratios, we show that it can outperform the best existing methods both visually and numerically. These promising results are confirmed on head CT by two medical practitioners. The second contribution of this document assesses the effect of lossy image compression on the CAD (Computer-Aided Decision) detection performance for solid lung nodules. This work on 120 significant lung images shows that detection did not suffer up to 48:1 compression and was still robust at 96:1. The last contribution consists of reducing the complexity of our compression scheme. The first bit allocation, dedicated to 2D DZLVQ, uses an exponential model of the rate-distortion (R-D) functions. The second allocation, for 2D and 3D medical images, is based on a block statistical model to estimate the R-D curves. These R-D models are based on the joint distribution of wavelet vectors using a multidimensional mixture of generalized Gaussian (MMGG) densities. (author)
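
    The coding scheme above rests on a 3D wavelet decomposition of the image volume. The sketch below illustrates that step with PyWavelets and substitutes a simple dead-zone threshold for the paper's lattice vector quantizer (DZLVQ); the wavelet, level and threshold values are illustrative assumptions rather than the author's settings.

        import numpy as np
        import pywt  # PyWavelets

        def compress_volume(volume, wavelet="bior4.4", level=3, dead_zone=10.0):
            """3D wavelet decomposition followed by a simple dead-zone threshold
            (a stand-in for the DZLVQ quantizer described in the thesis)."""
            coeffs = pywt.wavedecn(volume, wavelet=wavelet, level=level)
            arr, slices = pywt.coeffs_to_array(coeffs)
            arr[np.abs(arr) < dead_zone] = 0.0   # coefficients inside the dead zone are dropped
            return arr, slices

        def decompress_volume(arr, slices, wavelet="bior4.4"):
            coeffs = pywt.array_to_coeffs(arr, slices, output_format="wavedecn")
            return pywt.waverecn(coeffs, wavelet=wavelet)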

  10. Spatiotemporal Segmentation and Modeling of the Mitral Valve in Real-Time 3D Echocardiographic Images.

    Science.gov (United States)

    Pouch, Alison M; Aly, Ahmed H; Lai, Eric K; Yushkevich, Natalie; Stoffers, Rutger H; Gorman, Joseph H; Cheung, Albert T; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2017-09-01

    Transesophageal echocardiography is the primary imaging modality for preoperative assessment of mitral valves with ischemic mitral regurgitation (IMR). While there are well known echocardiographic insights into the 3D morphology of mitral valves with IMR, such as annular dilation and leaflet tethering, less is understood about how quantification of valve dynamics can inform surgical treatment of IMR or predict short-term recurrence of the disease. As a step towards filling this knowledge gap, we present a novel framework for 4D segmentation and geometric modeling of the mitral valve in real-time 3D echocardiography (rt-3DE). The framework integrates multi-atlas label fusion and template-based medial modeling to generate quantitatively descriptive models of valve dynamics. The novelty of this work is that temporal consistency in the rt-3DE segmentations is enforced during both the segmentation and modeling stages with the use of groupwise label fusion and Kalman filtering. The algorithm is evaluated on rt-3DE data series from 10 patients: five with normal mitral valve morphology and five with severe IMR. In these 10 data series that total 207 individual 3DE images, each 3DE segmentation is validated against manual tracing and temporal consistency between segmentations is demonstrated. The ultimate goal is to generate accurate and consistent representations of valve dynamics that can both visually and quantitatively provide insight into normal and pathological valve function.
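
    The temporal consistency enforced with Kalman filtering can be illustrated by a minimal constant-velocity Kalman filter applied to a single model coordinate tracked over the cardiac cycle. This is a generic sketch with assumed noise parameters, not the segmentation pipeline described in the paper.

        import numpy as np

        def kalman_smooth_1d(measurements, q=1e-3, r=1e-2):
            """Constant-velocity Kalman filter applied to one model coordinate over time."""
            F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
            H = np.array([[1.0, 0.0]])               # only the position is observed
            Q = q * np.eye(2)                        # process noise
            R = np.array([[r]])                      # measurement noise
            x = np.array([measurements[0], 0.0])
            P = np.eye(2)
            out = []
            for z in measurements:
                x = F @ x                            # predict
                P = F @ P @ F.T + Q
                K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
                x = x + K @ (np.atleast_1d(z) - H @ x)         # update
                P = (np.eye(2) - K @ H) @ P
                out.append(x[0])
            return np.array(out)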

  11. The aperture synthesis imaging capability of the EISCAT_3D radars

    Science.gov (United States)

    La Hoz, Cesar; Belyey, Vasyl

    2010-05-01

    The built-in Aperture Synthesis Imaging Radar (ASIR) capabilities of the EISCAT_3D system, complemented with multiple beams and rapid beam scanning, are what will make the new radar truly three dimensional and justify its name. With the EISCAT_3D radars it will be possible to make investigations in 3-dimensions of several important phenomena such as Natural Enhanced Ion Acoustic Lines (NEIALs), Polar Mesospheric Summer and Winter Echoes (PMSE and PMWE), meteors, space debris, atmospheric waves and turbulence in the mesosphere, upper troposphere and possibly the lower stratosphere. Of particular interest and novelty is the measurement of the structure in electron density created by aurora that produce incoherent scatter. With scale sizes of the order of tens of meters, the imaging of these structures will be conditioned only by the signal-to-noise ratio, which is expected to be high during some of these events, since the electron density can be significantly enhanced. The electron density inhomogeneities and plasma structures excited by artificial ionospheric heating could conceivably be resolved by the radars provided that their variation during the integration time is not great.

  12. Creation of Cardiac Tissue Exhibiting Mechanical Integration of Spheroids Using 3D Bioprinting.

    Science.gov (United States)

    Ong, Chin Siang; Fukunishi, Takuma; Nashed, Andrew; Blazeski, Adriana; Zhang, Huaitao; Hardy, Samantha; DiSilvestre, Deborah; Vricella, Luca; Conte, John; Tung, Leslie; Tomaselli, Gordon; Hibino, Narutoshi

    2017-07-02

    This protocol describes 3D bioprinting of cardiac tissue without the use of biomaterials, using only cells. Cardiomyocytes, endothelial cells and fibroblasts are first isolated, counted and mixed at desired cell ratios. They are co-cultured in individual wells in ultra-low attachment 96-well plates. Within 3 days, beating spheroids form. These spheroids are then picked up by a nozzle using vacuum suction and assembled on a needle array using a 3D bioprinter. The spheroids are then allowed to fuse on the needle array. Three days after 3D bioprinting, the spheroids are removed as an intact patch, which is already spontaneously beating. 3D bioprinted cardiac patches exhibit mechanical integration of component spheroids and are highly promising in cardiac tissue regeneration and as 3D models of heart disease.

  13. Depiction of the cranial nerves around the cavernous sinus by 3D reversed FISP with diffusion weighted imaging (3D PSIF-DWI)

    International Nuclear Information System (INIS)

    Ishida, Go; Oishi, Makoto; Jinguji, Shinya; Yoneoka, Yuichiro; Fujii, Yukihiko; Sato, Mitsuya

    2011-01-01

    To evaluate the anatomy of the cranial nerves running in and around the cavernous sinus, we employed three-dimensional reversed fast imaging with steady-state precession (FISP) with diffusion weighted imaging (3D PSIF-DWI) on a 3-T magnetic resonance (MR) system. After determining the proper parameters to obtain sufficient resolution of 3D PSIF-DWI, we collected imaging data of 20-side cavernous regions in 10 normal subjects. 3D PSIF-DWI provided high contrast between the cranial nerves and other soft tissues, fluid, and blood in all subjects. We also created volume-rendered images of 3D PSIF-DWI and anatomically evaluated the reliability of visualizing optic, oculomotor, trochlear, trigeminal, and abducens nerves on 3D PSIF-DWI. All 20 sets of cranial nerves were visualized and 12 trochlear nerves and 6 abducens nerves were partially identified. We also presented preliminary clinical experiences in two cases with pituitary adenomas. The anatomical relationship between the tumor and cranial nerves running in and around the cavernous sinus could be three-dimensionally comprehended by 3D PSIF-DWI and the volume-rendered images. In conclusion, 3D PSIF-DWI has great potential to provide high resolution 'cranial nerve imaging', which visualizes the whole length of the cranial nerves including the parts in the blood flow as in the cavernous sinus region. (author)

  14. [Depiction of the cranial nerves around the cavernous sinus by 3D reversed FISP with diffusion weighted imaging (3D PSIF-DWI)].

    Science.gov (United States)

    Ishida, Go; Oishi, Makoto; Jinguji, Shinya; Yoneoka, Yuichiro; Sato, Mitsuya; Fujii, Yukihiko

    2011-10-01

    To evaluate the anatomy of cranial nerves running in and around the cavernous sinus, we employed three-dimensional reversed fast imaging with steady-state precession (FISP) with diffusion weighted imaging (3D PSIF-DWI) on 3-T magnetic resonance (MR) system. After determining the proper parameters to obtain sufficient resolution of 3D PSIF-DWI, we collected imaging data of 20-side cavernous regions in 10 normal subjects. 3D PSIF-DWI provided high contrast between the cranial nerves and other soft tissues, fluid, and blood in all subjects. We also created volume-rendered images of 3D PSIF-DWI and anatomically evaluated the reliability of visualizing optic, oculomotor, trochlear, trigeminal, and abducens nerves on 3D PSIF-DWI. All 20 sets of cranial nerves were visualized and 12 trochlear nerves and 6 abducens nerves were partially identified. We also presented preliminary clinical experiences in two cases with pituitary adenomas. The anatomical relationship between the tumor and cranial nerves running in and around the cavernous sinus could be three-dimensionally comprehended by 3D PSIF-DWI and the volume-rendered images. In conclusion, 3D PSIF-DWI has great potential to provide high resolution "cranial nerve imaging", which visualizes the whole length of the cranial nerves including the parts in the blood flow as in the cavernous sinus region.

  15. 3D printing of intracranial artery stenosis based on the source images of magnetic resonance angiograph.

    Science.gov (United States)

    Xu, Wei-Hai; Liu, Jia; Li, Ming-Li; Sun, Zhao-Yong; Chen, Jie; Wu, Jian-Huang

    2014-08-01

    Three dimensional (3D) printing techniques for brain diseases have not been widely studied. We attempted to 'print' the segments of intracranial arteries based on magnetic resonance imaging. Three dimensional magnetic resonance angiography (MRA) was performed on two patients with middle cerebral artery (MCA) stenosis. Using scale-adaptive vascular modeling, 3D vascular models were constructed from the MRA source images. The magnified (ten times) regions of interest (ROI) of the stenotic segments were selected and fabricated by a 3D printer with a resolution of 30 µm. A survey of 8 clinicians was performed to evaluate the accuracy of the 3D printing results as compared with the MRA findings (4 grades, grade 1: consistent with MRA and provides additional visual information; grade 2: consistent with MRA; grade 3: not consistent with MRA; grade 4: not consistent with MRA and may provide misleading information). If a 3D-printed vessel segment was ideally matched to the MRA findings (grade 1 or 2), the 3D printing was defined as successful. Seven responders marked "grade 1" for the 3D printing results, while one marked "grade 4". Therefore, 87.5% of the clinicians considered the 3D printings successful. Our pilot study confirms the feasibility of using the 3D printing technique in the research field of intracranial artery diseases. Further investigations are warranted to optimize this technique and translate it into clinical practice.
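
    A generic version of the segmentation-to-print path described above (iso-surface extraction from the MRA sub-volume followed by STL export for a 3D printer) might look like the sketch below, using scikit-image and numpy-stl. The threshold, magnification factor and file name are assumptions; the scale-adaptive vascular modeling used in the study is not reproduced.

        import numpy as np
        from skimage import measure
        from stl import mesh  # numpy-stl

        def volume_to_stl(volume, threshold, out_path="vessel_roi.stl", scale=10.0):
            """Extract an iso-surface from an MRA sub-volume and write a (scaled) STL mesh."""
            verts, faces, _, _ = measure.marching_cubes(volume, level=threshold)
            verts = verts * scale                    # magnify the region of interest
            m = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
            for i, tri in enumerate(faces):
                m.vectors[i] = verts[tri]            # three vertices per triangle
            m.save(out_path)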

  16. D3D augmented reality imaging system: proof of concept in mammography

    Directory of Open Access Journals (Sweden)

    Douglas DB

    2016-08-01

    Full Text Available David B Douglas,1 Emanuel F Petricoin,2 Lance Liotta,2 Eugene Wilson3 1Department of Radiology, Stanford University, Palo Alto, CA, 2Center for Applied Proteomics and Molecular Medicine, George Mason University, Manassas, VA, 3Department of Radiology, Fort Benning, Columbus, GA, USA Purpose: The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods: A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results: The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion: The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. Keywords: augmented reality, 3D medical imaging, radiology, depth perception

  17. 3D Convolutional Neural Networks for Crop Classification with Multi-Temporal Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Shunping Ji

    2018-01-01

    Full Text Available This study describes a novel three-dimensional (3D) convolutional neural network (CNN) based method that automatically classifies crops from spatio-temporal remote sensing images. First, a 3D kernel is designed according to the structure of multi-spectral multi-temporal remote sensing data. Secondly, the 3D CNN framework with fine-tuned parameters is designed for training 3D crop samples and learning spatio-temporal discriminative representations, with the full crop growth cycles being preserved. In addition, we introduce an active learning strategy to the CNN model to improve labelling accuracy up to a required threshold with the greatest efficiency. Finally, experiments are carried out to test the advantage of the 3D CNN, in comparison to the two-dimensional (2D) CNN and other conventional methods. Our experiments show that the 3D CNN is especially suitable for characterizing the dynamics of crop growth and outperformed the other mainstream methods.
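
    A toy 3D CNN of the kind described above, taking multi-spectral multi-temporal patches as five-dimensional tensors, is sketched below in PyTorch. The layer sizes, band count and class count are illustrative assumptions rather than the authors' architecture.

        import torch
        import torch.nn as nn

        class Crop3DCNN(nn.Module):
            """Toy 3D CNN: input patches shaped (batch, bands, time, height, width)."""
            def __init__(self, n_bands=4, n_classes=6):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(n_bands, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool3d(2),
                    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1),
                )
                self.classifier = nn.Linear(32, n_classes)

            def forward(self, x):
                return self.classifier(self.features(x).flatten(1))

        # Hypothetical usage: 8 patches, 4 bands, 10 acquisition dates, 16x16 pixels
        # logits = Crop3DCNN()(torch.randn(8, 4, 10, 16, 16))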

  18. Integration of Jeddah Historical BIM and 3D GIS for Documentation and Restoration of Historical Monument

    Directory of Open Access Journals (Sweden)

    A. Baik

    2015-08-01

    Full Text Available This work outlines a new approach for the integration of 3D Building Information Modelling and the 3D Geographic Information System (GIS) to provide semantically rich models, and to get the benefits from both systems to help document and analyse cultural heritage sites. Our proposed framework is based on the Jeddah Historical Building Information Modelling process (JHBIM). This JHBIM consists of a Hijazi Architectural Objects Library (HAOL) that supports higher level of details (LoD) while decreasing the time of modelling. The Hijazi Architectural Objects Library has been modelled based on the Islamic historical manuscripts and Hijazi architectural pattern books. Moreover, the HAOL is implemented using BIM software called Autodesk Revit. However, it is known that this BIM environment still has some limitations with the non-standard architectural objects. Hence, we propose to integrate the developed 3D JHBIM with 3D GIS for more advanced analysis. To do so, the JHBIM database is exported and semantically enriched with non-architectural information that is necessary for restoration and preservation of historical monuments. After that, this database is integrated with the 3D Model in the 3D GIS solution. At the end of this paper, we’ll illustrate our proposed framework by applying it to a Historical Building called Nasif Historical House in Jeddah. First of all, this building is scanned by the use of a Terrestrial Laser Scanner (TLS) and Close Range Photogrammetry. Then, the 3D JHBIM based on the HOAL is designed on Revit Platform. Finally, this model is integrated to a 3D GIS solution through Autodesk InfraWorks. The shown analysis presented in this research highlights the importance of such integration especially for operational decisions and sharing the historical knowledge about Jeddah Historical City. Furthermore, one of the historical buildings in Old Jeddah, Nasif Historical House, was chosen as a test case for the project.

  19. Integration of Jeddah Historical BIM and 3D GIS for Documentation and Restoration of Historical Monument

    Science.gov (United States)

    Baik, A.; Yaagoubi, R.; Boehm, J.

    2015-08-01

    This work outlines a new approach for the integration of 3D Building Information Modelling and the 3D Geographic Information System (GIS) to provide semantically rich models, and to get the benefits from both systems to help document and analyse cultural heritage sites. Our proposed framework is based on the Jeddah Historical Building Information Modelling process (JHBIM). This JHBIM consists of a Hijazi Architectural Objects Library (HAOL) that supports higher level of details (LoD) while decreasing the time of modelling. The Hijazi Architectural Objects Library has been modelled based on the Islamic historical manuscripts and Hijazi architectural pattern books. Moreover, the HAOL is implemented using BIM software called Autodesk Revit. However, it is known that this BIM environment still has some limitations with the non-standard architectural objects. Hence, we propose to integrate the developed 3D JHBIM with 3D GIS for more advanced analysis. To do so, the JHBIM database is exported and semantically enriched with non-architectural information that is necessary for restoration and preservation of historical monuments. After that, this database is integrated with the 3D Model in the 3D GIS solution. At the end of this paper, we'll illustrate our proposed framework by applying it to a Historical Building called Nasif Historical House in Jeddah. First of all, this building is scanned by the use of a Terrestrial Laser Scanner (TLS) and Close Range Photogrammetry. Then, the 3D JHBIM based on the HOAL is designed on Revit Platform. Finally, this model is integrated to a 3D GIS solution through Autodesk InfraWorks. The shown analysis presented in this research highlights the importance of such integration especially for operational decisions and sharing the historical knowledge about Jeddah Historical City. Furthermore, one of the historical buildings in Old Jeddah, Nasif Historical House, was chosen as a test case for the project.

  20. Hands-on guide for 3D image creation for geological purposes

    Science.gov (United States)

    Frehner, Marcel; Tisato, Nicola

    2013-04-01

    Geological structures in outcrops or hand specimens are inherently three dimensional (3D), and therefore better understandable if viewed in 3D. While 3D models can easily be created, manipulated, and looked at from all sides on the computer screen (e.g., using photogrammetry or laser scanning data), 3D visualizations for publications or conference posters are much more challenging as they have to live in a 2D-world (i.e., on a sheet of paper). Perspective 2D visualizations of 3D models do not fully transmit the "feeling and depth of the third dimension" to the audience; but this feeling is desirable for a better examination and understanding in 3D of the structure under consideration. One of the very few possibilities to generate real 3D images, which work on a 2D display, is by using so-called stereoscopic images. Stereoscopic images are two images of the same object recorded from two slightly offset viewpoints. Special glasses and techniques have to be used to make sure that one image is seen only by one eye, and the other image is seen by the other eye, which together lead to the "3D effect". Geoscientists are often familiar with such 3D images. For example, geomorphologists traditionally view stereographic orthophotos by employing a mirror-stereoscope. Nowadays, petroleum-geoscientists examine high-resolution 3D seismic data sets in special 3D visualization rooms. One of the methods for generating and viewing a stereoscopic image, which does not require a high-tech viewing device, is to create a so-called anaglyph. The principle is to overlay two images saturated in red and cyan, respectively. The two images are then viewed through red-cyan-stereoscopic glasses. This method is simple and cost-effective, but has some drawbacks in preserving colors accurately. A similar method is used in 3D movies, where polarized light or shuttering techniques are used to separate the left from the right image, which allows preserving the original colors. The advantage of red
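
    The anaglyph construction described above (red channel from the left view, green and blue channels from the right view) can be sketched in a few lines of Python; the file names are hypothetical and no colour correction is applied.

        import numpy as np
        from PIL import Image

        def make_anaglyph(left_path, right_path, out_path="anaglyph.png"):
            """Red channel from the left view, green and blue channels from the right view."""
            left = np.asarray(Image.open(left_path).convert("RGB"))
            right = np.asarray(Image.open(right_path).convert("RGB"))
            anaglyph = right.copy()
            anaglyph[..., 0] = left[..., 0]          # replace red with the left image's red
            Image.fromarray(anaglyph).save(out_path)

        # make_anaglyph("outcrop_left.jpg", "outcrop_right.jpg")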

  1. 3-D Image Encryption Based on Rubik's Cube and RC6 Algorithm

    Science.gov (United States)

    Helmy, Mai; El-Rabaie, El-Sayed M.; Eldokany, Ibrahim M.; El-Samie, Fathi E. Abd

    2017-12-01

    A novel encryption algorithm based on the 3-D Rubik's cube is proposed in this paper to achieve 3D encryption of a group of images. This proposed encryption algorithm begins with RC6 as a first step for encrypting multiple images, separately. After that, the obtained encrypted images are further encrypted with the 3-D Rubik's cube. The RC6 encrypted images are used as the faces of the Rubik's cube. From the concepts of image encryption, the RC6 algorithm adds a degree of diffusion, while the Rubik's cube algorithm adds a degree of permutation. The simulation results demonstrate that the proposed encryption algorithm is efficient, and it exhibits strong robustness and security. The encrypted images are further transmitted over a wireless Orthogonal Frequency Division Multiplexing (OFDM) system and decrypted at the receiver side. Evaluation of the quality of the decrypted images at the receiver side reveals good results.
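
    To illustrate the permutation stage only, the sketch below scrambles an image by key-dependent circular shifts of its rows and columns, in the spirit of Rubik's-cube moves. It is a simplified stand-in: the actual scheme operates on the six faces of a cube built from RC6-encrypted images, which is not reproduced here.

        import numpy as np

        def rubik_scramble(img, key_rows, key_cols):
            """Toy Rubik's-cube-style permutation: circularly shift each row and each
            column by key-dependent amounts (decryption reverses the order with
            negated shifts)."""
            out = img.copy()
            for r, k in enumerate(key_rows):
                out[r] = np.roll(out[r], k)
            for c, k in enumerate(key_cols):
                out[:, c] = np.roll(out[:, c], k)
            return out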

  2. 3D Fast Spin Echo T2-weighted Contrast for Imaging the Female Cervix

    Science.gov (United States)

    Vargas Sanchez, Andrea Fernanda

    Magnetic Resonance Imaging (MRI) with T2-weighted contrast is the preferred modality for treatment planning and monitoring of cervical cancer. Current clinical protocols image the volume of interest multiple times with two dimensional (2D) T2-weighted MRI techniques. It is of interest to replace these multiple 2D acquisitions with a single three dimensional (3D) MRI acquisition to save time. However, at present the image contrast of standard 3D MRI does not distinguish healthy cervical tissue from cancerous tissue. The purpose of this thesis is to better understand the underlying factors that govern the contrast of 3D MRI and exploit this understanding via sequence modifications to improve the contrast. Numerical simulations are developed to predict observed contrast alterations and to propose an improvement. Improvements of image contrast are shown in simulation and with healthy volunteers. Reported results are only preliminary but a promising start to definitively establish 3D MRI for cervical cancer applications.

  3. Developing Customized Dental Miniscrew Surgical Template from Thermoplastic Polymer Material Using Image Superimposition, CAD System, and 3D Printing

    OpenAIRE

    Wang, Yu-Tzu; Yu, Jian-Hong; Lo, Lun-Jou; Hsu, Pin-Hsin; Lin, Chun-Li

    2017-01-01

    This study integrates cone-beam computed tomography (CBCT)/laser scan image superposition, computer-aided design (CAD), and 3D printing (3DP) to develop a technology for producing customized dental (orthodontic) miniscrew surgical templates using polymer material. Maxillary bone solid models with the bone and teeth reconstructed using CBCT images and teeth and mucosa outer profile acquired using laser scanning were superimposed to allow miniscrew visual insertion planning and permit surgical ...

  4. An architecture for integrating planar and 3D cQED devices

    Energy Technology Data Exchange (ETDEWEB)

    Axline, C.; Reagor, M.; Heeres, R.; Reinhold, P.; Wang, C.; Shain, K.; Pfaff, W.; Chu, Y.; Frunzio, L.; Schoelkopf, R. J. [Department of Applied Physics, Yale University, New Haven, Connecticut 06511 (United States)

    2016-07-25

    Numerous loss mechanisms can limit coherence and scalability of planar and 3D-based circuit quantum electrodynamics (cQED) devices, particularly due to their packaging. The low loss and natural isolation of 3D enclosures make them good candidates for coherent scaling. We introduce a coaxial transmission line device architecture with coherence similar to traditional 3D cQED systems. Measurements demonstrate well-controlled external and on-chip couplings, a spectrum absent of cross-talk or spurious modes, and excellent resonator and qubit lifetimes. We integrate a resonator-qubit system in this architecture with a seamless 3D cavity, and separately pattern a qubit, readout resonator, Purcell filter, and high-Q stripline resonator on a single chip. Device coherence and its ease of integration make this a promising tool for complex experiments.

  5. 3D-Printed Disposable Wireless Sensors with Integrated Microelectronics for Large Area Environmental Monitoring

    KAUST Repository

    Farooqui, Muhammad Fahad

    2017-05-19

    Large area environmental monitoring can play a crucial role in dealing with crisis situations. However, it is challenging, as implementing a fixed sensor network infrastructure over a large remote area is economically unfeasible. This work proposes disposable, compact, dispersible 3D-printed wireless sensor nodes with integrated microelectronics which can be dispersed in the environment and work in conjunction with a few fixed nodes for large area monitoring applications. As a proof of concept, the wireless sensing of temperature, humidity, and H2S levels is shown, which is important for two critical environmental conditions, namely forest fires and industrial leaks. These inkjet-printed sensors and an antenna are realized on the walls of a 3D-printed cubic package which encloses the microelectronics developed on a 3D-printed circuit board. Hence, 3D printing and inkjet printing are uniquely combined in order to realize a low-cost, fully integrated wireless sensor node.

  6. Metadata and Tools for Integration and Preservation of Cultural Heritage 3D Information

    Directory of Open Access Journals (Sweden)

    Achille Felicetti

    2011-12-01

    Full Text Available In this paper we investigate many of the various storage, portability and interoperability issues arising among archaeologists and cultural heritage people when dealing with 3D technologies. On the one side, the available digital repositories often seem unable to guarantee adequate features for the management of 3D models and their metadata; on the other side, most of the available data formats for 3D encoding seem unsatisfactory for the portability nowadays required of 3D information across different systems. We propose a set of possible solutions to show how integration can be achieved through the use of well-known and widely accepted standards for data encoding and data storage. Using a set of 3D models acquired during various archaeological campaigns and a number of open source tools, we have implemented a straightforward encoding process to generate meaningful semantic data and metadata. We will also present the interoperability process carried out to integrate the encoded 3D models and the geographic features produced by the archaeologists. Finally we will report the preliminary (rather encouraging) development of a semantically enabled and persistent digital repository, where 3D models (but also any kind of digital data) and metadata can easily be stored, retrieved and shared with the content of other digital archives.

  7. Extracting 3D parametric curves from 2D images of helical objects.

    OpenAIRE

    Willcocks, Chris; Jackson, Philip T.G.; Nelson, Carl J.; Obara, Boguslaw

    2016-01-01

    Helical objects occur in medicine, biology, cosmetics, nanotechnology, and engineering. Extracting a 3D parametric curve from a 2D image of a helical object has many practical applications, in particular being able to extract metrics such as tortuosity, frequency, and pitch. We present a method that is able to straighten the image object and derive a robust 3D helical curve from peaks in the object boundary. The algorithm has a small number of stable parameters that require little tuning, and...

  8. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    Science.gov (United States)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.

  9. A 3D bioprinting system to produce human-scale tissue constructs with structural integrity.

    Science.gov (United States)

    Kang, Hyun-Wook; Lee, Sang Jin; Ko, In Kap; Kengla, Carlos; Yoo, James J; Atala, Anthony

    2016-03-01

    A challenge for tissue engineering is producing three-dimensional (3D), vascularized cellular constructs of clinically relevant size, shape and structural integrity. We present an integrated tissue-organ printer (ITOP) that can fabricate stable, human-scale tissue constructs of any shape. Mechanical stability is achieved by printing cell-laden hydrogels together with biodegradable polymers in integrated patterns and anchored on sacrificial hydrogels. The correct shape of the tissue construct is achieved by representing clinical imaging data as a computer model of the anatomical defect and translating the model into a program that controls the motions of the printer nozzles, which dispense cells to discrete locations. The incorporation of microchannels into the tissue constructs facilitates diffusion of nutrients to printed cells, thereby overcoming the diffusion limit of 100-200 μm for cell survival in engineered tissues. We demonstrate capabilities of the ITOP by fabricating mandible and calvarial bone, cartilage and skeletal muscle. Future development of the ITOP is being directed to the production of tissues for human applications and to the building of more complex tissues and solid organs.

  10. Wide area 2D/3D imaging development, analysis and applications

    CERN Document Server

    Langmann, Benjamin

    2014-01-01

    Imaging technology is an important research area and it is widely utilized in a growing number of disciplines ranging from gaming, robotics and automation to medicine. In the last decade 3D imaging became popular mainly driven by the introduction of novel 3D cameras and measuring devices. These cameras are usually limited to indoor scenes with relatively low distances. Benjamin Langmann introduces medium and long-range 2D/3D cameras to overcome these limitations. He reports measurement results for these devices and studies their characteristic behavior. In order to facilitate the application o

  11. The Multiscale Bowler-Hat Transform for Vessel Enhancement in 3D Biomedical Images

    OpenAIRE

    Sazak, Cigdem; Nelson, Carl J.; Obara, Boguslaw

    2018-01-01

    Enhancement and detection of 3D vessel-like structures has long been an open problem as most existing image processing methods fail in many aspects, including a lack of uniform enhancement between vessels of different radii and a lack of enhancement at the junctions. Here, we propose a method based on mathematical morphology to enhance 3D vessel-like structures in biomedical images. The proposed method, 3D bowler-hat transform, combines sphere and line structuring elements to enhance vessel-l...

  12. A novel modeling method for manufacturing hearing aid using 3D medical images

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyeong Gyun [Dept of Radiological Science, Far East University, Eumseong (Korea, Republic of)

    2016-06-15

    This study aimed to suggest a novel method of modeling a hearing aid ear shell based on Digital Imaging and Communication in Medicine (DICOM) in the hearing aid ear shell manufacturing method using a 3D printer. In the experiment, a 3D external auditory meatus was extracted by using the critical values in the DICOM volume images, and the modeling surface structures were compared in standard type STL (STereoLithography) files which could be recognized by a 3D printer. In this 3D modeling method, a conventional ear model was prepared, and the gaps between adjacent isograms produced by a 3D scanner were filled with 3D surface fragments to express the modeling structure. In this study, the same type of triangular surface structures were prepared by using the DICOM images. The result showed that the modeling surface structure based on the DICOM images provides the same environment that conventional 3D printers can recognize, eventually enabling the hearing aid ear shell shape to be printed out.

  13. A novel modeling method for manufacturing hearing aid using 3D medical images

    International Nuclear Information System (INIS)

    Kim, Hyeong Gyun

    2016-01-01

    This study aimed to suggest a novel method of modeling a hearing aid ear shell based on Digital Imaging and Communication in Medicine (DICOM) in the hearing aid ear shell manufacturing method using a 3D printer. In the experiment, a 3D external auditory meatus was extracted by using the critical values in the DICOM volume images, and the modeling surface structures were compared in standard type STL (STereoLithography) files which could be recognized by a 3D printer. In this 3D modeling method, a conventional ear model was prepared, and the gaps between adjacent isograms produced by a 3D scanner were filled with 3D surface fragments to express the modeling structure. In this study, the same type of triangular surface structures were prepared by using the DICOM images. The result showed that the modeling surface structure based on the DICOM images provides the same environment that conventional 3D printers can recognize, eventually enabling the hearing aid ear shell shape to be printed out.

  14. Signal alteration of the cochlear perilymph on 3 different sequences after intratympanic Gd-DTPA administration at 3 tesla. Comparison of 3D-FLAIR, 3D-T1-weighted imaging, and 3D-CISS

    International Nuclear Information System (INIS)

    Yamazaki, Masahiro; Naganawa, Shinji; Kawai, Hisashi; Nihashi, Takashi; Nakashima, Tsutomu

    2010-01-01

    Three-dimensional fluid-attenuated inversion recovery (3D-FLAIR) imaging after intratympanic gadolinium injection is useful for pathophysiologic and morphologic analysis of the inner ear. However, statistical analysis of differences in inner ear signal intensity among 3D-FLAIR and other sequences has not been reported. We evaluated the signal intensity of cochlear fluid on each of 3D-FLAIR, 3D-T1-weighted imaging (T1WI), and 3D-constructive interference in the steady state (CISS) to clarify the differences in contrast effect among these 3 sequences using intratympanic gadolinium injection. Twenty-one patients underwent 3D-FLAIR, 3D-T1WI, and 3D-CISS imaging at 3 tesla 24 hours after intratympanic injection of gadolinium. We determined regions of interest of the cochleae (C) and medulla oblongata (M) on each image, evaluated the signal intensity ratio between C and M (CM ratio), and determined the ratio of cochlear signal intensity of the injected side to that of the non-injected side (contrast value). The CM ratio of the injected side (3.00±1.31, range, 0.53 to 4.88, on 3D-FLAIR; 0.83±0.30, range, 0.36 to 1.58 on 3D-T1WI) was significantly higher than that of the non-injected side (0.52±0.14, range, 0.30 to 0.76 on 3D-FLAIR; 0.49±0.11, range, 0.30 to 0.71 on 3D-T1WI) on both 3D-FLAIR and 3D-T1WI (P<0.001), and the contrast value on 3D-FLAIR was significantly higher than that on 3D-T1WI (1.73±0.60, range, 0.98 to 3.09) (P<0.001). The 3D-FLAIR sequence is the most sensitive for observing alteration in inner ear fluid signal after intratympanic gadolinium injection. Our results warrant use of 3D-FLAIR as a sensitive imaging technique to clarify the pathological and morphological mechanisms of disorders of the inner ear. (author)

  15. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera

    Directory of Open Access Journals (Sweden)

    Yufu Qu

    2018-01-01

    Full Text Available In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle's (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.

  16. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera.

    Science.gov (United States)

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-14

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle's (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points by using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images into the queue, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in the calculation speed will become more noticeable.
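
    The compression of each image's feature points into three principal component points, as described above, can be illustrated with ordinary principal component analysis. The construction below (centroid plus the two scaled principal axes) is one plausible reading of that summary, not necessarily the authors' exact definition.

        import numpy as np

        def principal_component_points(keypoints):
            """One plausible 'three principal component points' summary of an image's
            2D feature points: the centroid plus the two principal axes scaled by
            their standard deviations."""
            pts = np.asarray(keypoints, dtype=float)
            centroid = pts.mean(axis=0)
            cov = np.cov((pts - centroid).T)
            eigvals, eigvecs = np.linalg.eigh(cov)           # ascending eigenvalues
            axes = eigvecs.T[::-1] * np.sqrt(eigvals[::-1])[:, None]
            return np.vstack([centroid, centroid + axes])    # shape (3, 2)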

  17. Performance evaluation of 3-D enhancement filters for detection of lung cancer from 3-D chest X-ray CT images

    International Nuclear Information System (INIS)

    Shimizu, Akinobu; Hagai, Makoto; Toriwaki, Jun-ichiro; Hasegawa, Jun-ichi.

    1995-01-01

    This paper evaluates the performance of several three dimensional enhancement filters used in procedures for detecting lung cancer shadows from three dimensional (3D) chest X-ray CT images. Two dimensional enhancement filters such as the Min-DD filter, Contrast filter and N-Quoit filter have been proposed for enhancing cancer shadows in conventional 2D X-ray images. In this paper, we extend each of these 2D filters to a 3D filter and evaluate its performance experimentally by using CT images with artificial and true lung cancer shadows. As a result, we find that these 3D filters are effective for determining the position of a lung cancer shadow in a 3D chest CT image, as compared with simple procedures such as a smoothing filter, and that the performance of these filters becomes lower in the hilar area due to the influence of the vessel shadows. (author)
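
    As a rough illustration of a 3D enhancement filter of the kind evaluated above, the sketch below computes a contrast-style response (mean intensity inside a small ball minus the mean in the surrounding shell) with SciPy. The Min-DD and N-Quoit filters themselves are not reproduced, and the radii are assumptions.

        import numpy as np
        from scipy import ndimage

        def contrast_filter_3d(volume, inner=2, outer=5):
            """Simple 3D contrast-style enhancement: mean intensity inside a small ball
            minus the mean in the surrounding shell. Bright, roughly spherical nodules
            give high responses. (Illustrative only; not the Min-DD or N-Quoit filters.)"""
            zz, yy, xx = np.mgrid[-outer:outer + 1, -outer:outer + 1, -outer:outer + 1]
            dist = np.sqrt(xx**2 + yy**2 + zz**2)
            ball = (dist <= inner).astype(float)
            shell = ((dist > inner) & (dist <= outer)).astype(float)
            inner_mean = ndimage.convolve(volume.astype(float), ball / ball.sum())
            shell_mean = ndimage.convolve(volume.astype(float), shell / shell.sum())
            return inner_mean - shell_mean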

  18. Integrating 3D Printing into an Early Childhood Teacher Preparation Course: Reflections on Practice

    Science.gov (United States)

    Sullivan, Pamela; McCartney, Holly

    2017-01-01

    This reflection on practice describes a case study integrating 3D printing into a creativity course for preservice teachers. The theoretical rationale is discussed, and the steps for integration are outlined. Student responses and reflections on the experience provide the basis for our analysis. Examples and resources are provided, as well as a…

  19. Artificial intelligence (AI)-based relational matching and multimodal medical image fusion: generalized 3D approaches

    Science.gov (United States)

    Vajdic, Stevan M.; Katz, Henry E.; Downing, Andrew R.; Brooks, Michael J.

    1994-09-01

    A 3D relational image matching/fusion algorithm is introduced. It is implemented in the domain of medical imaging and is based on Artificial Intelligence paradigms--in particular, knowledge base representation and tree search. The 2D reference and target images are selected from 3D sets and segmented into non-touching and non-overlapping regions, using iterative thresholding and/or knowledge about the anatomical shapes of human organs. Selected image region attributes are calculated. Region matches are obtained using a tree search, and the error is minimized by evaluating a `goodness' of matching function based on similarities of region attributes. Once the matched regions are found and the spline geometric transform is applied to regional centers of gravity, images are ready for fusion and visualization into a single 3D image of higher clarity.

  20. The Application of the Technology of 3D Satellite Cloud Imaging in Virtual Reality Simulation

    Directory of Open Access Journals (Sweden)

    Xiao-fang Xie

    2007-05-01

    Full Text Available Using satellite cloud images to simulate clouds is one of the new visual simulation technologies in Virtual Reality (VR). Taking the original data of satellite cloud images as the source, this paper specifically describes the technology of 3D satellite cloud imaging through the transforming of coordinates and projection, creating a DEM (Digital Elevation Model) of cloud imaging and 3D simulation. A Mercator projection was introduced to create a cloud image DEM, while solutions for geodetic problems were introduced to calculate distances, and the outer-trajectory science of rockets was introduced to obtain the elevation of clouds. For demonstration, we report on a computer program to simulate the 3D satellite cloud images.
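
    The Mercator projection step mentioned above can be sketched as the standard spherical forward projection; the earth radius and central meridian used here are assumptions of this illustration.

        import numpy as np

        EARTH_RADIUS_M = 6371000.0  # spherical approximation

        def mercator_forward(lat_deg, lon_deg, lon0_deg=0.0):
            """Spherical Mercator: longitude/latitude (degrees) -> planar x, y (metres)."""
            lat = np.radians(lat_deg)
            lon = np.radians(lon_deg - lon0_deg)
            x = EARTH_RADIUS_M * lon
            y = EARTH_RADIUS_M * np.log(np.tan(np.pi / 4 + lat / 2))
            return x, y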

  1. Coupling 2D/3D registration method and statistical model to perform 3D reconstruction from partial x-rays images data.

    Science.gov (United States)

    Cresson, T; Chav, R; Branchaud, D; Humbert, L; Godbout, B; Aubert, B; Skalli, W; De Guise, J A

    2009-01-01

    3D reconstruction of the spine from frontal and sagittal radiographs is extremely challenging. The overlying features of soft tissues and air cavities interfere with image processing. It is also difficult to obtain information that is accurate enough to reconstruct complete 3D models. To overcome these problems, the proposed method efficiently combines the partial information contained in two images from a patient with a statistical 3D spine model generated from a database of scoliotic patients. The algorithm operates through two simultaneous iterating processes. The first one generates a personalized vertebra model using a 2D/3D registration process with bone boundaries extracted from radiographs, while the other one infers the position and the shape of other vertebrae from the current estimation of the registration process using a statistical 3D model. Experimental evaluations have shown good performance of the proposed approach in terms of accuracy and robustness when compared to CT scans.

  2. Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality

    Science.gov (United States)

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images are usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research, we demonstrate the potential of using panoramic images acquired from multiple sites to create not only 720° panoramas, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, the panoramic images are processed into 720° panoramas, and these panoramas can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama. These parameters are the focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the model, and the 3D point cloud was used to determine the locations of building objects using a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing a guide map or a floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The

  3. Clinical significance of creative 3D-image fusion across multimodalities [PET + CT + MR] based on characteristic coregistration

    International Nuclear Information System (INIS)

    Peng, Matthew Jian-qiao; Ju Xiangyang; Khambay, Balvinder S.; Ayoub, Ashraf F.; Chen, Chin-Tu; Bai Bo

    2012-01-01

    Objective: To investigate a 2-dimensional (2D) registration approach based on characteristic localization to achieve 3-dimensional (3D) fusion of PET, CT and MR images one by one. Method: A cubic oriented scheme of “9-point and 3-plane” for co-registration design was verified to be geometrically practical. After acquiring DICOM data of PET/CT/MR (directed by the radiotracer 18F-FDG etc.), through 3D reconstruction and virtual dissection, internal anatomical feature points were sorted and combined with preselected external feature points for the matching process. Following the procedure of feature extraction and image mapping, “picking points to form planes” and “picking planes for segmentation” were executed. Eventually, image fusion was implemented on the real-time workstation Mimics based on the auto-fusion techniques called “information exchange” and “signal overlay”. Result: The 2D and 3D images fused across the modalities [CT + MR], [PET + MR], [PET + CT] and [PET + CT + MR] were tested on data from patients suffering from tumors. Complementary 2D/3D images simultaneously presenting metabolic activities and anatomic structures were created, with detection rates of 70%, 56%, 54% (or 98%) and 44%, respectively, with no statistically significant difference among them. Conclusion: Given that no fully integrated triple-modality [PET + CT + MR] hybrid detector is currently available, this sort of multimodality fusion is doubtless an essential complement to the existing function of single-modality imaging.
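
    Point-based co-registration of the kind used in the “9-point and 3-plane” scheme can be illustrated with the standard least-squares rigid alignment of matched landmarks (the Kabsch algorithm); this generic sketch is not the workstation implementation used in the study.

        import numpy as np

        def rigid_transform(src_pts, dst_pts):
            """Least-squares rotation R and translation t mapping src landmarks onto
            dst landmarks (Kabsch algorithm)."""
            src = np.asarray(src_pts, float)
            dst = np.asarray(dst_pts, float)
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ D @ U.T
            t = dst_c - R @ src_c
            return R, t   # apply as: aligned = src @ R.T + t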

  4. 3D shape recovery from image focus using gray level co-occurrence matrix

    Science.gov (United States)

    Mahmood, Fahad; Munir, Umair; Mehmood, Fahad; Iqbal, Javaid

    2018-04-01

    Recovering a precise and accurate 3-D shape of the target object utilizing a robust 3-D shape recovery algorithm is an ultimate objective of the computer vision community. The focus measure algorithm plays an important role in this architecture, converting the color values of each pixel of the acquired 2-D image dataset into corresponding focus values. After convolving the focus measure filter with the input 2-D image dataset, a 3-D shape recovery approach is applied to recover the depth map. In this document, we are concerned with proposing the Gray Level Co-occurrence Matrix along with its statistical features for computing the focus information of the image dataset. The Gray Level Co-occurrence Matrix quantifies the texture present in the image using statistical features and then applies the joint probability distribution function of the gray-level pairs of the input image. Finally, we quantify the focus value of the input image using a Gaussian Mixture Model. Due to its low computational complexity, sharp focus measure curve, robustness to random noise sources and accuracy, it is considered a superior alternative to most recently proposed 3-D shape recovery approaches. This algorithm is thoroughly investigated on real image sequences and a synthetic image dataset. The efficiency of the proposed scheme is also compared with state-of-the-art 3-D shape recovery approaches. Finally, by means of two global statistical measures, root mean square error and correlation, we claim that this approach, in spite of its simplicity, generates accurate results.
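
    A minimal GLCM-based focus measure in the spirit of the approach above can be written with scikit-image; the distances, angles and the contrast/homogeneity ratio are illustrative choices, and the Gaussian Mixture Model step of the paper is omitted.

        import numpy as np
        from skimage.feature import graycomatrix, graycoprops  # greycomatrix in older scikit-image

        def glcm_focus_measure(gray_u8):
            """Texture-based focus score for one 8-bit grayscale frame: sharper frames
            tend to show higher GLCM contrast and lower homogeneity."""
            glcm = graycomatrix(gray_u8, distances=[1], angles=[0, np.pi / 2],
                                levels=256, symmetric=True, normed=True)
            contrast = graycoprops(glcm, "contrast").mean()
            homogeneity = graycoprops(glcm, "homogeneity").mean()
            return contrast / (homogeneity + 1e-6)

        # Depth from focus (hypothetical): index of the best-focused frame per pixel stack
        # depth_index = int(np.argmax([glcm_focus_measure(f) for f in focus_stack]))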

  5. 2D-Driven 3D Object Detection in RGB-D Images

    KAUST Repository

    Lahoud, Jean

    2017-12-25

    In this paper, we present a technique that places 3D bounding boxes around objects in an RGB-D scene. Our approach makes best use of the 2D information to quickly reduce the search space in 3D, benefiting from state-of-the-art 2D object detection techniques. We then use the 3D information to orient, place, and score bounding boxes around objects. We independently estimate the orientation for every object, using previous techniques that utilize normal information. Object locations and sizes in 3D are learned using a multilayer perceptron (MLP). In the final step, we refine our detections based on object class relations within a scene. When compared to state-of-the-art detection methods that operate almost entirely in the sparse 3D domain, extensive experiments on the well-known SUN RGB-D dataset [29] show that our proposed method is much faster (4.1s per image) in detecting 3D objects in RGB-D images and performs better (3 mAP higher) than the state-of-the-art method that is 4.7 times slower and comparably to the method that is two orders of magnitude slower. This work hints at the idea that 2D-driven object detection in 3D should be further explored, especially in cases where the 3D input is sparse.

  6. From 2D PET to 3D PET. Issues of data representation and image reconstruction

    International Nuclear Information System (INIS)

    Gundlich, B.; Musmann, P.; Weber, S.; Nix, O.; Semmler, W.

    2006-01-01

    Positron emission tomography (PET), intrinsically a 3D imaging technique, was for a long time exclusively operated in 2D mode, using septa to shield the detectors from photons emitted obliquely to the detector planes. However, the use of septa results in a considerable loss of sensitivity. From the late 1980s, significant efforts have been made to develop a methodology for the acquisition and reconstruction of 3D PET data. This paper focuses on the differences between data acquisition in 2D and 3D mode, especially in terms of data set sizes and representation. Although the real time data acquisition aspect in 3D has been mostly solved in modern PET scanner systems, there still remain questions on how to represent and how to make best use of the information contained in the acquired data sets. Data representation methods, such as list-mode and matrix-based methods, possibly with additional compression, will be discussed. Moving from 2D to 3D PET has major implications on the way these data are reconstructed to images. Two fundamentally different approaches exist, the analytical one and the iterative one. Both, at different expenses, can be extended to directly handle 3D data sets. Either way the computational burden increases heavily compared to 2D reconstruction. One possibility to benefit from the increased sensitivity in 3D PET while sticking to high-performance 2D reconstruction algorithms is to rebin 3D into 2D data sets. The value of data rebinning will be explored. An ever increasing computing power and the concept of distributed or parallel computing have made direct 3D reconstruction feasible. Following a short review of reconstruction methods and their extensions to 3D, we focus on numerical aspects that improve reconstruction performance, which is especially important in solving large equation systems in 3D iterative reconstruction. Finally exemplary results are shown to review the properties of the discussed algorithms. (orig.)
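
    The iterative reconstruction discussed above can be illustrated with a toy matrix-based MLEM loop. Real 3D PET reconstruction uses on-the-fly projectors and physical corrections rather than an explicit system matrix, so this is only a schematic sketch with assumed inputs.

        import numpy as np

        def mlem(sino, A, n_iter=20):
            """Basic MLEM iterations: sino is the measured data (n_bins,) and A is an
            explicit (n_bins x n_voxels) system matrix."""
            x = np.ones(A.shape[1])                   # start from a uniform image
            sens = A.T @ np.ones(A.shape[0])          # sensitivity image
            for _ in range(n_iter):
                proj = A @ x                          # forward projection
                ratio = sino / np.maximum(proj, 1e-12)
                x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # multiplicative update
            return x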

  7. 3D reconstruction of SEM images by use of optical photogrammetry software.

    Science.gov (United States)

    Eulitz, Mona; Reiss, Gebhard

    2015-08-01

    Reconstruction of the three-dimensional (3D) surface of an object to be examined is widely used for structure analysis in science and many biological questions require information about their true 3D structure. For Scanning Electron Microscopy (SEM) there has been no efficient non-destructive solution for reconstruction of the surface morphology to date. The well-known method of recording stereo pair images generates a 3D stereoscopic reconstruction of a section, but not of the complete sample surface. We present a simple and non-destructive method of 3D surface reconstruction from SEM samples based on the principles of optical close range photogrammetry. In optical close range photogrammetry a series of overlapping photos is used to generate a 3D model of the surface of an object. We adapted this method to the special SEM requirements. Instead of moving a detector around the object, the object itself was rotated. A series of overlapping photos was stitched and converted into a 3D model using the software commonly used for optical photogrammetry. A rabbit kidney glomerulus was used to demonstrate the workflow of this adaptation. The reconstruction produced a realistic and high-resolution 3D mesh model of the glomerular surface. The study showed that SEM micrographs are suitable for 3D reconstruction by optical photogrammetry. This new approach is a simple and useful method of 3D surface reconstruction and suitable for various applications in research and teaching. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer.

    Science.gov (United States)

    Douglas, David B; Boone, John M; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    The purpose of this study was to display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. A case of breast cancer imaged using contrast-enhanced breast CT (computed tomography) was viewed with the augmented reality imaging system, which uses a head display unit (HDU) and a joystick control interface. The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence, and the use of a 3D cursor; a joystick-enabled fly-through allowed visualization of the spiculations extending from the breast cancer. The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine its utility in clinical practice.

  9. 3D digital stereophotogrammetry: a practical guide to facial image acquisition

    Directory of Open Access Journals (Sweden)

    Upson Kristen

    2010-07-01

    Full Text Available Abstract The use of 3D surface imaging technology is becoming increasingly common in craniofacial clinics and research centers. Due to fast capture speeds and ease of use, 3D digital stereophotogrammetry is quickly becoming the preferred facial surface imaging modality. These systems can serve as an unparalleled tool for craniofacial surgeons, providing an objective digital archive of the patient's face without exposure to radiation. Acquiring consistent high-quality 3D facial captures requires planning and knowledge of the limitations of these devices. Currently, there are few resources available to help new users of this technology with the challenges they will inevitably confront. To address this deficit, this report will highlight a number of common issues that can interfere with the 3D capture process and offer practical solutions to optimize image quality.

  10. Technical Note: A 3-D rendering algorithm for electromechanical wave imaging of a beating heart.

    Science.gov (United States)

    Nauleau, Pierre; Melki, Lea; Wan, Elaine; Konofagou, Elisa

    2017-09-01

    Arrhythmias can be treated by ablating the heart tissue in the regions of abnormal contraction. The current clinical standard provides electroanatomic 3-D maps to visualize the electrical activation and locate the arrhythmogenic sources. However, the procedure is time-consuming and invasive. Electromechanical wave imaging is an ultrasound-based noninvasive technique that can provide 2-D maps of the electromechanical activation of the heart. In order to fully visualize the complex 3-D pattern of activation, several 2-D views are acquired and processed separately. They are then manually registered with a 3-D rendering software to generate a pseudo-3-D map. However, this last step is operator-dependent and time-consuming. This paper presents a method to generate a full 3-D map of the electromechanical activation using multiple 2-D images. Two canine models were considered to illustrate the method: one in normal sinus rhythm and one paced from the lateral region of the heart. Four standard echographic views of each canine heart were acquired. Electromechanical wave imaging was applied to generate four 2-D activation maps of the left ventricle. The radial positions and activation timings of the walls were automatically extracted from those maps. In each slice, from apex to base, these values were interpolated around the circumference to generate a full 3-D map. In both cases, a 3-D activation map and a cine-loop of the propagation of the electromechanical wave were automatically generated. The 3-D map showing the electromechanical activation timings overlaid on realistic anatomy assists with the visualization of the sources of earlier activation (which are potential arrhythmogenic sources). The earliest sources of activation corresponded to the expected ones: septum for the normal rhythm and lateral for the pacing case. The proposed technique provides, automatically, a 3-D electromechanical activation map with a realistic anatomy. This represents a step towards a fully noninvasive, automatic 3-D mapping of the electromechanical activation of the heart.
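
    The per-slice circumferential interpolation step described in this record can be sketched as follows; the wall angles and activation timings below are invented values, and periodic linear interpolation is only one plausible way to fill in the circumference.

```python
# Interpolate activation timings around the circumference of one short-axis slice.
import numpy as np

# Hypothetical sparse samples from a few echographic views: angle (rad) -> timing (ms).
angles = np.array([0.0, 0.8, 1.6, 2.4, 3.1, 3.9, 4.7, 5.5])
timings = np.array([35.0, 40.0, 52.0, 60.0, 58.0, 47.0, 41.0, 36.0])

# Dense circumference; wrap the first sample to 2*pi so the interpolation is periodic.
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
ang_p = np.concatenate([angles, [angles[0] + 2.0 * np.pi]])
tim_p = np.concatenate([timings, [timings[0]]])
dense_timing = np.interp(theta, ang_p, tim_p)

# Repeating this for every slice from apex to base stacks into a full 3D activation map.
print(dense_timing.shape, dense_timing.min(), dense_timing.max())
```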

  11. Fast segmentation of stained nuclei in terabyte-scale, time resolved 3D microscopy image stacks.

    Directory of Open Access Journals (Sweden)

    Johannes Stegmaier

    Full Text Available Automated analysis of multi-dimensional microscopy images has become an integral part of modern research in life science. Most available algorithms that provide sufficient segmentation quality, however, are infeasible for a large amount of data due to their high complexity. In this contribution we present a fast parallelized segmentation method that is especially suited for the extraction of stained nuclei from microscopy images, e.g., of developing zebrafish embryos. The idea is to transform the input image based on gradient and normal directions in the proximity of detected seed points such that it can be handled by straightforward global thresholding like Otsu's method. We evaluate the quality of the obtained segmentation results on a set of real and simulated benchmark images in 2D and 3D and show the algorithm's superior performance compared to other state-of-the-art algorithms. We achieve an up to ten-fold decrease in processing times, allowing us to process large data sets while still providing reasonable segmentation results.
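
    A stripped-down version of the 'transform, then threshold globally' idea might look like the sketch below, which applies plain Otsu thresholding to a smoothed synthetic volume; the gradient/normal-based transform and seed-point detection of the actual method are not reproduced here.

```python
# Global Otsu thresholding plus connected-component labelling of a 3D stack.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.filters import threshold_otsu
from skimage.measure import label

rng = np.random.default_rng(1)
stack = rng.normal(100.0, 10.0, (32, 128, 128))   # stand-in microscopy volume
stack[10:20, 40:60, 40:60] += 80.0                # a fake bright "nucleus"

smoothed = gaussian_filter(stack, sigma=1.5)      # mild denoising
binary = smoothed > threshold_otsu(smoothed)      # single global threshold
nuclei = label(binary)                            # 3D connected components
print("detected objects:", nuclei.max())
```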

  12. Feasibility of fabricating personalized 3D-printed bone grafts guided by high-resolution imaging

    Science.gov (United States)

    Hong, Abigail L.; Newman, Benjamin T.; Khalid, Arbab; Teter, Olivia M.; Kobe, Elizabeth A.; Shukurova, Malika; Shinde, Rohit; Sipzner, Daniel; Pignolo, Robert J.; Udupa, Jayaram K.; Rajapakse, Chamith S.

    2017-03-01

    Current methods of bone graft treatment for critical size bone defects can lead to several clinical complications, such as limited available bone for autografts, non-matching bone structure, lack of strength which can compromise a patient's skeletal system, and sterilization processes that can prevent osteogenesis in the case of allografts. We intend to overcome these disadvantages by generating a patient-specific 3D printed bone graft guided by high-resolution medical imaging. Our synthetic model allows us to customize the graft for the patient's macro- and microstructure and correct any structural deficiencies in the re-meshing process. These 3D-printed models can presumptively serve as the scaffolding for human mesenchymal stem cell (hMSC) engraftment in order to facilitate bone growth. We performed high-resolution CT imaging of a cadaveric human proximal femur at 0.030-mm isotropic voxels. We used these images to generate a 3D computer model that mimics bone geometry from micro to macro scale, represented in STereoLithography (STL) format. These models were then reformatted to a format that can be interpreted by the 3D printer. To assess how much of the microstructure was replicated, 3D-printed models were re-imaged using micro-CT at 0.025-mm isotropic voxels and compared to the original high-resolution CT images used to generate the 3D model in 32 sub-regions. We found a strong correlation between 3D-printed bone volume and the volume of bone in the original images used for 3D printing (R2 = 0.97). We expect to further refine our approach with additional testing to create a viable synthetic bone graft with clinical functionality.
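
    The step of turning a binarized bone volume into a printable surface mesh can be illustrated with a standard marching-cubes call; the synthetic "bone" volume, the threshold level, and the voxel spacing below are placeholders rather than the authors' actual data.

```python
# Turn a (hypothetical) binarized bone volume into a triangle mesh suitable for STL export.
import numpy as np
from skimage.measure import marching_cubes

# Stand-in "bone" volume: a hollow cylinder of ones in a zero background.
z, y, x = np.mgrid[0:64, 0:64, 0:64]
r = np.sqrt((y - 32) ** 2 + (x - 32) ** 2)
bone = ((r > 12) & (r < 20)).astype(np.float32)

# Extract the isosurface at 0.5; spacing would be the CT voxel size (e.g. 0.030 mm isotropic).
verts, faces, normals, values = marching_cubes(bone, level=0.5, spacing=(0.03, 0.03, 0.03))
print(verts.shape, faces.shape)  # the vertices/faces can then be written out as an STL file
```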

  13. Creation of 3D Multi-Body Orthodontic Models by Using Independent Imaging Sensors

    Directory of Open Access Journals (Sweden)

    Armando Viviano Razionale

    2013-02-01

    Full Text Available In the field of dental health care, plaster models combined with 2D radiographs are widely used in clinical practice for orthodontic diagnoses. However, complex malocclusions can be better analyzed by exploiting 3D digital dental models, which allow virtual simulations and treatment planning processes. In this paper, dental data captured by independent imaging sensors are fused to create multi-body orthodontic models composed of teeth, oral soft tissues and alveolar bone structures. The methodology is based on integrating Cone-Beam Computed Tomography (CBCT) and surface structured light scanning. The optical scanner is used to reconstruct tooth crowns and soft tissues (visible surfaces) through the digitization of both patients' mouth impressions and plaster casts. These data are also used to guide the segmentation of internal dental tissues by processing CBCT data sets. The 3D individual dental tissues obtained by the optical scanner and the CBCT sensor are fused within multi-body orthodontic models without human supervision to identify target anatomical structures. The final multi-body models represent valuable virtual platforms for clinical diagnosis and treatment planning.

  14. Endoscopic Laser-Based 3D Imaging for Functional Voice Diagnostics

    Directory of Open Access Journals (Sweden)

    Marion Semmler

    2017-06-01

    Full Text Available Recently, we reported on the in vivo application of a miniaturized measuring device for 3D visualization of the superior vocal fold vibrations from high-speed recordings in combination with a laser projection unit (LPU). As a long-term vision for this proof of principle, we strive to integrate the further-developed laser endoscopy as a diagnostic method in daily clinical routine. The new LPU mainly comprises an Nd:YAG laser source (532 nm/CW/2ω) and a diffractive optical element (DOE) generating a regular laser grid (31 × 31 laser points) that is projected onto the vocal folds. By means of stereo triangulation, the 3D coordinates of the laser points are reconstructed from the endoscopic high-speed footage. The new design of the laser endoscope constitutes a compromise between robust image processing and laser safety regulations. The algorithms for calibration and analysis are now optimized with respect to their overall duration and the number of required interactions, which is objectively assessed using binary classifiers. The sensitivity and specificity of the calibration procedure are increased by 40.1% and 22.3%, respectively, which is statistically significant. The overall duration for the laser point detection is reduced by 41.9%. The suggested semi-automatic reconstruction software represents an important stepping-stone towards potential real-time processing and a comprehensive, objective diagnostic tool of evidence-based medicine.
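
    The stereo-triangulation step can be illustrated with a plain direct linear transform (DLT) on one laser point, using made-up projection matrices; the real system's calibration, laser-point detection, and safety considerations are outside the scope of this sketch.

```python
# Two-view triangulation of a single point via the direct linear transform (DLT).
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Least-squares 3D point from two 3x4 projection matrices and two pixel coordinates."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Hypothetical calibrated projection matrices for the endoscope camera and the laser projector.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-10.0], [0.0], [0.0]])])  # 10 mm baseline

X_true = np.array([5.0, -2.0, 60.0])
uv1 = P1 @ np.append(X_true, 1.0); uv1 = uv1[:2] / uv1[2]
uv2 = P2 @ np.append(X_true, 1.0); uv2 = uv2[:2] / uv2[2]
print(triangulate(P1, P2, uv1, uv2))  # should recover approximately X_true
```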

  15. 3D and 4D magnetic susceptibility tomography based on complex MR images

    Science.gov (United States)

    Chen, Zikuan; Calhoun, Vince D

    2014-11-11

    Magnetic susceptibility is the physical property for T2*-weighted magnetic resonance imaging (T2*MRI). The invention relates to methods for reconstructing an internal distribution (3D map) of magnetic susceptibility values, .chi. (x,y,z), of an object, from 3D T2*MRI phase images, by using Computed Inverse Magnetic Resonance Imaging (CIMRI) tomography. The CIMRI technique solves the inverse problem of the 3D convolution by executing a 3D Total Variation (TV) regularized iterative convolution scheme, using a split Bregman iteration algorithm. The reconstruction of .chi. (x,y,z) can be designed for low-pass, band-pass, and high-pass features by using a convolution kernel that is modified from the standard dipole kernel. Multiple reconstructions can be implemented in parallel, and averaging the reconstructions can suppress noise. 4D dynamic magnetic susceptibility tomography can be implemented by reconstructing a 3D susceptibility volume from a 3D phase volume by performing 3D CIMRI magnetic susceptibility tomography at each snapshot time.
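
    A minimal numerical illustration of the underlying dipole-convolution model is given below; it uses a simple thresholded k-space division (TKD) as a stand-in for the TV-regularized split Bregman reconstruction described in the record, and the grid size, susceptibility values, and threshold are arbitrary.

```python
# Forward dipole convolution and a naive thresholded k-space inversion (TKD).
import numpy as np

def dipole_kernel(shape):
    """k-space dipole kernel D(k) = 1/3 - kz^2 / |k|^2 (main field along the z axis)."""
    kz, ky, kx = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0              # avoid division by zero at the k-space origin
    D = 1.0 / 3.0 - kz**2 / k2
    D[0, 0, 0] = 0.0               # the kernel is undefined at k = 0
    return D

shape = (64, 64, 64)
chi = np.zeros(shape)
chi[24:40, 24:40, 24:40] = 1e-6    # toy susceptibility distribution (dimensionless)

D = dipole_kernel(shape)
field = np.real(np.fft.ifftn(D * np.fft.fftn(chi)))   # forward model: field = F^-1{ D . F{chi} }

# TKD: invert only where the kernel is safely away from zero.
thr = 0.1
mask = np.abs(D) > thr
D_inv = np.zeros_like(D)
D_inv[mask] = 1.0 / D[mask]
chi_rec = np.real(np.fft.ifftn(D_inv * np.fft.fftn(field)))
print("mean absolute reconstruction error:", np.abs(chi_rec - chi).mean())
```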

  16. 3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

    International Nuclear Information System (INIS)

    Dhou, S; Hurwitz, M; Cai, W; Rottmann, J; Williams, C; Wagar, M; Berbeco, R; Lewis, J H; Mishra, P; Li, R; Ionascu, D

    2015-01-01

    3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built using 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study we developed and performed initial evaluation of techniques to develop patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models are compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability for 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in average tumor localization error and the 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery. (paper)

  17. 3D temporal subtraction on multislice CT images using nonlinear warping technique

    Science.gov (United States)

    Ishida, Takayuki; Katsuragawa, Shigehiko; Kawashita, Ikuo; Kim, Hyounseop; Itai, Yoshinori; Awai, Kazuo; Li, Qiang; Doi, Kunio

    2007-03-01

    The detection of very subtle lesions and/or lesions overlapped with vessels on CT images is a time-consuming and difficult task for radiologists. In this study, we have developed a 3D temporal subtraction method to enhance interval changes between previous and current multislice CT images based on a nonlinear image warping technique. Our method provides a subtraction CT image which is obtained by subtraction of a previous CT image from a current CT image. Reduction of misregistration artifacts is important in the temporal subtraction method. Therefore, our computerized method includes global and local image matching techniques for accurate registration of current and previous CT images. For global image matching, we selected the corresponding previous section image for each current section image by using 2D cross-correlation between a blurred low-resolution current CT image and a blurred previous CT image. For local image matching, we applied the 3D template matching technique with translation and rotation of volumes of interest (VOIs) which were selected in the current and the previous CT images. The local shift vector for each VOI pair was determined when the cross-correlation value became the maximum in the 3D template matching. The local shift vectors at all voxels were determined by interpolation of the shift vectors of the VOIs, and then the previous CT image was nonlinearly warped according to the shift vector for each voxel. Finally, the warped previous CT image was subtracted from the current CT image. The 3D temporal subtraction method was applied to 19 clinical cases. The normal background structures such as vessels, ribs, and heart were removed without large misregistration artifacts. Thus, interval changes due to lung diseases were clearly enhanced as white shadows on subtraction CT images.
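
    The local 3D template matching step can be sketched as a brute-force search for the integer shift that maximizes the normalized cross-correlation between a current-image VOI and the previous image; the VOI size, search range, and synthetic volumes below are placeholders, and the rotation search and shift-vector interpolation of the full method are omitted.

```python
# Brute-force local 3D matching: find the integer shift that maximizes the normalized
# cross-correlation (NCC) between a current-image VOI and the previous volume.
import numpy as np

def ncc(a, b):
    a = a - a.mean(); b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def best_shift(current, previous, center, voi=8, search=3):
    cz, cy, cx = center
    cur = current[cz - voi:cz + voi, cy - voi:cy + voi, cx - voi:cx + voi]
    best, best_s = -2.0, (0, 0, 0)
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                prev = previous[cz + dz - voi:cz + dz + voi,
                                cy + dy - voi:cy + dy + voi,
                                cx + dx - voi:cx + dx + voi]
                score = ncc(cur, prev)
                if score > best:
                    best, best_s = score, (dz, dy, dx)
    return best_s  # local shift vector for this VOI

rng = np.random.default_rng(3)
previous = rng.normal(size=(48, 48, 48))
current = np.roll(previous, shift=(-1, 2, 0), axis=(0, 1, 2))   # known displacement
print(best_shift(current, previous, center=(24, 24, 24)))       # expect (1, -2, 0)
```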

  18. Model-Based Referenceless Quality Metric of 3D Synthesized Images Using Local Image Description.

    Science.gov (United States)

    Gu, Ke; Jakhetiya, Vinit; Qiao, Jun-Fei; Li, Xiaoli; Lin, Weisi; Thalmann, Daniel

    2017-07-28

    New challenges have emerged along with 3D-related technologies such as virtual reality (VR), augmented reality (AR), and mixed reality (MR). Free viewpoint video (FVV), due to its applications in remote surveillance, remote education, etc., based on the flexible selection of direction and viewpoint, has been perceived as the development direction of next-generation video technologies and has drawn a wide range of researchers' attention. Since FVV images are synthesized via a depth image-based rendering (DIBR) procedure in the "blind" environment (without reference images), a reliable real-time blind quality evaluation and monitoring system is urgently required. But existing assessment metrics do not render human judgments faithfully, mainly because geometric distortions are generated by DIBR. To this end, this paper proposes a novel referenceless quality metric of DIBR-synthesized images using autoregression (AR)-based local image description. It was found that, after the AR prediction, the reconstruction error between a DIBR-synthesized image and its AR-predicted image can accurately capture the geometric distortion. Visual saliency is then leveraged to improve the proposed blind quality metric by a sizable margin. Experiments validate the superiority of our no-reference quality method as compared with prevailing full-, reduced- and no-reference models.

  19. In vivo 3D PIXE-micron-CT imaging of Drosophila melanogaster using a contrast agent

    Energy Technology Data Exchange (ETDEWEB)

    Matsuyama, Shigeo; Hamada, Naoki; Ishii, Keizo; Nozawa, Yuichiro; Ohkura, Satoru; Terakawa, Atsuki; Hatori, Yoshinobu; Fujiki, Kota; Fujiwara, Mitsuhiro; Toyama, Sho

    2015-04-01

    In this study, we developed a three-dimensional (3D) computed tomography (CT) in vivo imaging system for imaging small insects with micrometer resolution. The 3D CT imaging system, referred to as 3D PIXE-micron-CT (PIXEμCT), uses characteristic X-rays produced by ion microbeam bombardment of a metal target. PIXEμCT was used to observe the body organs and internal structure of a living Drosophila melanogaster. Although the organs of the thorax were clearly imaged, the digestive organs in the abdominal cavity could not be clearly discerned initially, with the exception of the rectum and the Malpighian tubule. To enhance the abdominal images, a barium sulfate powder radiocontrast agent was added. For the first time, 3D images of the ventriculus of a living D. melanogaster were obtained. Our results showed that PIXEμCT can provide in vivo 3D-CT images that reflect correctly the structure of individual living organs, which is expected to be very useful in biological research.

  20. Kalisphera: an analytical tool to reproduce the partial volume effect of spheres imaged in 3D

    International Nuclear Information System (INIS)

    Tengattini, Alessandro; Andò, Edward

    2015-01-01

    In experimental mechanics, where 3D imaging is having a profound effect, spheres are commonly adopted for their simplicity and for the ease of their modeling. In this contribution we develop an analytical tool, ‘kalisphera’, to produce 3D raster images of spheres including their partial volume effect. This allows us to evaluate the metrological performance of existing image-based measurement techniques (knowing a priori the ground truth). An advanced application of ‘kalisphera’ is developed here to identify and accurately characterize spheres in real 3D x-ray tomography images with the objective of improving trinarization and contact detection. The effect of the common experimental imperfections is assessed and the overall performance of the tool tested on real images. (paper)
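
    Kalisphera computes the partial volume analytically; the sketch below approximates the same quantity numerically by supersampling each voxel, purely to illustrate what a partial-volume image of a sphere is (the grid size, centre, radius, and supersampling factor are arbitrary).

```python
# Numerical approximation of a sphere's partial-volume image (the real tool is analytical).
import numpy as np

def sphere_partial_volume(shape, center, radius, supersample=5):
    """Fraction of each voxel occupied by the sphere, estimated by sub-voxel sampling."""
    offsets = (np.arange(supersample) + 0.5) / supersample   # sub-voxel sample positions
    vol = np.zeros(shape)
    zz, yy, xx = np.meshgrid(*[np.arange(n) for n in shape], indexing="ij")
    for oz in offsets:
        for oy in offsets:
            for ox in offsets:
                d2 = ((zz + oz - center[0]) ** 2 +
                      (yy + oy - center[1]) ** 2 +
                      (xx + ox - center[2]) ** 2)
                vol += (d2 <= radius ** 2)
    return vol / supersample ** 3

img = sphere_partial_volume((20, 20, 20), center=(10.3, 9.7, 10.0), radius=6.2)
# The total greyscale "mass" should approach the analytical sphere volume in voxel units.
print(img.sum(), 4.0 / 3.0 * np.pi * 6.2 ** 3)
```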

  1. Technical evaluation of DIC helical CT and 3D image for laparoscopic cholecystectomy

    International Nuclear Information System (INIS)

    Shibuya, Kouki; Uchimura, Fumiaki; Haga, Tomo

    1995-01-01

    Recently, laparoscopic cholecystectomy (LC) has been widely accepted for its low invasiveness. Before LC, it is important to understand the anatomy of the biliary tree. We performed DIC helical CT before LC and reconstructed 3D cholangiographic images. We evaluated the physical potential of helical CT using section sensitivity profiles (SSP) with 5 and 10 mm slice thickness on 360° linear interpolation, and we analyzed which 3D image was most useful for the biliary tree. Results showed that the SSP depended on slice thickness (X-ray beam width) and table movement at the same reconstruction spacing. The peak of the SSP depended on slice thickness (X-ray beam width) and reconstruction spacing at the same table movement. Clinically, a table movement of 5 mm/rotation or less and a slice thickness of 5 mm were necessary for acquiring volume image data. The 3D cholangiographic image reconstructed with 1 mm spacing was useful in evaluating the anatomical relationships of the biliary tree. (author)

  2. Technical Note: Characterization of custom 3D printed multimodality imaging phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Bieniosek, Matthew F. [Department of Electrical Engineering, Stanford University, 350 Serra Mall, Stanford, California 94305 (United States); Lee, Brian J. [Department of Mechanical Engineering, Stanford University, 440 Escondido Mall, Stanford, California 94305 (United States); Levin, Craig S., E-mail: cslevin@stanford.edu [Departments of Radiology, Physics, Bioengineering and Electrical Engineering, Stanford University, 300 Pasteur Dr., Stanford, California 94305-5128 (United States)

    2015-10-15

    Purpose: Imaging phantoms are important tools for researchers and technicians, but they can be costly and difficult to customize. Three dimensional (3D) printing is a widely available rapid prototyping technique that enables the fabrication of objects with 3D computer generated geometries. It is ideal for quickly producing customized, low cost, multimodal, reusable imaging phantoms. This work validates the use of 3D printed phantoms by comparing CT and PET scans of a 3D printed phantom and a commercial “Micro Deluxe” phantom. This report also presents results from a customized 3D printed PET/MRI phantom, and a customized high resolution imaging phantom with sub-mm features. Methods: CT and PET scans of a 3D printed phantom and a commercial Micro Deluxe (Data Spectrum Corporation, USA) phantom with 1.2, 1.6, 2.4, 3.2, 4.0, and 4.8 mm diameter hot rods were acquired. The measured PET and CT rod sizes, activities, and attenuation coefficients were compared. A PET/MRI scan of a custom 3D printed phantom with hot and cold rods was performed, with photon attenuation and normalization measurements performed with a separate 3D printed normalization phantom. X-ray transmission scans of a customized two level high resolution 3D printed phantom with sub-mm features were also performed. Results: Results show very good agreement between commercial and 3D printed Micro Deluxe phantoms with less than 3% difference in CT measured rod diameter, less than 5% difference in PET measured rod diameter, and a maximum of 6.2% difference in average rod activity from a 10 min, 333 kBq/ml (9 μCi/ml) Siemens Inveon (Siemens Healthcare, Germany) PET scan. In all cases, these differences were within the measurement uncertainties of our setups. PET/MRI scans successfully identified 3D printed hot and cold rods on PET and MRI modalities. X-ray projection images of a 3D printed high resolution phantom identified features as small as 350 μm wide. Conclusions: This work shows that 3D printed phantoms can be functionally equivalent to commercially available phantoms.

  3. Technical Note: Characterization of custom 3D printed multimodality imaging phantoms

    International Nuclear Information System (INIS)

    Bieniosek, Matthew F.; Lee, Brian J.; Levin, Craig S.

    2015-01-01

    Purpose: Imaging phantoms are important tools for researchers and technicians, but they can be costly and difficult to customize. Three dimensional (3D) printing is a widely available rapid prototyping technique that enables the fabrication of objects with 3D computer generated geometries. It is ideal for quickly producing customized, low cost, multimodal, reusable imaging phantoms. This work validates the use of 3D printed phantoms by comparing CT and PET scans of a 3D printed phantom and a commercial “Micro Deluxe” phantom. This report also presents results from a customized 3D printed PET/MRI phantom, and a customized high resolution imaging phantom with sub-mm features. Methods: CT and PET scans of a 3D printed phantom and a commercial Micro Deluxe (Data Spectrum Corporation, USA) phantom with 1.2, 1.6, 2.4, 3.2, 4.0, and 4.8 mm diameter hot rods were acquired. The measured PET and CT rod sizes, activities, and attenuation coefficients were compared. A PET/MRI scan of a custom 3D printed phantom with hot and cold rods was performed, with photon attenuation and normalization measurements performed with a separate 3D printed normalization phantom. X-ray transmission scans of a customized two level high resolution 3D printed phantom with sub-mm features were also performed. Results: Results show very good agreement between commercial and 3D printed Micro Deluxe phantoms with less than 3% difference in CT measured rod diameter, less than 5% difference in PET measured rod diameter, and a maximum of 6.2% difference in average rod activity from a 10 min, 333 kBq/ml (9 μCi/ml) Siemens Inveon (Siemens Healthcare, Germany) PET scan. In all cases, these differences were within the measurement uncertainties of our setups. PET/MRI scans successfully identified 3D printed hot and cold rods on PET and MRI modalities. X-ray projection images of a 3D printed high resolution phantom identified features as small as 350 μm wide. Conclusions: This work shows that 3D printed phantoms can be functionally equivalent to commercially available phantoms.

  4. Technical Note: Characterization of custom 3D printed multimodality imaging phantoms.

    Science.gov (United States)

    Bieniosek, Matthew F; Lee, Brian J; Levin, Craig S

    2015-10-01

    Imaging phantoms are important tools for researchers and technicians, but they can be costly and difficult to customize. Three dimensional (3D) printing is a widely available rapid prototyping technique that enables the fabrication of objects with 3D computer generated geometries. It is ideal for quickly producing customized, low cost, multimodal, reusable imaging phantoms. This work validates the use of 3D printed phantoms by comparing CT and PET scans of a 3D printed phantom and a commercial "Micro Deluxe" phantom. This report also presents results from a customized 3D printed PET/MRI phantom, and a customized high resolution imaging phantom with sub-mm features. CT and PET scans of a 3D printed phantom and a commercial Micro Deluxe (Data Spectrum Corporation, USA) phantom with 1.2, 1.6, 2.4, 3.2, 4.0, and 4.8 mm diameter hot rods were acquired. The measured PET and CT rod sizes, activities, and attenuation coefficients were compared. A PET/MRI scan of a custom 3D printed phantom with hot and cold rods was performed, with photon attenuation and normalization measurements performed with a separate 3D printed normalization phantom. X-ray transmission scans of a customized two level high resolution 3D printed phantom with sub-mm features were also performed. Results show very good agreement between commercial and 3D printed Micro Deluxe phantoms with less than 3% difference in CT measured rod diameter, less than 5% difference in PET measured rod diameter, and a maximum of 6.2% difference in average rod activity from a 10 min, 333 kBq/ml (9 μCi/ml) Siemens Inveon (Siemens Healthcare, Germany) PET scan. In all cases, these differences were within the measurement uncertainties of our setups. PET/MRI scans successfully identified 3D printed hot and cold rods on PET and MRI modalities. X-ray projection images of a 3D printed high resolution phantom identified features as small as 350 μm wide. This work shows that 3D printed phantoms can be functionally equivalent to commercially available phantoms.

  5. Design for High Performance, Low Power, and Reliable 3D Integrated Circuits

    CERN Document Server

    Lim, Sung Kyu

    2013-01-01

    This book describes the design of through-silicon-via (TSV) based three-dimensional integrated circuits.  It includes details of numerous “manufacturing-ready” GDSII-level layouts of TSV-based 3D ICs, developed with tools covered in the book. Readers will benefit from the sign-off level analysis of timing, power, signal integrity, and thermo-mechanical reliability for 3D IC designs.  Coverage also includes various design-for-manufacturability (DFM), design-for-reliability (DFR), and design-for-testability (DFT) techniques that are considered critical to the 3D IC design process. Describes design issues and solutions for high performance and low power 3D ICs, such as the pros/cons of regular and irregular placement of TSVs, Steiner routing, buffer insertion, low power 3D clock routing, power delivery network design and clock design for pre-bond testability. Discusses topics in design-for-electrical-reliability for 3D ICs, such as TSV-to-TSV coupling, current crowding at the wire-to-TSV junction and the e...

  6. Effects of point configuration on the accuracy in 3D reconstruction from biplane images

    International Nuclear Information System (INIS)

    Dmochowski, Jacek; Hoffmann, Kenneth R.; Singh, Vikas; Xu Jinhui; Nazareth, Daryl P.

    2005-01-01

    Two or more angiograms are frequently used in medical imaging to reconstruct locations in three-dimensional (3D) space, e.g., for reconstruction of 3D vascular trees, implanted electrodes, or patient positioning. A number of techniques have been proposed for this task. In this simulation study, we investigate the effect of the shape of the configuration of the points in 3D (the 'cloud' of points) on reconstruction errors for one of these techniques developed in our laboratory. Five types of configurations (a ball, an elongated ellipsoid (cigar), flattened ball (pancake), flattened cigar, and a flattened ball with a single distant point) are used in the evaluations. For each shape, 100 random configurations were generated, with point coordinates chosen from Gaussian distributions having a covariance matrix corresponding to the desired shape. The 3D data were projected into the image planes using a known imaging geometry. Gaussian distributed errors were introduced in the x and y coordinates of these projected points. Gaussian distributed errors were also introduced into the gantry information used to calculate the initial imaging geometry. The imaging geometries and 3D positions were iteratively refined using the enhanced-Metz-Fencil technique. The image data were also used to evaluate the feasible R-t solution volume. The 3D errors between the calculated and true positions were determined. The effects of the shape of the configuration, the number of points, the initial geometry error, and the input image error were evaluated. The results for the number of points, initial geometry error, and image error are in agreement with previously reported results, i.e., increasing the number of points and reducing the initial geometry and/or image error improves the accuracy of the reconstructed data. The shape of the 3D configuration of points also affects the error of the reconstructed 3D configuration; specifically, errors decrease as the 'volume' of the 3D configuration increases.
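
    The generation of shaped point "clouds" and their noisy projection into two views can be sketched as below; the covariances, imaging geometry, and noise levels are arbitrary choices, and the enhanced-Metz-Fencil refinement itself is not reproduced.

```python
# Generate point configurations of different "shapes" and project them into two views.
import numpy as np

rng = np.random.default_rng(4)

def make_cloud(n, sigmas):
    """Zero-mean Gaussian cloud with per-axis standard deviations (ball, cigar, pancake...)."""
    return rng.normal(size=(n, 3)) * np.asarray(sigmas)

def project(points, R, t, focal=1000.0, pixel_sigma=0.5):
    """Pinhole projection of Nx3 points with additive Gaussian image noise."""
    cam = points @ R.T + t                  # world -> camera coordinates
    uv = focal * cam[:, :2] / cam[:, 2:3]   # perspective division
    return uv + rng.normal(scale=pixel_sigma, size=uv.shape)

ball    = make_cloud(50, (20.0, 20.0, 20.0))
cigar   = make_cloud(50, (60.0, 10.0, 10.0))
pancake = make_cloud(50, (40.0, 40.0, 5.0))

R1, t1 = np.eye(3), np.array([0.0, 0.0, 800.0])                        # frontal view
R2 = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])    # roughly orthogonal view
t2 = np.array([0.0, 0.0, 800.0])
for name, cloud in [("ball", ball), ("cigar", cigar), ("pancake", pancake)]:
    uv1, uv2 = project(cloud, R1, t1), project(cloud, R2, t2)
    print(name, uv1.std(axis=0), uv2.std(axis=0))
```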

  7. The Use of 3d City Models Form Oblique Images on Land Administration

    Science.gov (United States)

    Bakici, S.; Erkek, B.; Ayyildiz, E.; Özmüş, L.

    2017-11-01

    Article 718 of the civil law, stating "The ownership on property includes the air above and terrain layers below to an extent providing benefit. The structures, plants and sources are included in the content of this ownership reserving the legal restrictions", and the cadastre law no. 3402 envisage a 3D cadastre. 3D data is required in order to perform 3D cadastre. To meet this requirement, oblique photogrammetry arises as the main data acquisition method. The data obtained by this method are used as a basis for 3D cadastre and land administration activities. The 3D cadastre required in the context of land administration activities in Turkey demands high-resolution aerial oblique images to be used in services such as real estate value assessment & marketing in urban areas, urban planning, unlicensed construction monitoring & city administration, and making location data (national address data etc.) intelligent.

  8. THE USE OF 3D CITY MODELS FORM OBLIQUE IMAGES ON LAND ADMINISTRATION

    Directory of Open Access Journals (Sweden)

    S. Bakici

    2017-11-01

    Full Text Available Article 718 of the civil law, stating “The ownership on property includes the air above and terrain layers below to an extent providing benefit. The structures, plants and sources are included in the content of this ownership reserving the legal restrictions”, and the cadastre law no. 3402 envisage a 3D cadastre. 3D data is required in order to perform 3D cadastre. To meet this requirement, oblique photogrammetry arises as the main data acquisition method. The data obtained by this method are used as a basis for 3D cadastre and land administration activities. The 3D cadastre required in the context of land administration activities in Turkey demands high-resolution aerial oblique images to be used in services such as real estate value assessment & marketing in urban areas, urban planning, unlicensed construction monitoring & city administration, and making location data (national address data etc.) intelligent.

  9. Integrated calibration of a 3D attitude sensor in large-scale metrology

    International Nuclear Information System (INIS)

    Gao, Yang; Lin, Jiarui; Yang, Linghui; Zhu, Jigui; Muelaner, Jody; Keogh, Patrick

    2017-01-01

    A novel calibration method is presented for a multi-sensor fusion system in large-scale metrology, which improves the calibration efficiency and reliability. The attitude sensor is composed of a pinhole prism, a converging lens, an area-array camera and a biaxial inclinometer. A mathematical model is established to determine its 3D attitude relative to a cooperative total station by using two vector observations from the imaging system and the inclinometer. There are two areas of unknown parameters in the measurement model that should be calibrated: the intrinsic parameters of the imaging model, and the transformation matrix between the camera and the inclinometer. An integrated calibration method using a three-axis rotary table and a total station is proposed. A single mounting position of the attitude sensor on the rotary table is sufficient to solve for all parameters of the measurement model. A correction technique for the reference laser beam of the total station is also presented to remove the need for accurate positioning of the sensor on the rotary table. Experimental verification has proved the practicality and accuracy of this calibration method. Results show that the mean deviations of attitude angles using the proposed method are less than 0.01°. (paper)
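
    The record's attitude model combines two vector observations (one from the imaging system, one from the inclinometer). As an illustration only, the classic TRIAD construction below shows one textbook way of obtaining a rotation matrix from two such vector pairs; it is not claimed to be the authors' actual measurement model, and all numerical values are invented.

```python
# TRIAD: attitude (rotation) matrix from two vector observations in body and reference frames.
import numpy as np

def triad(v1_b, v2_b, v1_r, v2_r):
    """Rotation matrix R such that v_r ~= R @ v_b, built from two vector pairs."""
    def basis(a, b):
        t1 = a / np.linalg.norm(a)
        t2 = np.cross(a, b); t2 /= np.linalg.norm(t2)
        t3 = np.cross(t1, t2)
        return np.column_stack([t1, t2, t3])
    return basis(v1_r, v2_r) @ basis(v1_b, v2_b).T

# Hypothetical observations: a laser-beam direction (camera) and gravity (inclinometer),
# expressed in the sensor (body) frame and in the total-station (reference) frame.
rng = np.random.default_rng(5)
angle = np.deg2rad(20.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
beam_r, gravity_r = np.array([1.0, 0.2, 0.1]), np.array([0.0, 0.0, -1.0])
beam_b = R_true.T @ beam_r + rng.normal(scale=1e-3, size=3)      # noisy body-frame observations
gravity_b = R_true.T @ gravity_r + rng.normal(scale=1e-3, size=3)

R_est = triad(beam_b, gravity_b, beam_r, gravity_r)
print(np.round(R_est, 3))   # should be close to R_true
```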

  10. The findings and the role of axial CT imaging and 3D imaging of gastric lesion by spiral CT

    International Nuclear Information System (INIS)

    Lee, Dong Ho; Ko, Young Tae

    1996-01-01

    The purpose of this study is to assess the efficacy of axial CT imaging and 3D imaging by spiral CT in the detection and evaluation of gastric lesions. Seventy-seven patients with pathologically proven gastric lesions underwent axial CT and 3D imaging by spiral CT. There were 49 cases of advanced gastric carcinoma (AGC), 21 of early gastric carcinoma (EGC), three of benign ulcers, three of leiomyomas, and one case of lymphoma. Spiral CT was performed with 3-mm collimation, 4.5 mm/sec table feed, and 1-1.5-mm reconstruction interval after the ingestion of gas. 3D imaging was obtained using the SSD technique, and on analysis a grade was given (excellent, good, poor). Axial CT scanning was performed with 5-mm collimation, 7 mm/sec table feed, and 5-mm reconstruction interval after the ingestion of water. Among 49 cases of AGC, excellent 3D images were obtained in seven patients (14.3%), good 3D images in 30 (61.2%), and poor 3D images in 12 (24.5%). Among the 12 patients with poor images, the cancers were located at the pyloric antrum in eight cases, were AGC Borrmann type 4 in three cases, and an EGC-mimicking lesion in one case. Using axial CT scanning alone, the Borrmann classification based on tumor morphology was accurately identified in 67.3% of cases, but using 3D imaging, the corresponding figure was 85.7%. In 33 cases receiving surgery, good correlation between axial CT scanning and pathology occurred for 72.7% of T classifications and 69.7% of N classifications. Among 21 cases of EGC, excellent 3D images were obtained in three patients (14.3%), good 3D images in 14 (66.7%), and poor 3D images in two (9.5%). The other two cases of EGC were not detected. By axial CT scanning, no tumor was detected in four cases, and there were two doubtful cases. 3D images of three benign ulcers were excellent in one case and good in two. 3D images of three leiomyomas and one lymphoma were excellent. Combined axial CT imaging and 3D imaging by spiral CT has the potential to accurately diagnose gastric lesions other than AGC.

  11. Integrality and separability of multitouch interaction techniques in 3D manipulation tasks.

    Science.gov (United States)

    Martinet, Anthony; Casiez, Géry; Grisoni, Laurent

    2012-03-01

    Multitouch displays represent a promising technology for the display and manipulation of data. While the manipulation of 2D data has been widely explored, 3D manipulation with multitouch displays remains largely unexplored. Based on an analysis of the integration and separation of degrees of freedom, we propose a taxonomy for 3D manipulation techniques with multitouch displays. Using that taxonomy, we introduce Depth-Separated Screen-Space (DS3), a new 3D manipulation technique based on the separation of translation and rotation. In a controlled experiment, we compared DS3 with Sticky Tools and Screen-Space. Results show that separating the control of translation and rotation significantly affects performance for 3D manipulation, with DS3 performing faster than the two other techniques.

  12. Online 3D terrain visualisation using Unity 3D game engine: A comparison of different contour intervals terrain data draped with UAV images

    Science.gov (United States)

    Hafiz Mahayudin, Mohd; Che Mat, Ruzinoor

    2016-06-01

    The main objective of this paper is to discuss the effectiveness of visualising terrain draped with Unmanned Aerial Vehicle (UAV) images generated from different contour intervals using the Unity 3D game engine in an online environment. The study area tested in this project was an oil palm plantation at Sintok, Kedah. The contour data used for this study are divided into three different intervals: 1 m, 3 m and 5 m. ArcGIS software was used to clip the contour data and the UAV image data to a similar size for the overlaying process. The Unity 3D game engine was used as the main platform for developing the system because of its capability to be launched on different platforms. The clipped contour data and UAV image data were processed and exported into web format using Unity 3D. The process continued by publishing to a web server so that the effectiveness of the different 3D terrain data (contour data) draped with UAV images could be compared. The effectiveness is compared based on data size, loading time (office and out-of-office hours), response time, visualisation quality, and frames per second (fps). The results suggest which contour interval is better for developing an effective online 3D terrain visualisation draped with UAV images using the Unity 3D game engine. This benefits decision makers and planners related to this field in deciding which contour interval is applicable for their task.

  13. 3D detector and electronics integration technologies: Applications to ILC, SLHC, and beyond

    International Nuclear Information System (INIS)

    Lipton, Ronald

    2011-01-01

    The application of vertically integrated (3D) electronics to particle physics has been explored by our group for the past several years. We have successfully designed the first vertically integrated demonstrator chip for ILC vertex detection in the three-tier MIT-Lincoln Labs process. We have also studied sensor integration with electronics through oxide bonding and silicon-on-insulator technology. This paper will discuss the status of these studies and prospects for future work.

  14. 3D detector and electronics integration technologies: Applications to ILC, SLHC, and beyond

    Energy Technology Data Exchange (ETDEWEB)

    Lipton, Ronald, E-mail: lipton@fnal.gov [Fermilab, P.O. Box 500, Batavia, IL 60510 (United States)

    2011-04-21

    The application of vertically integrated (3D) electronics to particle physics has been explored by our group for the past several years. We have successfully designed the first vertically integrated demonstrator chip for ILC vertex detection in the three-tier MIT-Lincoln Labs process. We have also studied sensor integration with electronics through oxide bonding and silicon-on-insulator technology. This paper will discuss the status of these studies and prospects for future work.

  15. Comparative Accuracy of Facial Models Fabricated Using Traditional and 3D Imaging Techniques.

    Science.gov (United States)

    Lincoln, Ketu P; Sun, Albert Y T; Prihoda, Thomas J; Sutton, Alan J

    2016-04-01

    The purpose of this investigation was to compare the accuracy of facial models fabricated using facial moulage impression methods to three-dimensional printed (3DP) fabrication methods using soft tissue images obtained from cone beam computed tomography (CBCT) and 3D stereophotogrammetry (3D-SPG) scans. A reference phantom model was fabricated using a 3D-SPG image of a human control form with ten fiducial markers placed on common anthropometric landmarks. This image was converted into the investigation control phantom model (CPM) using 3DP methods. The CPM was attached to a camera tripod for ease of image capture. Three CBCT and three 3D-SPG images of the CPM were captured. The DICOM and STL files from the three 3dMD and three CBCT scans were imported to the 3D printer, and six testing models were made. Reversible hydrocolloid and dental stone were used to make three facial moulages of the CPM, and the impressions/casts were poured in type IV gypsum dental stone. A coordinate measuring machine (CMM) was used to measure the distances between each of the ten fiducial markers. Each measurement was made using one point as a static reference to the other nine points. The same measuring procedures were accomplished on all specimens. All measurements were compared between specimens and the control. The data were analyzed using ANOVA and Tukey pairwise comparison of the raters, methods, and fiducial markers. The ANOVA multiple comparisons showed a significant difference among the three methods. The models fabricated using 3D-SPG showed a statistical difference in comparison to the models fabricated using the traditional method of facial moulage and to the 3DP models fabricated from CBCT imaging. 3DP models fabricated using 3D-SPG were less accurate than the CPM and the models fabricated using facial moulage and CBCT imaging techniques. © 2015 by the American College of Prosthodontists.

  16. Usefulness of 3D-VIBE method in breast dynamic MRI. Imaging parameters and contrasting effects

    International Nuclear Information System (INIS)

    Uchikoshi, Masato; Ueda, Takashi; Nishiki, Shigeo; Satou, Kouichi; Wada, Akihiko; Imaoka, Izumi; Matsuo, Michimasa

    2003-01-01

    MR imaging (MRI) has been reported to be a useful modality to characterize breast tumors and to evaluate disease extent. Contrast-enhanced dynamic MRI, in particular, allows breast lesions to be characterized with high sensitivity and specificity. Our study was designed to develop three-dimensional volumetric interpolated breath-hold examination (3D-VIBE) techniques for the evaluation of breast tumors. First, agarose/Gd-DTPA phantoms with various concentrations of Gd-DTPA were imaged using 3D-VIBE and turbo spin echo (TSE). Second, one of the phantoms was imaged with 3D-VIBE using different flip angles. Finally, water excitation (WE) and a chemical shift-selective (CHESS) pulse were applied to the images. Each image was analyzed for signal intensity, signal-to-noise ratio (SNR, 1.25*Ms/Mb), and contrast ratio [(Ms1-Ms2)/{(Ms1+Ms2)/2}]. The results showed that 3D-VIBE provided better contrast ratios with a linear fit than TSE, although 3D-VIBE showed a lower SNR. To reach the best contrast ratio, the optimized flip angle was found to be 30 deg for the contrast-enhanced dynamic study. Both WE and CHESS pulses were reliable for obtaining fat-suppressed images. In conclusion, the 3D-VIBE technique can image the entire breast area with high resolution and provide better contrast than TSE. Our phantom study suggests that optimized 3D-VIBE may be useful for the assessment of breast tumors. (author)
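
    The two figures of merit quoted in this record can be written out directly as below; the signal intensities used in the example are placeholders, not measured values.

```python
# The figures of merit used in the phantom analysis, written out explicitly.
def snr(ms, mb):
    """Signal-to-noise ratio as defined in the record: 1.25 * Ms / Mb."""
    return 1.25 * ms / mb

def contrast_ratio(ms1, ms2):
    """Contrast ratio: (Ms1 - Ms2) / ((Ms1 + Ms2) / 2)."""
    return (ms1 - ms2) / ((ms1 + ms2) / 2.0)

# Placeholder signal intensities for two Gd-DTPA concentrations and for background noise.
print(snr(ms=450.0, mb=12.0))                # ~46.9
print(contrast_ratio(ms1=450.0, ms2=300.0))  # 0.4
```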

  17. 3D-SIFT-Flow for atlas-based CT liver image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Yan, E-mail: xuyan04@gmail.com [State Key Laboratory of Software Development Environment and Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education, Beihang University, Beijing 100191, China and Research Institute of Beihang University in Shenzhen and Microsoft Research, Beijing 100080 (China); Xu, Chenchao, E-mail: chenchaoxu33@gmail.com; Kuang, Xiao, E-mail: kuangxiao.ace@gmail.com [School of Biological Science and Medical Engineering, Beihang University, Beijing 100191 (China); Wang, Hongkai, E-mail: wang.hongkai@gmail.com [Department of Biomedical Engineering, Dalian University of Technology, Dalian 116024 (China); Chang, Eric I-Chao, E-mail: eric.chang@microsoft.com [Microsoft Research, Beijing 100080 (China); Huang, Weimin, E-mail: wmhuang@i2r.a-star.edu.sg [Institute for Infocomm Research (I2R), Singapore 138632 (Singapore); Fan, Yubo, E-mail: yubofan@buaa.edu.cn [Key Laboratory of Biomechanics and Mechanobiology of Ministry of Education, Beihang University, Beijing 100191 (China)

    2016-05-15

    Purpose: In this paper, the authors proposed a new 3D registration algorithm, 3D-scale invariant feature transform (SIFT)-Flow, for multiatlas-based liver segmentation in computed tomography (CT) images. Methods: In the registration work, the authors developed a new registration method that takes advantage of dense correspondence using the informative and robust SIFT feature. The authors computed the dense SIFT features for the source image and the target image and designed an objective function to obtain the correspondence between these two images. Labeling of the source image was then mapped to the target image according to the former correspondence, resulting in accurate segmentation. In the fusion work, the 2D-based nonparametric label transfer method was extended to 3D for fusing the registered 3D atlases. Results: Compared with existing registration algorithms, 3D-SIFT-Flow has its particular advantage in matching anatomical structures (such as the liver) that observe large variation/deformation. The authors observed consistent improvement over widely adopted state-of-the-art registration methods such as ELASTIX, ANTS, and multiatlas fusion methods such as joint label fusion. Experimental results of liver segmentation on the MICCAI 2007 Grand Challenge are encouraging, e.g., Dice overlap ratio 96.27% ± 0.96% by our method compared with the previous state-of-the-art result of 94.90% ± 2.86%. Conclusions: Experimental results show that 3D-SIFT-Flow is robust for segmenting the liver from CT images, which has large tissue deformation and blurry boundary, and 3D label transfer is effective and efficient for improving the registration accuracy.
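
    Only the label-mapping step is illustrated below: an atlas label volume is warped through a dense displacement field with nearest-neighbour interpolation. The SIFT-Flow correspondence estimation and the label-fusion stage are not reproduced, and the constant displacement field is purely synthetic.

```python
# Warp an atlas label volume through a dense displacement field (the "label transfer" step).
import numpy as np
from scipy.ndimage import map_coordinates

def warp_labels(labels, displacement):
    """displacement[..., i] is the offset along axis i from each target voxel to the atlas voxel."""
    grid = np.indices(labels.shape).astype(float)        # (3, Z, Y, X) target coordinates
    coords = grid + np.moveaxis(displacement, -1, 0)     # where to sample in the atlas
    # order=0 -> nearest neighbour, so label values are never blended.
    return map_coordinates(labels, coords, order=0, mode="nearest")

# Toy example: an atlas "liver" label shifted by a constant 2-voxel displacement along z.
atlas_labels = np.zeros((32, 32, 32), dtype=np.int32)
atlas_labels[10:20, 10:20, 10:20] = 1
field = np.zeros((32, 32, 32, 3)); field[..., 0] = 2.0
target_labels = warp_labels(atlas_labels, field)
print(atlas_labels.sum(), target_labels.sum())
```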

  18. 3D-SIFT-Flow for atlas-based CT liver image segmentation.

    Science.gov (United States)

    Xu, Yan; Xu, Chenchao; Kuang, Xiao; Wang, Hongkai; Chang, Eric I-Chao; Huang, Weimin; Fan, Yubo

    2016-05-01

    In this paper, the authors proposed a new 3D registration algorithm, 3D-scale invariant feature transform (SIFT)-Flow, for multiatlas-based liver segmentation in computed tomography (CT) images. In the registration work, the authors developed a new registration method that takes advantage of dense correspondence using the informative and robust SIFT feature. The authors computed the dense SIFT features for the source image and the target image and designed an objective function to obtain the correspondence between these two images. Labeling of the source image was then mapped to the target image according to the former correspondence, resulting in accurate segmentation. In the fusion work, the 2D-based nonparametric label transfer method was extended to 3D for fusing the registered 3D atlases. Compared with existing registration algorithms, 3D-SIFT-Flow has its particular advantage in matching anatomical structures (such as the liver) that observe large variation/deformation. The authors observed consistent improvement over widely adopted state-of-the-art registration methods such as ELASTIX, ANTS, and multiatlas fusion methods such as joint label fusion. Experimental results of liver segmentation on the MICCAI 2007 Grand Challenge are encouraging, e.g., Dice overlap ratio 96.27% ± 0.96% by our method compared with the previous state-of-the-art result of 94.90% ± 2.86%. Experimental results show that 3D-SIFT-Flow is robust for segmenting the liver from CT images, which has large tissue deformation and blurry boundary, and 3D label transfer is effective and efficient for improving the registration accuracy.

  19. 3D-SIFT-Flow for atlas-based CT liver image segmentation

    International Nuclear Information System (INIS)

    Xu, Yan; Xu, Chenchao; Kuang, Xiao; Wang, Hongkai; Chang, Eric I-Chao; Huang, Weimin; Fan, Yubo

    2016-01-01

    Purpose: In this paper, the authors proposed a new 3D registration algorithm, 3D-scale invariant feature transform (SIFT)-Flow, for multiatlas-based liver segmentation in computed tomography (CT) images. Methods: In the registration work, the authors developed a new registration method that takes advantage of dense correspondence using the informative and robust SIFT feature. The authors computed the dense SIFT features for the source image and the target image and designed an objective function to obtain the correspondence between these two images. Labeling of the source image was then mapped to the target image according to the former correspondence, resulting in accurate segmentation. In the fusion work, the 2D-based nonparametric label transfer method was extended to 3D for fusing the registered 3D atlases. Results: Compared with existing registration algorithms, 3D-SIFT-Flow has its particular advantage in matching anatomical structures (such as the liver) that observe large variation/deformation. The authors observed consistent improvement over widely adopted state-of-the-art registration methods such as ELASTIX, ANTS, and multiatlas fusion methods such as joint label fusion. Experimental results of liver segmentation on the MICCAI 2007 Grand Challenge are encouraging, e.g., Dice overlap ratio 96.27% ± 0.96% by our method compared with the previous state-of-the-art result of 94.90% ± 2.86%. Conclusions: Experimental results show that 3D-SIFT-Flow is robust for segmenting the liver from CT images, which has large tissue deformation and blurry boundary, and 3D label transfer is effective and efficient for improving the registration accuracy.

  20. Periodic additive noises reduction in 3D images used in building of voxel phantoms through an efficient implementation of the 3D FFT: zipper artifacts filtering

    International Nuclear Information System (INIS)

    Oliveira, Alex C.H. de; Lima, Fernando R.A.; Vieira, Jose W.; Leal Neto, Viriato

    2009-01-01

    The anthropomorphic models used in computational dosimetry are predominantly built from stacks of CT (computed tomography) or MRI (magnetic resonance imaging) images obtained from patients or volunteers. The building of these stacks (usually called voxel phantoms or tomographic phantoms) requires computer processing before they can be used in an exposure computational model. Noise present in these stacks can be confused with significant structures. In a 3D image with periodic additive noise, in the frequency domain the noise is fully added to the central slice. The discrete Fourier transform is the fundamental mathematical tool that allows switching from the spatial domain to the frequency domain, and vice versa. The FFT (fast Fourier transform) algorithm is the ideal computational tool for performing this change of domain efficiently. This paper presents a new methodology for implementing, in managed C++ (Microsoft Visual Studio .NET), the fast Fourier transform of 3D digital images (FFT3D) using, essentially, trigonometric recombination. The reduction of periodic additive noise consists in filtering only the central slice of the 3D image in the frequency domain and transforming it back into the spatial domain through the inverse FFT3D. An example of the application of this method is the filtering of zipper artifacts in MRI images. These processes were implemented in the software DIP (Digital Image Processing). (author)
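
    One simple way to realize the described "filter only the central k-space slice" idea in NumPy (rather than the paper's managed C++ implementation) is sketched below; the choice of which coefficients within that slice to suppress, and the synthetic stripe pattern, are placeholders.

```python
# Suppress high-frequency components in the central k-space slice of a 3D image,
# then return to the spatial domain (a simple stand-in for the described FFT3D filter).
import numpy as np

def filter_central_slice(volume, keep_radius=8):
    spec = np.fft.fftshift(np.fft.fftn(volume))
    nz, ny, nx = spec.shape
    cz, cy, cx = nz // 2, ny // 2, nx // 2
    yy, xx = np.mgrid[0:ny, 0:nx]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= keep_radius ** 2
    spec[cz][~mask] = 0.0        # zero the periodic-noise region of the central slice only
    return np.real(np.fft.ifftn(np.fft.ifftshift(spec)))

rng = np.random.default_rng(6)
vol = rng.normal(100.0, 5.0, (32, 64, 64))
x = np.arange(64)
vol += 10.0 * np.sin(2.0 * np.pi * x / 4.0)[None, None, :]   # synthetic zipper-like stripes
clean = filter_central_slice(vol)
print(vol.std(), clean.std())
```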

  1. Discriminating between benign and malignant breast tumors using 3D convolutional neural network in dynamic contrast enhanced-MR images

    Science.gov (United States)

    Li, Jing; Fan, Ming; Zhang, Juan; Li, Lihua

    2017-03-01

    Convolutional neural networks (CNNs) are the state-of-the-art deep learning network architectures that can be used in a range of applications, including computer vision and medical image analysis. They exhibit a powerful representation learning mechanism with an automated design to learn features directly from the data. However, common 2D CNNs use only two-dimensional spatial information, without evaluating the correlation between adjacent slices. In this study, we established a 3D CNN method to discriminate between malignant and benign breast tumors. To this end, 143 patients were enrolled, including 66 benign and 77 malignant cases. The MRI images were pre-processed for noise reduction and breast tumor region segmentation. Data augmentation by spatial translation, rotation, and vertical and horizontal flipping was applied to the cases to reduce possible over-fitting. A region-of-interest (ROI) and a volume-of-interest (VOI) were segmented in 2D and 3D DCE-MRI, respectively. The enhancement ratio for each MR series was calculated for the 2D and 3D images. The results for the enhancement ratio images in the two series were integrated for classification. The resulting area under the ROC curve (AUC) values are 0.739 and 0.801 for the 2D and 3D methods, respectively. The 3D CNN, which combined 5 slices for each enhancement ratio image, achieved a high accuracy (Acc), sensitivity (Sens) and specificity (Spec) of 0.781, 0.744 and 0.823, respectively. This study indicates that 3D CNN deep learning methods can be a promising technology for breast tumor classification without manual feature extraction.
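
    A minimal 3D CNN for binary benign/malignant classification is sketched below in PyTorch; the layer sizes, the 5-slice input shape, and the absence of any training loop make this an illustrative architecture only, not the network described in the record.

```python
# Minimal 3D CNN for binary tumour classification (illustrative architecture only).
import torch
import torch.nn as nn

class Small3DCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),    # keep the few slices, shrink in-plane
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, 2)          # benign vs malignant

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# A batch of 4 enhancement-ratio volumes: 5 slices of 64x64 pixels, one channel.
model = Small3DCNN()
logits = model(torch.randn(4, 1, 5, 64, 64))
print(logits.shape)   # torch.Size([4, 2])
```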

  2. Robotic 3D SQUID imaging system for practical nondestructive evaluation applications

    International Nuclear Information System (INIS)

    Isawa, K.; Nakayama, S.; Ikeda, M.; Takagi, S.; Tosaka, S.; Kasai, N.

    2005-01-01

    A robotic three-dimensional (3D) scanning superconducting quantum interference device (SQUID) imaging system was developed for practical nondestructive evaluation (NDE) applications. The major feature of this SQUID-NDE system is that the SQUID sensor itself scans in 3D by traveling over the surface of an object during testing without the need for magnetic shielding. This imaging system consists of (i) DC-SQUID gradiometer for effective movement of the sensor, (ii) SQUID sensor manipulator utilizing an articulated-type robot used in industry, (iii) laser charge-coupled-device (CCD) displacement sensor to measure the 3D coordinates of points on the surface of the object, and (iv) computer-aided numerical interpolation scheme for 3D surface reconstruction of the object. The applicability of this system for NDE was demonstrated by successfully detecting artificial damage of cylindrical-shaped steel tubes

  3. Detection of Connective Tissue Disorders from 3D Aortic MR Images Using Independent Component Analysis

    DEFF Research Database (Denmark)

    Hansen, Michael Sass; Zhao, Fei; Zhang, Honghai

    2006-01-01

    A computer-aided diagnosis (CAD) method is reported that allows the objective identification of subjects with connective tissue disorders from 3D aortic MR images using segmentation and independent component analysis (ICA). The first step to extend the model to 4D (3D + time) has also been taken. ... ICA is an effective tool for connective tissue disease detection in the presence of sparse data using prior knowledge to order the components, and the components can be inspected visually. 3D+time MR image data sets acquired from 31 normal and connective tissue disorder subjects at end-diastole (R-wave peak) and at 45% of the R-R interval were used to evaluate the performance of our method. The automated 3D segmentation result produced accurate aortic surfaces covering the aorta. The CAD method distinguished between normal and connective tissue disorder subjects with a classification...

  4. Helical 3D-CT images of soft tissue tumors in the hand

    Energy Technology Data Exchange (ETDEWEB)

    Otani, Kazuhiro; Kikuchi, Hiraku; Tan, Akihiro; Hamanishi, Chiaki; Tanaka, Seisuke [Kinki Univ., Osaka-Sayama (Japan). School of Medicine

    2000-02-01

    X-ray, ultrasonography, CT, MRI and angiography are used to detect tumoral lesions. Recently, helical CT has proven to be a useful method for the diagnosis and preoperative evaluation of soft tissue tumors, as it quickly provides high-quality, accurate three-dimensional (3D) images. We analyzed the preoperative 3D-CT images of soft tissue tumors in the hands of 11 cases (hemangioma in 6 cases, giant cell tumor, lipoma, angiofibroma, chondrosarcoma and malignant fibrous histiocytoma in one case each). Enhanced 3D-CT clearly distinguished hemangiomas and solid tumors from the surrounding tissues. The tumors could easily be observed from any direction and color-coded according to the CT number. Helical 3D-CT was thus confirmed to be useful for diagnosis and preoperative planning by showing the details of tumor expansion into surrounding tissues. (author)

  5. "Black Bone" MRI: a novel imaging technique for 3D printing.

    Science.gov (United States)

    Eley, Karen A; Watt-Smith, Stephen R; Golding, Stephen J

    2017-03-01

    Three-dimensionally printed anatomical models are rapidly becoming an integral part of pre-operative planning of complex surgical cases. We have previously reported the "Black Bone" MRI technique as a non-ionizing alternative to CT. Segmentation of bone becomes possible by minimizing soft tissue contrast to enhance the bone-soft tissue boundary. The objectives of this study were to ascertain the potential of utilizing this technique to produce three-dimensional (3D) printed models. "Black Bone" MRI acquired from adult volunteers and infants with craniosynostosis were 3D rendered and 3D printed. A custom phantom provided a surrogate marker of accuracy permitting comparison between direct measurements and 3D printed models created by segmenting both CT and "Black Bone" MRI data sets using two different software packages. "Black Bone" MRI was successfully utilized to produce 3D models of the craniofacial skeleton in both adults and an infant. Measurements of the cube phantom and 3D printed models demonstrated submillimetre discrepancy. In this novel preliminary study exploring the potential of 3D printing from "Black Bone" MRI data, the feasibility of producing anatomical 3D models has been demonstrated, thus offering a potential non-ionizing alterative to CT for the craniofacial skeleton.

  6. Clinical applications of 2D and 3D CT imaging of the airways - a review

    International Nuclear Information System (INIS)

    Salvolini, Luca; Bichi Secchi, Elisabetta; Costarelli, Leonardo; De Nicola, Maurizio

    2000-01-01

    to detect otherwise overlooked slight pathological findings. In the exploration of the air spaces of the head and neck, a targeted multiplanar study can now be performed without additional scanning by retro-reconstructing sections from the original transverse CT slices. Additional rendering can help in surgical planning, by simulation of surgical approaches, and allows better integration with functional endoscopic paranasal sinus surgery through endoscopic perspective rendering. Whichever application we perform, the clinical value of 2D and 3D rendering techniques lies in the possibility of overcoming perceptual difficulties and 'slice pollution' by enabling more efficient data transfer without loss of information. 3D imaging should not be considered, in the large majority of cases, a diagnostic tool: looking at reformatted images may increase diagnostic accuracy in only very few cases, but the increase in diagnostic confidence may not be negligible. The purpose of the radiologist skilled in post-processing techniques should be to modify patient management, through more confident diagnostic evaluation, in a small number of patients and, in a larger number of cases, to simplify communication with referring physicians and surgeons. We will display in detail possible clinical applications of the different 2D and 3D imaging techniques in the study of the tracheobronchial tree, larynx, nasal cavities and paranasal sinuses by helical CT, review the related bibliography, and briefly discuss the pitfalls and perspectives of CT rendering techniques for each field.

  7. Clinical applications of 2D and 3D CT imaging of the airways - a review

    Energy Technology Data Exchange (ETDEWEB)

    Salvolini, Luca E-mail: u.salvolini@popcsi.unian.it; Bichi Secchi, Elisabetta; Costarelli, Leonardo; De Nicola, Maurizio

    2000-04-01

    to detect otherwise overlooked slight pathological findings. In the exploration of the air spaces of the head and neck, a targeted multiplanar study can now be performed without additional scanning by retro-reconstructing sections from the original transverse CT slices. Additional rendering can help in surgical planning, by simulation of surgical approaches, and allows better integration with functional endoscopic paranasal sinus surgery through endoscopic perspective rendering. Whichever application we perform, the clinical value of 2D and 3D rendering techniques lies in the possibility of overcoming perceptual difficulties and 'slice pollution' by enabling more efficient data transfer without loss of information. 3D imaging should not be considered, in the large majority of cases, a diagnostic tool: looking at reformatted images may increase diagnostic accuracy in only very few cases, but the increase in diagnostic confidence may not be negligible. The purpose of the radiologist skilled in post-processing techniques should be to modify patient management, through more confident diagnostic evaluation, in a small number of patients and, in a larger number of cases, to simplify communication with referring physicians and surgeons. We will display in detail possible clinical applications of the different 2D and 3D imaging techniques in the study of the tracheobronchial tree, larynx, nasal cavities and paranasal sinuses by helical CT, review the related bibliography, and briefly discuss the pitfalls and perspectives of CT rendering techniques for each field.

  8. Hybrid 3D pregnant woman and fetus modeling from medical imaging for dosimetry studies

    Energy Technology Data Exchange (ETDEWEB)

    Bibin, Lazar; Anquez, Jeremie; Angelini, Elsa; Bloch, Isabelle [Telecom ParisTech, CNRS UMR 5141 LTCI, Institut TELECOM, Paris (France)

    2010-01-15

    Numerical simulations studying the interactions between radiations and biological tissues require the use of three-dimensional models of the human anatomy at various ages and in various positions. Several detailed and flexible models exist for adults and children and have been extensively used for dosimetry. On the other hand, progress of simulation studies focusing on pregnant women and the fetus have been limited by the fact that only a small number of models exist with rather coarse anatomical details and a poor representation of the anatomical variability of the fetus shape and its position over the entire gestation. In this paper, we propose a new computational framework to generate 3D hybrid models of pregnant women, composed of fetus shapes segmented from medical images and a generic maternal body envelope representing a synthetic woman scaled to the dimension of the uterus. The computational framework includes the following tasks: image segmentation, contour regularization, mesh-based surface reconstruction, and model integration. A series of models was created to represent pregnant women at different gestational stages and with the fetus in different positions, all including detailed tissues of the fetus and the utero-fetal unit, which play an important role in dosimetry. These models were anatomically validated by clinical obstetricians and radiologists who verified the accuracy and representativeness of the anatomical details, and the positioning of the fetus inside the maternal body. The computational framework enables the creation of detailed, realistic, and representative fetus models from medical images, directly exploitable for dosimetry simulations. (orig.)

  9. Hybrid 3D pregnant woman and fetus modeling from medical imaging for dosimetry studies

    International Nuclear Information System (INIS)

    Bibin, Lazar; Anquez, Jeremie; Angelini, Elsa; Bloch, Isabelle

    2010-01-01

    Numerical simulations studying the interactions between radiations and biological tissues require the use of three-dimensional models of the human anatomy at various ages and in various positions. Several detailed and flexible models exist for adults and children and have been extensively used for dosimetry. On the other hand, progress of simulation studies focusing on pregnant women and the fetus have been limited by the fact that only a small number of models exist with rather coarse anatomical details and a poor representation of the anatomical variability of the fetus shape and its position over the entire gestation. In this paper, we propose a new computational framework to generate 3D hybrid models of pregnant women, composed of fetus shapes segmented from medical images and a generic maternal body envelope representing a synthetic woman scaled to the dimension of the uterus. The computational framework includes the following tasks: image segmentation, contour regularization, mesh-based surface reconstruction, and model integration. A series of models was created to represent pregnant women at different gestational stages and with the fetus in different positions, all including detailed tissues of the fetus and the utero-fetal unit, which play an important role in dosimetry. These models were anatomically validated by clinical obstetricians and radiologists who verified the accuracy and representativeness of the anatomical details, and the positioning of the fetus inside the maternal body. The computational framework enables the creation of detailed, realistic, and representative fetus models from medical images, directly exploitable for dosimetry simulations. (orig.)

  10. Evaluation of 3D printing materials for fabrication of a novel multi-functional 3D thyroid phantom for medical dosimetry and image quality

    International Nuclear Information System (INIS)

    Alssabbagh, Moayyad; Tajuddin, Abd Aziz; Abdulmanap, Mahayuddin; Zainon, Rafidah

    2017-01-01

    Recently, three-dimensional printers have begun to be used extensively in the medical industry. Many body parts or organs can be printed from 3D images with accurate organ geometries. In this study, five common 3D printing materials were evaluated in terms of their elemental composition and mass attenuation coefficients. The online version of the XCOM photon cross-section database was used to obtain the attenuation values of each material, and the results were compared with the attenuation values of the thyroid listed in International Commission on Radiation Units and Measurements (ICRU) Report 44. Two original thyroid models (hollow-inside and solid-inside) were designed from scratch to be used in nuclear medicine, diagnostic radiology and radiotherapy for dosimetry and image quality purposes. Both designs have three holes for installing radiation dosimeters; the hollow-inside model has two more holes in the top for injecting radioactive materials. The attenuation properties of polylactic acid (PLA) showed a very good match with thyroid tissue, so PLA was selected to 3D print the phantom using an open-source RepRap Prusa i3 3D printer. The scintigraphy images show that the phantom simulates a real healthy thyroid gland and can therefore be used for image quality purposes. The CT numbers of the PLA material measured after 3D printing show a close match with human thyroid CT numbers. Furthermore, the phantom accommodates the TLD dosimeters well inside the holes. The 3D-fabricated thyroid phantom simulates the real shape of the human thyroid gland, with a changeable geometric shape and size to fit different age groups. By using 3D printing technology, the time required to fabricate the phantom was considerably shortened compared to conventional methods, taking only 30 min to print the model. The 3D printing material used in this study is commercially available and cost
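    The comparison of candidate printing materials against tissue rests on the standard mixture rule for mass attenuation coefficients, which can be sketched in Python as below. The elemental coefficients shown are placeholder values for illustration; in practice they would be taken from the XCOM database at the photon energy of interest.

        # PLA (C3H4O2): elemental weight fractions, then the mixture rule
        #   (mu/rho)_compound = sum_i w_i * (mu/rho)_i
        atomic_mass = {"C": 12.011, "H": 1.008, "O": 15.999}
        formula = {"C": 3, "H": 4, "O": 2}

        total_mass = sum(n * atomic_mass[el] for el, n in formula.items())
        weight_fraction = {el: n * atomic_mass[el] / total_mass for el, n in formula.items()}

        # Elemental mass attenuation coefficients (cm^2/g) at one photon energy.
        # Placeholder values -- the real numbers come from XCOM.
        mu_rho = {"C": 0.151, "H": 0.294, "O": 0.155}

        mu_rho_pla = sum(weight_fraction[el] * mu_rho[el] for el in formula)
        print(f"PLA mass attenuation coefficient: {mu_rho_pla:.3f} cm^2/g")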

  11. 3D Tendon Strain Estimation Using High-frequency Volumetric Ultrasound Images: A Feasibility Study.

    Science.gov (United States)

    Carvalho, Catarina; Slagmolen, Pieter; Bogaerts, Stijn; Scheys, Lennart; D'hooge, Jan; Peers, Koen; Maes, Frederik; Suetens, Paul

    2018-03-01

    Estimation of strain in tendons for tendinopathy assessment is a hot topic within the sports medicine community. It is believed that, if strain can be estimated accurately, existing treatment and rehabilitation protocols can be improved and presymptomatic abnormalities can be detected earlier. State-of-the-art studies present inaccurate and highly variable strain estimates, leaving this problem unsolved. Out-of-plane motion, present when acquiring two-dimensional (2D) ultrasound (US) images, is a known problem and may be responsible for such errors. This work investigates the benefit of high-frequency, three-dimensional (3D) US imaging in reducing errors in tendon strain estimation. Volumetric US images were acquired in silico, in vitro, and ex vivo using an innovative acquisition approach that combines the acquisition of 2D high-frequency US images with a mechanically guided system. An affine image registration method was used to estimate global strain. The 3D strain estimates were then compared with ground-truth values and with 2D strain estimates. For the in silico data, the 3D estimates showed a mean absolute error (MAE) of 0.07%, 0.05%, and 0.27% along the axial, lateral, and elevation directions, respectively, whereas the 2D estimates showed an MAE of 0.21% and 0.29%. Although 3D can outperform 2D, this does not occur in the in vitro and ex vivo settings, likely due to 3D acquisition artifacts. Comparison against state-of-the-art methods showed competitive results. The proposed work shows that 3D strain estimates are more accurate than 2D estimates, but acquisition of appropriate 3D US images remains a challenge.
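    When the registration returns an affine transform, a global strain measure can be derived from its 3x3 linear part, for example as the Green-Lagrange strain tensor. The sketch below assumes NumPy and only illustrates that relationship; it is not the estimation pipeline used in the study.

        import numpy as np

        def global_strain_from_affine(A):
            """Green-Lagrange strain from the 3x3 linear part of an affine transform.
            Returns the normal strains along the three image axes, in percent."""
            F = np.asarray(A, dtype=float)           # deformation gradient
            E = 0.5 * (F.T @ F - np.eye(3))          # Green-Lagrange strain tensor
            return np.diag(E) * 100.0                # axial / lateral / elevation (%)

        # Example: a 1% stretch along the first axis, identity elsewhere.
        print(global_strain_from_affine(np.diag([1.01, 1.0, 1.0])))   # ~[1.005, 0, 0]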

  12. Intersection based motion correction of multislice MRI for 3-D in utero fetal brain image formation.

    Science.gov (United States)

    Kim, Kio; Habas, Piotr A; Rousseau, Francois; Glenn, Orit A; Barkovich, Anthony J; Studholme, Colin

    2010-01-01

    In recent years, postprocessing of fast multislice magnetic resonance imaging (MRI) to correct fetal motion has provided the first true 3-D MR images of the developing human brain in utero. Early approaches have used reconstruction based algorithms, employing a two-step iterative process, where slices from the acquired data are realigned to an approximate 3-D reconstruction of the fetal brain, which is then refined further using the improved slice alignment. This two step slice-to-volume process, although powerful, is computationally expensive in needing a 3-D reconstruction, and is limited in its ability to recover subvoxel alignment. Here, we describe an alternative approach which we term slice intersection motion correction (SIMC), that seeks to directly co-align multiple slice stacks by considering the matching structure along all intersecting slice pairs in all orthogonally planned slices that are acquired in clinical imaging studies. A collective update scheme for all slices is then derived, to simultaneously drive slices into a consistent match along their lines of intersection. We then describe a 3-D reconstruction algorithm that, using the final motion corrected slice locations, suppresses through-plane partial volume effects to provide a single high isotropic resolution 3-D image. The method is tested on simulated data with known motions and is applied to retrospectively reconstruct 3-D images from a range of clinically acquired imaging studies. The quantitative evaluation of the registration accuracy for the simulated data sets demonstrated a significant improvement over previous approaches. An initial application of the technique to studying clinical pathology is included, where the proposed method recovered up to 15 mm of translation and 30 degrees of rotation for individual slices, and produced full 3-D reconstructions containing clinically useful additional information not visible in the original 2-D slices.

  13. Procedure for making tailor-made phantoms for PET image quality control using 3D printing systems

    International Nuclear Information System (INIS)

    Collado Chamorro, P. M.; Saez Beltran, F.; Diaz Pascual, V.; Benito Bejarado, M. A.; Sanz Freire, C. J.; Lopo Casqueiro, N.; Gonzalez Fernandez, M. P.; Lopez de Gamarra, M. S.

    2015-01-01

    Free software is available both for modelling 3D objects from medical images and for converting such models into files ready to be read and executed by 3D printers (slicers). This makes it possible to produce quality-control phantoms with a minimal investment. In this work, a refillable, made-to-measure brain phantom was built for use in PET studies. (Author)

  14. MRI of the cartilages of the knee, 3-D imaging with a rapid computer system

    Energy Technology Data Exchange (ETDEWEB)

    Adam, G.; Bohndorf, K.; Prescher, A.; Drobnitzky, M.; Guenther, R.W.

    1989-01-01

    2-D spin-echo sequences were compared with 3-D gradient-echo sequences using normal and cadaver knee joints. The important advantages of 3-D-imaging are: sections of less than 1 mm, reconstruction in any required plane, which can be related to the complex anatomy of the knee joint, and very good distinction between intra-articular fluid, fibrocartilage and hyaline cartilage. (orig./GDG).

  15. A three-dimensional gradient refocused 3D volume imaging of discoid lateral meniscus

    International Nuclear Information System (INIS)

    Araki, Yutaka; Ootani, Masatoshi; Furukawa, Tomoaki; Yamamoto, Tadatsuka; Tomoda, Kaname; Tsukaguchi, Isao; Mitomo, Masanori.

    1991-01-01

    An axial 3D volume scan with MRI was applied to the evaluation of discoid lateral meniscus of the knee. With 0.7-mm-thick, gapless images from the volume scan, the characteristically elongated appearance of the discoid lateral meniscus was clearly depicted. These MR findings agreed completely with those at arthroscopy. We conclude that an axial 3D volume scan is essential for the diagnosis of discoid lateral meniscus. (author)

  16. Curvature histogram features for retrieval of images of smooth 3D objects

    International Nuclear Information System (INIS)

    Zhdanov, I; Scherbakov, O; Potapov, A; Peterson, M

    2014-01-01

    We consider image features based on histograms of oriented gradients (HOG) augmented with a contour curvature histogram (HOG-CH), and compare them with results of the well-known scale-invariant feature transform (SIFT) approach as applied to the retrieval of images of smooth 3D objects.

  17. Atlas-based mosaicing of 3D transesophageal echocardiography images of the left atrium

    NARCIS (Netherlands)

    Mulder, H.W. (Harriët); Pluim, J.P.W.; Ren, B. (Ben); Haak, A. (Alexander); Viergever, M.A. (Max); Bosch, J.G. (Johan); Stralen, van M. (Marijn)

    2015-01-01

    3D transesophageal echocardiography (TEE) is routinely used for planning and guidance of cardiac interventions. However, the limited field-of-view dictates the compounding of multiple images for visualization of large structures, e.g. the left atrium (LA). Previously, we developed a TEE image

  18. Automatic slice identification in 3D medical images with a ConvNet regressor

    NARCIS (Netherlands)

    de Vos, Bob D.; Viergever, Max A.; de Jong, Pim A.; Išgum, Ivana

    2016-01-01

    Identification of anatomical regions of interest is a prerequisite in many medical image analysis tasks. We propose a method that automatically identifies a slice of interest (SOI) in 3D images with a convolutional neural network (ConvNet) regressor. In 150 chest CT scans two reference slices were

  19. Simultaneous cell tracking and image alignment in 3D CLSM imagery of growing arabidopsis thaliana sepals

    NARCIS (Netherlands)

    Fick, R.H.J.; Fedorov, D.; Roeder, A.H.K.; Manjunath, B.S.

    2013-01-01

    In this research we propose a combined cell matching and image alignment method for tracking cells based on their nuclear locations in 3D fluorescent Confocal Laser Scanning Microscopy (CLSM) image sequences. We then apply it to study the cell division pattern in the developing sepal of the small

  20. Structured light 3D tracking system for measuring motions in PET brain imaging

    DEFF Research Database (Denmark)

    Olesen, Oline Vinter; Jørgensen, Morten Rudkjær; Paulsen, Rasmus Reinhold

    2010-01-01

    Patient motion during scanning deteriorates image quality, especially for high resolution PET scanners. A new proposal for a 3D head tracking system for motion correction in high resolution PET brain imaging is set up and demonstrated. A prototype tracking system based on structured light with a ...

  1. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    Directory of Open Access Journals (Sweden)

    Saeed Seyyedi

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.

  2. An object-oriented simulator for 3D digital breast tomosynthesis imaging system.

    Science.gov (United States)

    Seyyedi, Saeed; Cengiz, Kubra; Kamasak, Mustafa; Yildirim, Isa

    2013-01-01

    Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct a 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for the 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user-friendly graphical user interface (GUI) helps users select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods including the algebraic reconstruction technique (ART) and total variation regularized reconstruction (ART+TV) are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating their performance using mean structural similarity (MSSIM) values.
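    The basic ART update referred to in these two records is a Kaczmarz-style sweep over the projection rays. The NumPy sketch below illustrates the idea on a toy system; it is not the simulator's C++ implementation, and the relaxation factor and toy system matrix are arbitrary choices.

        import numpy as np

        def art_reconstruct(A, b, n_iter=50, relax=0.5):
            """Algebraic Reconstruction Technique (Kaczmarz sweeps).
            A : (n_rays, n_voxels) system matrix, b : measured projections."""
            x = np.zeros(A.shape[1])
            row_norms = np.einsum("ij,ij->i", A, A)      # ||a_i||^2 for each ray
            for _ in range(n_iter):
                for i in range(A.shape[0]):
                    if row_norms[i] == 0:
                        continue
                    residual = b[i] - A[i] @ x
                    x += relax * residual / row_norms[i] * A[i]
            return x

        # Toy problem: 3 "rays" through a 4-voxel image; the iterate converges to an
        # image consistent with the projections (the system is underdetermined).
        A = np.array([[1., 1., 0., 0.],
                      [0., 0., 1., 1.],
                      [1., 0., 1., 0.]])
        x_true = np.array([1., 2., 3., 4.])
        print(art_reconstruct(A, A @ x_true))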

  3. Integrated fringe projection 3D scanning system for large-scale metrology based on laser tracker

    Science.gov (United States)

    Du, Hui; Chen, Xiaobo; Zhou, Dan; Guo, Gen; Xi, Juntong

    2017-10-01

    Large-scale components are widespread in the advanced manufacturing industry, and 3D profilometry plays a pivotal role in their quality control. This paper proposes a flexible, robust large-scale 3D scanning system that integrates a robot with a binocular structured-light scanner and a laser tracker. The measurement principle and construction of the integrated system are introduced, and a mathematical model is established for global data fusion. Subsequently, a flexible and robust method and mechanism are introduced for establishing the end-effector coordinate system. Based on this method, a virtual robot model (noumenon) is constructed for hand-eye calibration, and the transformation matrix between the end-effector coordinate system and the world coordinate system is solved. A validation experiment was carried out to verify the proposed algorithms: first, the hand-eye transformation matrix was solved; then a car body rear was measured 16 times to verify the global data fusion algorithm, and the 3D shape of the rear was reconstructed successfully.
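    Global data fusion of this kind amounts to chaining homogeneous transforms so that every scan is expressed in the laser-tracker (world) frame. The following NumPy sketch shows that chaining under assumed 4x4 transform inputs; it is an illustration, not the authors' algorithm.

        import numpy as np

        def scanner_points_to_world(points_scanner, T_world_end, T_end_scanner):
            """Map scanner-frame points (n, 3) into the world frame:
                 p_world = T_world_end @ T_end_scanner @ p_scanner
            T_world_end  : robot end pose in the world frame for this scan
            T_end_scanner: hand-eye calibration result (scanner in end frame)."""
            n = points_scanner.shape[0]
            homog = np.hstack([points_scanner, np.ones((n, 1))])   # (n, 4)
            fused = (T_world_end @ T_end_scanner @ homog.T).T
            return fused[:, :3]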

  4. Rigorous accuracy assessment for 3D reconstruction using time-series Dual Fluoroscopy (DF) image pairs

    Science.gov (United States)

    Al-Durgham, Kaleel; Lichti, Derek D.; Kuntze, Gregor; Ronsky, Janet

    2017-06-01

    High-speed biplanar videoradiography imaging systems, clinically referred to as dual fluoroscopy (DF), are being used increasingly for skeletal kinematics analysis. Typically, a DF system comprises two X-ray sources, two image intensifiers and two high-speed video cameras. The combination of these elements provides time-series image pairs of the articulating bones of a joint, which permits the measurement of bony rotation and translation in 3D at high temporal resolution (e.g., 120-250 Hz). Assessment of the accuracy of 3D measurements derived from DF imaging has been the subject of recent research efforts by several groups, however with methodological limitations. This paper presents a novel and simple accuracy assessment procedure based on precise photogrammetric tools. We address the fundamental photogrammetric principles for the accuracy evaluation of an imaging system. Bundle adjustment with self-calibration is used for the estimation of the system parameters. The bundle adjustment calibration uses an appropriate sensor model and applies free-network constraints and relative orientation stability constraints for a precise estimation of the system parameters. A photogrammetric intersection of time-series image pairs is used for the 3D reconstruction of a rotating planar object. A point-based registration method is used to combine the 3D coordinates from the intersection with independently surveyed coordinates. The final DF accuracy measure is reported as the distance between the 3D coordinates from image intersection and the independently surveyed coordinates. The accuracy assessment procedure is designed to evaluate the accuracy over the full DF image format and a wide range of object rotation. The experiment on reconstruction of a rotating planar object reported an average positional error of 0.44 +/- 0.2 mm in the derived 3D coordinates (minimum 0.05 mm and maximum 1.2 mm).
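    The point-based registration and distance metric described above can be illustrated with a Kabsch (SVD) rigid alignment followed by per-point error computation. This NumPy sketch is a generic illustration under that assumption, not the exact procedure used by the authors.

        import numpy as np

        def kabsch_register(P, Q):
            """Best-fit rigid transform (R, t) mapping point set P onto Q, both (n, 3)."""
            Pc, Qc = P - P.mean(axis=0), Q - Q.mean(axis=0)
            U, _, Vt = np.linalg.svd(Pc.T @ Qc)                  # cross-covariance SVD
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
            R = Vt.T @ D @ U.T
            t = Q.mean(axis=0) - R @ P.mean(axis=0)
            return R, t

        def positional_errors(reconstructed, surveyed):
            """Distances between registered reconstruction points and surveyed points."""
            R, t = kabsch_register(reconstructed, surveyed)
            aligned = (R @ reconstructed.T).T + t
            return np.linalg.norm(aligned - surveyed, axis=1)    # report mean/min/max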

  5. Parallax barrier engineering for image quality improvement in an autostereoscopic 3D display.

    Science.gov (United States)

    Kim, Sung-Kyu; Yoon, Ki-Hyuk; Yoon, Seon Kyu; Ju, Heongkyu

    2015-05-18

    We present an image quality improvement in a parallax barrier (PB)-based multiview autostereoscopic 3D display system under real-time tracking of the positions of a viewer's eyes. The system exploits a parallax barrier engineered to offer significantly improved quality of three-dimensional images for a moving viewer without eyewear under dynamic eye tracking. The improved image quality includes enhanced uniformity of image brightness, reduced point crosstalk, and no pseudoscopic effects. We control the ratio between two parameters, i.e., the pixel size and the aperture of a parallax barrier slit, to improve the uniformity of image brightness at a viewing zone. The eye tracking, which monitors the positions of a viewer's eyes, enables the pixel data control software to turn on only the pixels for view images near the viewer's eyes (the other pixels are turned off), thus reducing point crosstalk. The software combined with eye tracking provides the correct images for the respective eyes, therefore producing no pseudoscopic effects at zone boundaries. The viewing zone can span an area larger than the central viewing zone offered by a conventional PB-based multiview autostereoscopic 3D display (no eye tracking). Our 3D display system also provides multiple views for motion parallax under eye tracking. More importantly, we demonstrate a substantial reduction of point crosstalk of images at the viewing zone, its level being comparable to that of a commercialized eyewear-assisted 3D display system. The multiview autostereoscopic 3D display presented can greatly resolve the point crosstalk problem, which is one of the critical factors that have made it difficult for previous multiview autostereoscopic 3D display technologies to replace eyewear-assisted counterparts.

  6. AUTOMATIC TEXTURE RECONSTRUCTION OF 3D CITY MODEL FROM OBLIQUE IMAGES

    Directory of Open Access Journals (Sweden)

    J. Kang

    2016-06-01

    In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches tend to produce texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework to generate textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. Firstly, a mesh parameterization procedure comprising mesh segmentation and mesh unfolding is performed to reduce geometric distortion in the process of mapping 2D texture to the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images using their exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a city dataset. The resulting mesh model can be textured with the created texture without resampling. Experimental results show that our method can effectively mitigate the occurrence of texture fragmentation. This demonstrates that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.

  7. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that needs to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
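    Amdahl's law, the scalability model referred to above, bounds the achievable speedup by the serial fraction of the workload. The short Python sketch below illustrates it with an assumed 99% parallel fraction; the figures are for illustration only, not measured properties of the HPC 3D-MIP platform.

        def amdahl_speedup(parallel_fraction, n_cores):
            """Amdahl's law: speedup when only `parallel_fraction` of the work
            parallelizes perfectly across n_cores."""
            serial = 1.0 - parallel_fraction
            return 1.0 / (serial + parallel_fraction / n_cores)

        # With a 99% parallel workload the speedup saturates well below the core count.
        for cores in (12, 48, 256):
            print(cores, round(amdahl_speedup(0.99, cores), 1))   # 10.8, 32.7, 72.1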

  8. 3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading

    Science.gov (United States)

    Cho, Nam-Hoon; Choi, Heung-Kook

    2014-01-01

    One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system. PMID:25371701
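    A 3D grey-level co-occurrence matrix of the kind used above can be computed directly with NumPy, as in the sketch below. The offset, the number of grey levels and the single contrast feature are illustrative choices and do not reproduce the full feature set of the study.

        import numpy as np

        def glcm_3d(volume, offset=(0, 0, 1), levels=16):
            """Normalized grey-level co-occurrence matrix of a 3D volume for one offset.
            The volume is first quantized to `levels` grey levels."""
            v = np.asarray(volume, dtype=float)
            q = np.floor((v - v.min()) / (np.ptp(v) + 1e-12) * (levels - 1)).astype(int)
            dz, dy, dx = offset
            Z, Y, X = q.shape
            a = q[max(0, -dz):Z - max(0, dz),
                  max(0, -dy):Y - max(0, dy),
                  max(0, -dx):X - max(0, dx)]
            b = q[max(0, dz):Z - max(0, -dz),
                  max(0, dy):Y - max(0, -dy),
                  max(0, dx):X - max(0, -dx)]
            glcm = np.zeros((levels, levels))
            np.add.at(glcm, (a.ravel(), b.ravel()), 1)        # count voxel pairs
            return glcm / glcm.sum()

        def glcm_contrast(glcm):
            """One example Haralick feature computed from the normalized GLCM."""
            i, j = np.indices(glcm.shape)
            return float(np.sum((i - j) ** 2 * glcm))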

  9. 3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading

    Directory of Open Access Journals (Sweden)

    Tae-Yun Kim

    2014-01-01

    One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system.

  10. Integrated computer-aided forensic case analysis, presentation, and documentation based on multimodal 3D data.

    Science.gov (United States)

    Bornik, Alexander; Urschler, Martin; Schmalstieg, Dieter; Bischof, Horst; Krauskopf, Astrid; Schwark, Thorsten; Scheurer, Eva; Yen, Kathrin

    2018-06-01

    Three-dimensional (3D) crime scene documentation using 3D scanners and medical imaging modalities like computed tomography (CT) and magnetic resonance imaging (MRI) are increasingly applied in forensic casework. Together with digital photography, these modalities enable comprehensive and non-invasive recording of forensically relevant information regarding injuries/pathologies inside the body and on its surface. Furthermore, it is possible to capture traces and items at crime scenes. Such digitally secured evidence has the potential to similarly increase case understanding by forensic experts and non-experts in court. Unlike photographs and 3D surface models, images from CT and MRI are not self-explanatory. Their interpretation and understanding requires radiological knowledge. Findings in tomography data must not only be revealed, but should also be jointly studied with all the 2D and 3D data available in order to clarify spatial interrelations and to optimally exploit the data at hand. This is technically challenging due to the heterogeneous data representations including volumetric data, polygonal 3D models, and images. This paper presents a novel computer-aided forensic toolbox providing tools to support the analysis, documentation, annotation, and illustration of forensic cases using heterogeneous digital data. Conjoint visualization of data from different modalities in their native form and efficient tools to visually extract and emphasize findings help experts to reveal unrecognized correlations and thereby enhance their case understanding. Moreover, the 3D case illustrations created for case analysis represent an efficient means to convey the insights gained from case analysis to forensic non-experts involved in court proceedings like jurists and laymen. The capability of the presented approach in the context of case analysis, its potential to speed up legal procedures and to ultimately enhance legal certainty is demonstrated by introducing a number of

  11. Externally calibrated parallel imaging for 3D multispectral imaging near metallic implants using broadband ultrashort echo time imaging.

    Science.gov (United States)

    Wiens, Curtis N; Artz, Nathan S; Jang, Hyungseok; McMillan, Alan B; Reeder, Scott B

    2017-06-01

    To develop an externally calibrated parallel imaging technique for three-dimensional multispectral imaging (3D-MSI) in the presence of metallic implants. A fast, ultrashort echo time (UTE) calibration acquisition is proposed to enable externally calibrated parallel imaging techniques near metallic implants. The proposed calibration acquisition uses a broadband radiofrequency (RF) pulse to excite the off-resonance induced by the metallic implant, fully phase-encoded imaging to prevent in-plane distortions, and UTE to capture rapidly decaying signal. The performance of the externally calibrated parallel imaging reconstructions was assessed using phantoms and in vivo examples. Phantom and in vivo comparisons to self-calibrated parallel imaging acquisitions show that significant reductions in acquisition times can be achieved using externally calibrated parallel imaging with comparable image quality. Acquisition time reductions are particularly large for fully phase-encoded methods such as spectrally resolved fully phase-encoded three-dimensional (3D) fast spin-echo (SR-FPE), in which scan time reductions of up to 8 min were obtained. A fully phase-encoded acquisition with broadband excitation and UTE enabled externally calibrated parallel imaging for 3D-MSI, eliminating the need for repeated calibration regions at each frequency offset. Significant reductions in acquisition time can be achieved, particularly for fully phase-encoded methods like SR-FPE. Magn Reson Med 77:2303-2309, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  12. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices.

    Science.gov (United States)

    He, Longjun; Ming, Xing; Liu, Qian

    2014-04-01

    With growing computing capability and display size, mobile devices have been used as tools to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device alone cannot provide a satisfactory quality of experience for radiologists. This paper describes a medical system that retrieves medical images from the picture archiving and communication system (PACS) and delivers them to the mobile device over the wireless network. In the proposed application, the mobile device obtains patient information and medical images through a proxy server connected to the PACS server. Meanwhile, the proxy server integrates a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction and direct volume rendering, to provide shape, brightness, depth and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that automatically changes remote rendering parameters to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of medical images over the wireless network in the proposed application are also discussed. The results demonstrate that the proposed medical application can provide a smooth interactive experience in WLAN and 3G networks.

  13. A joint estimation detection of Glaucoma progression in 3D spectral domain optical coherence tomography optic nerve head images

    Science.gov (United States)

    Belghith, Akram; Bowd, Christopher; Weinreb, Robert N.; Zangwill, Linda M.

    2014-03-01

    Glaucoma is an ocular disease characterized by distinctive changes in the optic nerve head (ONH) and visual field. Glaucoma can strike without symptoms and causes blindness if left untreated. Therefore, early disease detection is important so that treatment can be initiated and blindness prevented. In this context, important advances in technology for non-invasive imaging of the eye have been made, providing quantitative tools to measure structural changes in ONH topography, an essential element for glaucoma detection and monitoring. 3D spectral domain optical coherence tomography (SD-OCT), an optical imaging technique, has been commonly used to discriminate glaucomatous from healthy subjects. In this paper, we present a new framework for detection of glaucoma progression using 3D SD-OCT images. In contrast to previous works that used the retinal nerve fiber layer (RNFL) thickness measurement provided by commercially available spectral-domain optical coherence tomographs, we consider the whole 3D volume for change detection. To integrate a priori knowledge, and in particular the spatial voxel dependency in the change detection map, we propose the use of a Markov random field to handle such dependency. To accommodate the presence of false positive detections, the estimated change detection map is then used to classify a 3D SD-OCT image into the "non-progressing" and "progressing" glaucoma classes, based on a fuzzy logic classifier. We compared the diagnostic performance of the proposed framework to existing methods of progression detection.
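    A Markov random field change-detection map of this kind is often optimized with a simple iterative scheme such as Iterated Conditional Modes (ICM). The NumPy sketch below shows a synchronous ICM-style update with a Potts smoothness prior on a 3D difference volume; the unary costs, the threshold and the smoothness weight beta are illustrative assumptions, not the model used in the paper.

        import numpy as np

        def icm_change_map(diff, threshold, beta=0.5, n_iter=5):
            """Binary change map from a 3D difference volume with a Potts MRF prior,
            updated with synchronous ICM-style sweeps."""
            d = np.abs(diff)
            cost0 = d - threshold          # cost of "no change" grows with |diff|
            cost1 = threshold - d          # cost of "change" grows as |diff| shrinks
            labels = (cost1 < cost0).astype(int)
            for _ in range(n_iter):
                pad = np.pad(labels, 1)    # zero padding: border voxels see label-0 neighbours
                n_change = (pad[2:, 1:-1, 1:-1] + pad[:-2, 1:-1, 1:-1] +
                            pad[1:-1, 2:, 1:-1] + pad[1:-1, :-2, 1:-1] +
                            pad[1:-1, 1:-1, 2:] + pad[1:-1, 1:-1, :-2])
                e0 = cost0 + beta * n_change          # disagreeing neighbours if label 0
                e1 = cost1 + beta * (6 - n_change)    # disagreeing neighbours if label 1
                labels = (e1 < e0).astype(int)
            return labels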

  14. A 3D Image Filter for Parameter-Free Segmentation of Macromolecular Structures from Electron Tomograms

    Science.gov (United States)

    Ali, Rubbiya A.; Landsberg, Michael J.; Knauth, Emily; Morgan, Garry P.; Marsh, Brad J.; Hankamer, Ben

    2012-01-01

    3D image reconstruction of large cellular volumes by electron tomography (ET) at high (≤5 nm) resolution can now routinely resolve organellar and compartmental membrane structures, protein coats, cytoskeletal filaments, and macromolecules. However, current image analysis methods for identifying in situ macromolecular structures within the crowded 3D ultrastructural landscape of a cell remain labor-intensive, time-consuming, and prone to user-bias and/or error. This paper demonstrates the development and application of a parameter-free, 3D implementation of the bilateral edge-detection (BLE) algorithm for the rapid and accurate segmentation of cellular tomograms. The performance of the 3D BLE filter has been tested on a range of synthetic and real biological data sets and validated against current leading filters—the pseudo 3D recursive and Canny filters. The performance of the 3D BLE filter was found to be comparable to or better than that of both the 3D recursive and Canny filters while offering the significant advantage that it requires no parameter input or optimisation. Edge widths as little as 2 pixels are reproducibly detected with signal intensity and grey scale values as low as 0.72% above the mean of the background noise. The 3D BLE thus provides an efficient method for the automated segmentation of complex cellular structures across multiple scales for further downstream processing, such as cellular annotation and sub-tomogram averaging, and provides a valuable tool for the accurate and high-throughput identification and annotation of 3D structural complexity at the subcellular level, as well as for mapping the spatial and temporal rearrangement of macromolecular assemblies in situ within cellular tomograms. PMID:22479430
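    As a point of reference for edge detection in cellular tomograms, a plain 3D gradient-magnitude edge map can be computed as below with SciPy. This is a generic baseline for comparison only; it is not the bilateral edge-detection (BLE) filter described in the record.

        import numpy as np
        from scipy import ndimage

        def gradient_edge_map(volume):
            """3D edge-strength map from Sobel gradients along each axis."""
            v = np.asarray(volume, dtype=float)
            gradients = [ndimage.sobel(v, axis=axis) for axis in range(3)]
            return np.sqrt(sum(g ** 2 for g in gradients))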

  15. Parametric modelling and segmentation of vertebral bodies in 3D CT and MR spine images

    International Nuclear Information System (INIS)

    Štern, Darko; Likar, Boštjan; Pernuš, Franjo; Vrtovec, Tomaž

    2011-01-01

    Accurate and objective evaluation of vertebral deformations is of significant importance in clinical diagnostics and therapy of pathological conditions affecting the spine. Although modern clinical practice is focused on three-dimensional (3D) computed tomography (CT) and magnetic resonance (MR) imaging techniques, the established methods for evaluation of vertebral deformations are limited to measuring deformations in two-dimensional (2D) x-ray images. In this paper, we propose a method for quantitative description of vertebral body deformations by efficient modelling and segmentation of vertebral bodies in 3D. The deformations are evaluated from the parameters of a 3D superquadric model, which is initialized as an elliptical cylinder and then gradually deformed by introducing transformations that yield a more detailed representation of the vertebral body shape. After modelling the vertebral body shape with 25 clinically meaningful parameters and the vertebral body pose with six rigid body parameters, the 3D model is aligned to the observed vertebral body in the 3D image. The performance of the method was evaluated on 75 vertebrae from CT and 75 vertebrae from T2-weighted MR spine images, extracted from the thoracolumbar part of normal and pathological spines. The results show that the proposed method can be used for 3D segmentation of vertebral bodies in CT and MR images, as the proposed 3D model is able to describe both normal and pathological vertebral body deformations. The method may therefore be used for initialization of whole vertebra segmentation or for quantitative measurement of vertebral body deformations.
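    The superquadric (superellipsoid) model named above is defined by an inside-outside function of the semi-axes and two shape exponents. The NumPy sketch below writes out that standard function; the parameterization with 25 clinically meaningful parameters used in the paper goes well beyond this basic form.

        import numpy as np

        def superquadric_inside_outside(x, y, z, a, b, c, e1, e2):
            """Superellipsoid inside-outside function F: F < 1 inside, F = 1 on the
            surface, F > 1 outside. (a, b, c) are semi-axes, (e1, e2) shape exponents.
            e1 = e2 = 1 gives an ellipsoid; a small e1 with e2 = 1 approaches the
            elliptical cylinder used as the initial model."""
            xy = (np.abs(x / a) ** (2.0 / e2) +
                  np.abs(y / b) ** (2.0 / e2)) ** (e2 / e1)
            return xy + np.abs(z / c) ** (2.0 / e1)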

  16. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    International Nuclear Information System (INIS)

    Chen, G; Pan, X; Stayman, J; Samei, E

    2014-01-01

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical

  17. 3D T2-weighted imaging to shorten multiparametric prostate MRI protocols.

    Science.gov (United States)

    Polanec, Stephan H; Lazar, Mathias; Wengert, Georg J; Bickel, Hubert; Spick, Claudio; Susani, Martin; Shariat, Shahrokh; Clauser, Paola; Baltzer, Pascal A T

    2018-04-01

    To determine whether 3D acquisitions provide equivalent image quality, lesion delineation quality and PI-RADS v2 performance compared to 2D acquisitions in T2-weighted imaging of the prostate at 3 T. This IRB-approved, prospective study included 150 consecutive patients (mean age 63.7 years, 35-84 years; mean PSA 7.2 ng/ml, 0.4-31.1 ng/ml). Two uroradiologists (R1, R2) independently rated image quality and lesion delineation quality using a five-point ordinal scale and assigned a PI-RADS score for 2D and 3D T2-weighted image data sets. Data were compared using visual grading characteristics (VGC) and receiver operating characteristics (ROC)/area under the curve (AUC) analysis. Image quality was similarly good to excellent for 2D T2w (mean score R1, 4.3 ± 0.81; R2, 4.7 ± 0.83) and 3D T2w (mean score R1, 4.3 ± 0.82; R2, 4.7 ± 0.69), p = 0.269. Lesion delineation was rated good to