WorldWideScience

Sample records for 3d imaging system

  1. 3D Backscatter Imaging System

    Science.gov (United States)

    Turner, D. Clark (Inventor); Whitaker, Ross (Inventor)

    2016-01-01

    Systems and methods for imaging an object using backscattered radiation are described. The imaging system comprises a radiation source for irradiating an object, which source is rotationally movable about the object, and a detector for detecting backscattered radiation from the object, which can be disposed on substantially the same side of the object as the source and which can also be rotationally movable about the object. The detector can be separated into multiple detector segments, each having a single line-of-sight projection through the object and thus detecting radiation along that line of sight. Each detector segment can therefore isolate the desired component of the backscattered radiation. By moving independently of each other about the object, the source and detector can collect multiple images of the object at different angles of rotation and generate a three-dimensional reconstruction of the object. Other embodiments are described.

  2. Miniaturized 3D microscope imaging system

    Science.gov (United States)

    Lan, Yung-Sung; Chang, Chir-Weei; Sung, Hsin-Yueh; Wang, Yen-Chang; Chang, Cheng-Yi

    2015-05-01

    We designed and assembled a portable 3-D miniature microscopic imaging system with a size of 35 × 35 × 105 mm³. By integrating a microlens array (MLA) into the optical train of a handheld microscope, an image of the biological specimen can be captured in a single shot. With the light-field raw data and processing program, the focal plane can be changed digitally and the 3-D image can be reconstructed after the image has been taken. To localize an object in a 3-D volume, an automated data analysis algorithm that precisely distinguishes depth position is needed. The ability to create focal stacks from a single image allows moving specimens to be recorded. Applying a light field microscope algorithm to these focal stacks produces a set of cross sections, which can be visualized using 3-D rendering. Furthermore, we have developed a series of design rules to enhance pixel usage efficiency and reduce the crosstalk between microlenses in order to obtain good image quality. In this paper, we demonstrate a handheld light field microscope (HLFM) that distinguishes two different-color fluorescent particles separated by a cover glass within a 600 µm range, and show its focal stacks and 3-D positions.
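    The focal-stack step described above can be illustrated with a generic shift-and-add refocusing sketch. It is not the authors' algorithm: the sub-aperture views, their pixel offsets and the refocus slopes below are hypothetical placeholders for data that would come from the microlens-array raw image.

```python
# Minimal shift-and-add light-field refocusing sketch: each sub-aperture view is
# shifted in proportion to its (u, v) offset and the views are averaged, so a
# different slope brings a different depth plane into focus.
import numpy as np

def refocus(subviews, slope):
    """subviews: dict {(u, v): 2D image}, with (u, v) the sub-aperture offset."""
    acc = None
    for (u, v), img in subviews.items():
        shifted = np.roll(np.roll(img, int(round(slope * u)), axis=0),
                          int(round(slope * v)), axis=1)
        acc = shifted if acc is None else acc + shifted
    return acc / len(subviews)

def focal_stack(subviews, slopes):
    """One digitally refocused image per slope."""
    return np.stack([refocus(subviews, s) for s in slopes])

# Toy usage with random data standing in for real sub-aperture views.
rng = np.random.default_rng(0)
views = {(u, v): rng.random((64, 64)) for u in range(-2, 3) for v in range(-2, 3)}
print(focal_stack(views, slopes=np.linspace(-1.0, 1.0, 5)).shape)  # (5, 64, 64)
```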

  3. Glasses-free 3D viewing systems for medical imaging

    Science.gov (United States)

    Magalhães, Daniel S. F.; Serra, Rolando L.; Vannucci, André L.; Moreno, Alfredo B.; Li, Li M.

    2012-04-01

    In this work we show two different glasses-free 3D viewing systems for medical imaging: a stereoscopic system that employs a vertically dispersive holographic screen (VDHS) and a multi-autostereoscopic system, both used to produce 3D MRI/CT images. We describe how to obtain a VDHS in holographic plates optimized for this application, with field of view of 7 cm to each eye and focal length of 25 cm, showing images done with the system. We also describe a multi-autostereoscopic system, presenting how it can generate 3D medical imaging from viewpoints of a MRI or CT image, showing results of a 3D angioresonance image.

  4. A system for finding a 3D target without a 3D image

    Science.gov (United States)

    West, Jay B.; Maurer, Calvin R., Jr.

    2008-03-01

    We present here a framework for a system that tracks one or more 3D anatomical targets without the need for a preoperative 3D image. Multiple 2D projection images are taken using a tracked, calibrated fluoroscope. The user manually locates each target on each of the fluoroscopic views. A least-squares minimization algorithm triangulates the best-fit position of each target in the 3D space of the tracking system: using the known projection matrices from 3D space into image space, we use matrix minimization to find the 3D position that projects closest to the located target positions in the 2D images. A tracked endoscope, whose projection geometry has been pre-calibrated, is then introduced to the operating field. Because the position of the targets in the tracking space is known, a rendering of the targets may be projected onto the endoscope view, thus allowing the endoscope to be easily brought into the target vicinity even when the endoscope field of view is blocked, e.g. by blood or tissue. An example application for such a device is trauma surgery, e.g., removal of a foreign object. Time, scheduling considerations and concern about excessive radiation exposure may prohibit the acquisition of a 3D image, such as a CT scan, which is required for traditional image guidance systems; it is however advantageous to have 3D information about the target locations available, which is not possible using fluoroscopic guidance alone.
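    The triangulation step described above can be sketched as a standard direct linear transform: each calibrated view contributes two linear constraints, and the homogeneous 3D point that minimizes the algebraic error is the smallest right singular vector of the stacked system. This is a generic sketch rather than the authors' exact minimization, and the projection matrices and pixel coordinates below are made-up placeholders.

```python
# Linear least-squares triangulation of one target from N calibrated 2D views,
# given the known 3x4 projection matrices of the tracked fluoroscope poses.
import numpy as np

def triangulate(projections, pixels):
    """projections: list of 3x4 matrices; pixels: list of (x, y) image points."""
    rows = []
    for P, (x, y) in zip(projections, pixels):
        rows.append(x * P[2] - P[0])   # x*(p3 . X) - (p1 . X) = 0
        rows.append(y * P[2] - P[1])   # y*(p3 . X) - (p2 . X) = 0
    _, _, vt = np.linalg.svd(np.stack(rows))
    X = vt[-1]
    return X[:3] / X[3]                # dehomogenize

# Toy example: two hypothetical views of the point (10, 20, 300).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-50.0], [0.0], [0.0]])])
X_true = np.array([10.0, 20.0, 300.0, 1.0])
pixels = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate([P1, P2], pixels))   # ~[10, 20, 300]
```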

  5. A 3D surface imaging system for assessing human obesity

    Science.gov (United States)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  6. Image-Based 3D Face Modeling System

    Directory of Open Access Journals (Sweden)

    Vladimir Vezhnevets

    2005-08-01

    Full Text Available This paper describes an automatic system for 3D face modeling using frontal and profile images taken by an ordinary digital camera. The system consists of four subsystems including frontal feature detection, profile feature detection, shape deformation, and texture generation modules. The frontal and profile feature detection modules automatically extract the facial parts such as the eye, nose, mouth, and ear. The shape deformation module utilizes the detected features to deform the generic head mesh model such that the deformed model coincides with the detected features. A texture is created by combining the facial textures augmented from the input images and the synthesized texture and mapped onto the deformed generic head model. This paper provides a practical system for 3D face modeling, which is highly automated by aggregating, customizing, and optimizing a bunch of individual computer vision algorithms. The experimental results show a highly automated process of modeling, which is sufficiently robust to various imaging conditions. The whole model creation including all the optional manual corrections takes only 2∼3 minutes.

  7. 3D photoacoustic imaging

    Science.gov (United States)

    Carson, Jeffrey J. L.; Roumeliotis, Michael; Chaudhary, Govind; Stodilka, Robert Z.; Anastasio, Mark A.

    2010-06-01

    Our group has concentrated on development of a 3D photoacoustic imaging system for biomedical imaging research. The technology employs a sparse parallel detection scheme and specialized reconstruction software to obtain 3D optical images using a single laser pulse. With the technology we have been able to capture 3D movies of translating point targets and rotating line targets. The current limitation of our 3D photoacoustic imaging approach is its inability to reconstruct complex objects in the field of view. This is primarily due to the relatively small number of projections used to reconstruct objects. However, in many photoacoustic imaging situations, only a few objects may be present in the field of view and these objects may have very high contrast compared to the background; that is, the objects have sparse properties. Therefore, our work had two objectives: (i) to utilize mathematical tools to evaluate 3D photoacoustic imaging performance, and (ii) to test image reconstruction algorithms that prefer sparseness in the reconstructed images. Our approach was to utilize singular value decomposition techniques to study the imaging operator of the system and evaluate the complexity of objects that could potentially be reconstructed. We also compared the performance of two image reconstruction algorithms (algebraic reconstruction and l1-norm techniques) at reconstructing objects of increasing sparseness. We observed that for a 15-element detection scheme, the number of measurable singular vectors representative of the imaging operator was consistent with the demonstrated ability to reconstruct point and line targets in the field of view. We also observed that the l1-norm reconstruction technique, which is known to prefer sparseness in reconstructed images, was superior to the algebraic reconstruction technique. Based on these findings, we concluded (i) that singular value decomposition of the imaging operator provides valuable insight into the capabilities of
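    The two analysis steps mentioned above, counting the measurable singular vectors of the imaging operator and comparing an algebraic reconstruction with an l1-preferring one, can be sketched generically. The system matrix below is random and merely stands in for the real photoacoustic imaging operator, and the l1 solver is a plain ISTA loop, not necessarily the authors' implementation.

```python
# Generic sketch: SVD analysis of an imaging operator H and a comparison of a
# minimum-norm (algebraic) solution with an l1-regularized ISTA solution.
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((60, 400))                     # stand-in imaging operator
x_true = np.zeros(400); x_true[[20, 150, 300]] = 1.0   # sparse object: 3 "point targets"
y = H @ x_true

# (i) How many singular values rise above a small threshold?
s = np.linalg.svd(H, compute_uv=False)
print("significant singular values:", int(np.sum(s > 0.01 * s[0])))

# (ii) Minimum-norm least squares vs l1-regularized ISTA.
x_ls = np.linalg.pinv(H) @ y

def ista(H, y, lam=0.1, n_iter=2000):
    L = np.linalg.norm(H, 2) ** 2                      # Lipschitz constant of the gradient
    x = np.zeros(H.shape[1])
    for _ in range(n_iter):
        g = x - (H.T @ (H @ x - y)) / L                # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

x_l1 = ista(H, y)
print("least-squares error:", np.linalg.norm(x_ls - x_true))
print("l1 (ISTA) error    :", np.linalg.norm(x_l1 - x_true))
```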

  8. Autostereoscopic 3D visualization and image processing system for neurosurgery.

    Science.gov (United States)

    Meyer, Tobias; Kuß, Julia; Uhlemann, Falk; Wagner, Stefan; Kirsch, Matthias; Sobottka, Stephan B; Steinmeier, Ralf; Schackert, Gabriele; Morgenstern, Ute

    2013-06-01

    A demonstrator system for planning neurosurgical procedures was developed based on commercial hardware and software. The system combines an easy-to-use environment for surgical planning with high-end visualization and the opportunity to analyze data sets for research purposes. The demonstrator system is based on the software AMIRA. Specific algorithms for segmentation, elastic registration, and visualization have been implemented and adapted to the clinical workflow. Modules from AMIRA and the image processing library Insight Segmentation and Registration Toolkit (ITK) can be combined to solve various image processing tasks. Customized modules tailored to specific clinical problems can easily be implemented using the AMIRA application programming interface and a self-developed framework for ITK filters. Visualization is done via autostereoscopic displays, which provide a 3D impression without viewing aids. A Spaceball device allows a comfortable, intuitive way of navigation in the data sets. Via an interface to a neurosurgical navigation system, the demonstrator system can be used intraoperatively. The precision, applicability, and benefit of the demonstrator system for planning of neurosurgical interventions and for neurosurgical research were successfully evaluated by neurosurgeons using phantom and patient data sets.

  9. 3-D Imaging Systems for Agricultural Applications—A Review

    Science.gov (United States)

    Vázquez-Arellano, Manuel; Griepentrog, Hans W.; Reiser, David; Paraforos, Dimitris S.

    2016-01-01

    Increasing resource efficiency through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture. PMID:27136560

  10. 3-D Imaging Systems for Agricultural Applications-A Review.

    Science.gov (United States)

    Vázquez-Arellano, Manuel; Griepentrog, Hans W; Reiser, David; Paraforos, Dimitris S

    2016-04-29

    Increasing resource efficiency through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  11. 3-D Imaging Systems for Agricultural Applications—A Review

    Directory of Open Access Journals (Sweden)

    Manuel Vázquez-Arellano

    2016-04-01

    Full Text Available Increasing resource efficiency through the automation of agriculture requires more information about the production process, as well as about process and machinery status. Sensors are necessary for monitoring the status and condition of production by recognizing the surrounding structures such as objects, field structures, natural or artificial markers, and obstacles. Currently, three-dimensional (3-D) sensors are economically affordable and technologically advanced to a great extent, so a breakthrough is already possible if enough research projects are commercialized. The aim of this review paper is to investigate the state of the art of 3-D vision systems in agriculture, and the role and value that only 3-D data can have in providing information about environmental structures, based on the recent progress in optical 3-D sensors. The structure of this review consists of an overview of the different optical 3-D vision techniques, based on their basic principles. Afterwards, their applications in agriculture are reviewed. The main focus lies on vehicle navigation, and crop and animal husbandry. The depth dimension brought by 3-D sensors provides key information that greatly facilitates the implementation of automation and robotics in agriculture.

  12. Intensity-based image registration for 3D spatial compounding using a freehand 3D ultrasound system

    Science.gov (United States)

    Pagoulatos, Niko; Haynor, David R.; Kim, Yongmin

    2002-04-01

    3D spatial compounding involves the combination of two or more 3D ultrasound (US) data sets, acquired under different insonation angles and windows, to form a higher quality 3D US data set. An important requirement for this method to succeed is accurate registration between the US images used to form the final compounded image. We have developed a new automatic method for rigid and deformable registration of 3D US data sets acquired using a freehand 3D US system. Deformation is provided by a 3D thin-plate spline (TPS). Our method is fundamentally different from previous ones in that the acquired scattered 2D US slices are registered and compounded directly into the 3D US volume. Our approach has several benefits over traditional registration and spatial compounding methods: (i) we perform only one 3D US reconstruction, for the first acquired data set, and therefore save the computation time required to reconstruct subsequently acquired scans; (ii) for our registration we use (except for the first scan) the acquired high-resolution 2D US images rather than the 3D US reconstruction data, which are of lower quality due to the interpolation and potential subsampling associated with 3D reconstruction; and (iii) the scans performed after the first one are not required to follow the typical 3D US scanning protocol, where a large number of dense slices have to be acquired; slices can be acquired in any fashion in areas where compounding is desired. We show that by taking advantage of the similar information contained in adjacent acquired 2D US slices, we can reduce the computation time of linear and nonlinear registrations by a factor of more than 7:1, without compromising registration accuracy. Furthermore, we implemented an adaptive approximation to the 3D TPS with local bilinear transformations, allowing additional reduction of the nonlinear registration computation time by a factor of approximately 3.5. Our results are based on a commercially available

  13. 3D fingerprint imaging system based on full-field fringe projection profilometry

    Science.gov (United States)

    Huang, Shujun; Zhang, Zonghua; Zhao, Yan; Dai, Jie; Chen, Chao; Xu, Yongjia; Zhang, E.; Xie, Lili

    2014-01-01

    As a unique, unchangeable and easily acquired biometric, the fingerprint has been widely studied in academia and applied in many fields over the years. Traditional fingerprint recognition methods are based on 2D features of the fingerprint. However, the fingerprint is a 3D biological characteristic. The mapping from 3D to 2D loses one dimension of information and causes nonlinear distortion of the captured fingerprint. Therefore, it is becoming more and more important to obtain 3D fingerprint information for recognition. In this paper, a novel 3D fingerprint imaging system is presented based on the fringe projection technique to obtain 3D features and the corresponding color texture information. A series of color sinusoidal fringe patterns with optimum three-fringe numbers are projected onto a finger surface. Viewed from another direction, the fringe patterns are deformed by the finger surface and captured by a CCD camera. 3D shape data of the finger can be obtained from the captured fringe pattern images. This paper studies the prototype of the 3D fingerprint imaging system, including the principle of 3D fingerprint acquisition, hardware design of the 3D imaging system, 3D calibration of the system, and software development. Experiments are carried out by acquiring several 3D fingerprint data sets. The experimental results demonstrate the feasibility of the proposed 3D fingerprint imaging system.
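    The core fringe-analysis step can be illustrated with standard four-step phase shifting: four fringe images shifted by π/2 give the wrapped phase directly, while the absolute phase (which the paper obtains with optimum three-fringe-number unwrapping, omitted here) is then converted to height through calibration. The synthetic fringes below are placeholders.

```python
# Four-step phase-shifting sketch: recover the wrapped phase of projected
# sinusoidal fringes from four images shifted by 0, pi/2, pi and 3*pi/2.
import numpy as np

def wrapped_phase(I1, I2, I3, I4):
    return np.arctan2(I4 - I2, I1 - I3)

# Synthetic example: a tilted phase ramp standing in for a finger surface.
x = np.linspace(0, 4 * np.pi, 256)
phi_true = np.tile(x, (256, 1))
shots = [0.5 + 0.4 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi_wrapped = wrapped_phase(*shots)
# phi_wrapped equals phi_true modulo 2*pi; a multi-frequency unwrapping step
# (e.g. optimum three-fringe numbers) would follow to obtain absolute phase.
print(phi_wrapped.shape, float(phi_wrapped.min()), float(phi_wrapped.max()))
```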

  14. High definition 3D imaging lidar system using CCD

    Science.gov (United States)

    Jo, Sungeun; Kong, Hong Jin; Bang, Hyochoong

    2016-10-01

    In this study we propose and demonstrate a novel technique for high-definition three-dimensional range imaging. To meet the stringent requirements of various missions, spatial resolution and range precision are important properties for flash LIDAR systems. The proposed LIDAR system employs a polarization modulator and a CCD. When a laser pulse is emitted, it triggers the polarization modulator. The laser pulse is scattered by the target and reflected back to the LIDAR system while the polarization modulator is rotating, so its polarization state is a function of time. The laser-return pulse passes through the polarization modulator in a certain polarization state, and that polarization state is calculated from the intensities of the laser pulses measured by the CCD. Because the relationship between time and polarization state is known, the polarization state can be converted to time-of-flight. By adopting a polarization modulator and a CCD and measuring only the energy of a laser pulse to obtain range, a high-resolution three-dimensional image can be acquired with the proposed three-dimensional imaging LIDAR system. Since this system only measures the energy of the laser pulse, a high-bandwidth detector and a high-resolution TDC are not required for high range precision. The proposed method is expected to be an alternative for many three-dimensional imaging LIDAR applications that require high resolution.
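    The range-recovery idea can be sketched under simplifying assumptions that are not taken from the paper: the modulator rotates the polarization angle linearly in time, the CCD records the pulse energy behind an analyzer, and a second reference measurement gives the total pulse energy. Under those assumptions the energy ratio maps to an angle via Malus' law, the angle to a time of flight, and the time to a range.

```python
# Sketch of polarization-modulated time-of-flight ranging under an assumed
# linear angle-vs-time modulation; all parameter values are illustrative.
import numpy as np

C = 3.0e8                     # speed of light [m/s]
OMEGA = 2 * np.pi * 1e4       # assumed modulation rate [rad/s]; unambiguous below ~3.7 km

def range_from_energies(E_analyzed, E_total):
    ratio = np.clip(E_analyzed / E_total, 0.0, 1.0)
    theta = np.arccos(np.sqrt(ratio))     # Malus: E_analyzed = E_total * cos^2(theta)
    tof = theta / OMEGA                   # assumed linear angle-vs-time mapping
    return C * tof / 2.0                  # two-way travel

# Forward-simulate a 1500 m target, then invert.
true_range = 1500.0
theta = OMEGA * (2 * true_range / C)
print(range_from_energies(np.cos(theta) ** 2, 1.0))   # ~1500.0
```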

  15. A 3-D fluorescence imaging system incorporating structured illumination technology

    Science.gov (United States)

    Antos, L.; Emord, P.; Luquette, B.; McGee, B.; Nguyen, D.; Phipps, A.; Phillips, D.; Helguera, M.

    2010-02-01

    A currently available 2-D high-resolution optical molecular imaging system was modified by the addition of a structured illumination source, Optigrid™, to investigate the feasibility of providing depth resolution along the optical axis. The modification involved the insertion of the Optigrid™ and a lens in the path between the light source and the image plane, as well as control and signal processing software. Projection of the Optigrid™ onto the imaging surface at an angle was resolved by applying the Scheimpflug principle. The illumination system implements modulation of the light source and provides a framework for capturing depth-resolved images. The system is capable of in-focus projection of the Optigrid™ at different spatial frequencies, and supports the use of different lenses. A calibration process was developed for the system to achieve consistent phase shifts of the Optigrid™. Post-processing extracted depth information by depth modulation analysis of a phantom block with fluorescent sheets at different depths. An important aspect of this effort was that it was carried out by a multidisciplinary team of engineering and science students as part of a capstone senior design program. The disciplines represented are mechanical engineering, electrical engineering and imaging science. The project was sponsored by a financial grant from New York State, with equipment support from two industrial concerns. The students were provided with a basic imaging concept and charged with developing, implementing, testing and validating a feasible proof-of-concept prototype system that was returned to the originator of the concept for further evaluation and characterization.
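    The depth discrimination offered by a projected grid can be illustrated with the classic three-phase optical-sectioning formula (Neil, Juškaitis and Wilson): only in-focus structure is modulated by the grid, and it is recovered from pairwise differences of three phase-shifted images. This is a generic sketch, not necessarily the team's depth-modulation analysis, and the images below are synthetic.

```python
# Structured-illumination optical-sectioning sketch: with the grid projected at
# phases 0, 2*pi/3 and 4*pi/3, the unmodulated (out-of-focus) background cancels.
import numpy as np

def sectioned_image(I1, I2, I3):
    # Proportional to the modulated (in-focus) signal; the unmodulated part cancels.
    return np.sqrt((I1 - I2) ** 2 + (I2 - I3) ** 2 + (I3 - I1) ** 2) * np.sqrt(2) / 3

# Synthetic test: an in-focus object modulated by the grid plus a constant
# out-of-focus background that disappears from the result.
x = np.linspace(0, 20 * np.pi, 512)
obj, background = np.abs(np.sin(0.05 * x)), 0.3
frames = [obj * (1 + 0.8 * np.cos(x + k * 2 * np.pi / 3)) + background for k in range(3)]
print(float(sectioned_image(*frames).max()))   # ~0.8 = modulation depth x peak object signal
```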

  16. Advanced 3-D Ultrasound Imaging

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer

    The main purpose of the PhD project was to develop methods that increase the 3-D ultrasound imaging quality available for the medical personnel in the clinic. Acquiring a 3-D volume gives the medical doctor the freedom to investigate the measured anatomy in any slice desirable after the scan has been completed. This allows for precise measurements of organ dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinics as 2-D imaging. A limiting factor has traditionally been the low image quality achievable using... to produce high quality 3-D images. Because of the large matrix transducers with integrated custom electronics, these systems are extremely expensive. The relatively low price of ultrasound scanners is one of the factors for the widespread use of ultrasound imaging. The high price tag on the high quality 3-D...

  17. 3D spectral imaging system for anterior chamber metrology

    Science.gov (United States)

    Anderson, Trevor; Segref, Armin; Frisken, Grant; Frisken, Steven

    2015-03-01

    Accurate metrology of the anterior chamber of the eye is useful for a number of diagnostic and clinical applications. In particular, accurate corneal topography and corneal thickness data are desirable for fitting contact lenses, screening for diseases and monitoring corneal changes. Anterior OCT systems can be used to measure anterior chamber surfaces; however, accurate curvature measurements from single-point scanning systems are known to be very sensitive to patient movement. To overcome this problem we have developed a parallel 3D spectral metrology system that captures simultaneous A-scans on a 2D lateral grid. This approach enables estimates of the elevation and curvature of anterior and posterior corneal surfaces that are robust to sample movement. Furthermore, multiple simultaneous surface measurements greatly improve the ability to register consecutive frames and enable aggregate measurements over a finer lateral grid. A key element of our approach has been to exploit standard low cost optical components, including lenslet arrays and a 2D sensor, to provide a path towards low cost implementation. We demonstrate first prototypes based on a 6 Mpixel sensor using a 250 μm pitch lenslet array with 300 sample beams to achieve an RMS elevation accuracy of 1 μm with 95 dB sensitivity and a 7.0 mm range. Initial tests on porcine eyes, model eyes and calibration spheres demonstrate the validity of the concept. With the next iteration of designs we expect to be able to achieve over 1000 simultaneous A-scans at more than 75 frames per second.
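    Each of the parallel channels is, in principle, a spectral-domain A-scan: the depth profile along one beam is recovered by Fourier transforming the spectral interferogram over wavenumber. The generic sketch below assumes the spectrum is already sampled uniformly in k (real systems must resample to achieve this) and uses synthetic data.

```python
# Generic spectral-domain A-scan sketch: remove the DC term, apodize, and
# Fourier transform the spectral interferogram to obtain a depth profile.
import numpy as np

def a_scan(spectrum):
    s = spectrum - spectrum.mean()            # suppress the DC term
    s = s * np.hanning(s.size)                # apodize to reduce sidelobes
    return np.abs(np.fft.rfft(s))             # reflectivity vs depth (arbitrary units)

# Toy interferogram: two reflectors give two spectral fringe frequencies.
k = np.linspace(0.0, 1.0, 2048)
spectrum = 1.0 + 0.5 * np.cos(2 * np.pi * 120 * k) + 0.2 * np.cos(2 * np.pi * 300 * k)
profile = a_scan(spectrum)
print(int(np.argmax(profile)))                # strongest reflector appears near bin 120
```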

  18. D3D augmented reality imaging system: proof of concept in mammography.

    Science.gov (United States)

    Douglas, David B; Petricoin, Emanuel F; Liotta, Lance; Wilson, Eugene

    2016-01-01

    The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called "depth 3-dimensional (D3D) augmented reality". A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice.

  19. Air-touch interaction system for integral imaging 3D display

    Science.gov (United States)

    Dong, Han Yuan; Xiang, Lee Ming; Lee, Byung Gook

    2016-07-01

    In this paper, we propose an air-touch interaction system for a tabletop-type integral imaging 3D display. The system consists of a real 3D image generation system based on the integral imaging technique and an interaction device using a real-time finger detection interface. We used multi-layer B-spline surface approximation to easily detect the fingertip and gestures at less than 10 cm above the screen from the input hand image. The proposed system can be used as an effective human-computer interaction method for tabletop-type 3D displays.

  20. Image quality of a cone beam O-arm 3D imaging system

    Science.gov (United States)

    Zhang, Jie; Weir, Victor; Lin, Jingying; Hsiung, Hsiang; Ritenour, E. Russell

    2009-02-01

    The O-arm is a cone beam imaging system designed primarily to support orthopedic surgery and is also used for image-guided and vascular surgery. Using a gantry that can be opened or closed, the O-arm can function as a 2-dimensional (2D) fluoroscopy device or collect 3-dimensional (3D) volumetric imaging data like a CT system. Clinical applications of the O-arm in spine surgical procedures, assessment of pedicle screw position, and kyphoplasty procedures show that the O-arm 3D mode provides enhanced imaging information compared to radiographs or fluoroscopy alone. In this study, the image quality of an O-arm system was quantitatively evaluated. A 20 cm diameter CATPHAN 424 phantom was scanned using the pre-programmed head protocols: small/medium (120 kVp, 100 mAs), large (120 kVp, 128 mAs), and extra-large (120 kVp, 160 mAs) in 3D mode. High resolution reconstruction mode (512×512×0.83 mm) was used to reconstruct images for the analysis of low and high contrast resolution, and noise power spectrum. MTF was measured using the point spread function. The results show that the O-arm image is uniform but with a noise pattern that cannot be removed by simply increasing the mAs. The high contrast resolution of the O-arm system was approximately 9 lp/cm. The system has a 10% MTF at 0.45 mm. The low-contrast resolution could not be determined due to the noise pattern. For surgery where the locations of a structure are emphasized over a survey of all image details, the image quality of the O-arm is well accepted clinically.
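    The MTF measurement mentioned above can be sketched generically: the MTF is the normalized magnitude of the Fourier transform of the (line or point) spread function, and the 10% frequency is read off by interpolation. The Gaussian spread function and sampling below are placeholders, not O-arm data, and reading resolution as the half-period of that frequency is only one common convention.

```python
# Generic MTF-from-spread-function sketch: Fourier transform a 1-D spread
# function, normalize to zero frequency, and locate the 10% crossing.
import numpy as np

pixel_mm = 0.083                               # assumed sampling, not the O-arm value
x = (np.arange(512) - 256) * pixel_mm
lsf = np.exp(-x**2 / (2 * 0.2**2))             # placeholder Gaussian (sigma = 0.2 mm)
lsf /= lsf.sum()

mtf = np.abs(np.fft.rfft(lsf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(lsf.size, d=pixel_mm)  # cycles per mm

idx = int(np.argmax(mtf < 0.10))               # first frequency bin below 10%
f10 = np.interp(0.10, [mtf[idx], mtf[idx - 1]], [freqs[idx], freqs[idx - 1]])
print(f"10% MTF at {f10:.2f} lp/mm, i.e. a half-period of {1/(2*f10):.2f} mm")
```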

  1. 3D-TV System with Depth-Image-Based Rendering Architectures, Techniques and Challenges

    CERN Document Server

    Zhao, Yin; Yu, Lu; Tanimoto, Masayuki

    2013-01-01

    Riding on the success of 3D cinema blockbusters and advances in stereoscopic display technology, 3D video applications have gathered momentum in recent years. 3D-TV System with Depth-Image-Based Rendering: Architectures, Techniques and Challenges surveys depth-image-based 3D-TV systems, which are expected to be put into applications in the near future. Depth-image-based rendering (DIBR) significantly enhances the 3D visual experience compared to stereoscopic systems currently in use. DIBR techniques make it possible to generate additional viewpoints using 3D warping techniques to adjust the perceived depth of stereoscopic videos and provide for auto-stereoscopic displays that do not require glasses for viewing the 3D image.   The material includes a technical review and literature survey of components and complete systems, solutions for technical issues, and implementation of prototypes. The book is organized into four sections: System Overview, Content Generation, Data Compression and Transmission, and 3D V...

  2. Towards 3D ultrasound image based soft tissue tracking: a transrectal ultrasound prostate image alignment system

    CERN Document Server

    Baumann, Michael; Daanen, Vincent; Troccaz, Jocelyne

    2007-01-01

    The emergence of real-time 3D ultrasound (US) makes it possible to consider image-based tracking of subcutaneous soft tissue targets for computer guided diagnosis and therapy. We propose a 3D transrectal US based tracking system for precise prostate biopsy sample localisation. The aim is to improve sample distribution, to enable targeting of unsampled regions for repeated biopsies, and to make post-interventional quality controls possible. Since the patient is not immobilized, since the prostate is mobile and due to the fact that probe movements are only constrained by the rectum during biopsy acquisition, the tracking system must be able to estimate rigid transformations that are beyond the capture range of common image similarity measures. We propose a fast and robust multi-resolution attribute-vector registration approach that combines global and local optimization methods to solve this problem. Global optimization is performed on a probe movement model that reduces the dimensionality of the search space a...

  3. GOTHIC CHURCHES IN PARIS ST GERVAIS ET ST PROTAIS IMAGE MATCHING 3D RECONSTRUCTION TO UNDERSTAND THE VAULTS SYSTEM GEOMETRY

    Directory of Open Access Journals (Sweden)

    M. Capone

    2015-02-01

    benefits and the troubles. From a methodological point of view this is our workflow: - theoretical study about geometrical configuration of rib vault systems; - 3D model based on theoretical hypothesis about geometric definition of the vaults' form; - 3D model based on image matching 3D reconstruction methods; - comparison between 3D theoretical model and 3D model based on image matching;

  4. D3D augmented reality imaging system: proof of concept in mammography

    Directory of Open Access Journals (Sweden)

    Douglas DB

    2016-08-01

    Full Text Available David B Douglas,1 Emanuel F Petricoin,2 Lance Liotta,2 Eugene Wilson3 1Department of Radiology, Stanford University, Palo Alto, CA, 2Center for Applied Proteomics and Molecular Medicine, George Mason University, Manassas, VA, 3Department of Radiology, Fort Benning, Columbus, GA, USA Purpose: The purpose of this article is to present images from simulated breast microcalcifications and assess the pattern of the microcalcifications with a technical development called “depth 3-dimensional (D3D) augmented reality”. Materials and methods: A computer, head display unit, joystick, D3D augmented reality software, and an in-house script of simulated data of breast microcalcifications in a ductal distribution were used. No patient data was used and no statistical analysis was performed. Results: The D3D augmented reality system demonstrated stereoscopic depth perception by presenting a unique image to each eye, focal point convergence, head position tracking, 3D cursor, and joystick fly-through. Conclusion: The D3D augmented reality imaging system offers image viewing with depth perception and focal point convergence. The D3D augmented reality system should be tested to determine its utility in clinical practice. Keywords: augmented reality, 3D medical imaging, radiology, depth perception

  5. Small SWAP 3D imaging flash ladar for small tactical unmanned air systems

    Science.gov (United States)

    Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.

    2015-05-01

    The Space Dynamics Laboratory (SDL), working with Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms and queuing. The small SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return and <10 cm range accuracy through foliage in real-time. The 3D imaging flash ladar is designed for a STUAS with a complete system SWAP estimate of <9 kg, <0.2 m³ and <350 W power. The system is modeled using LadarSIM, a MATLAB®- and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We will present the concept design and modeled performance predictions.

  6. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer.

    Science.gov (United States)

    Douglas, David B; Boone, John M; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. A case of breast cancer imaged using contrast-enhanced breast CT (Computed Tomography) was viewed with the augmented reality imaging, which uses a head display unit (HDU) and joystick control interface. The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence and the use of a 3D cursor and joy-stick enabled fly through with visualization of the spiculations extending from the breast cancer. The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine the utility in clinical practice.

  7. Augmented Reality Imaging System: 3D Viewing of a Breast Cancer

    Science.gov (United States)

    Douglas, David B.; Boone, John M.; Petricoin, Emanuel; Liotta, Lance; Wilson, Eugene

    2016-01-01

    Objective To display images of breast cancer from a dedicated breast CT using Depth 3-Dimensional (D3D) augmented reality. Methods A case of breast cancer imaged using contrast-enhanced breast CT (Computed Tomography) was viewed with the augmented reality imaging, which uses a head display unit (HDU) and joystick control interface. Results The augmented reality system demonstrated 3D viewing of the breast mass with head position tracking, stereoscopic depth perception, focal point convergence and the use of a 3D cursor and joy-stick enabled fly through with visualization of the spiculations extending from the breast cancer. Conclusion The augmented reality system provided 3D visualization of the breast cancer with depth perception and visualization of the mass's spiculations. The augmented reality system should be further researched to determine the utility in clinical practice. PMID:27774517

  8. Development of goniophotometric imaging system for recording reflectance spectra of 3D objects

    Science.gov (United States)

    Tonsho, Kazutaka; Akao, Y.; Tsumura, Norimichi; Miyake, Yoichi

    2001-12-01

    In recent years, it has become necessary to develop systems for 3D capture of archives in museums and galleries. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and an internet or virtual museum via the World Wide Web. To achieve this goal, we have developed a gonio-photometric imaging system using a highly accurate multi-spectral camera and a 3D digitizer. In this paper, a gonio-photometric imaging method is introduced for recording 3D objects. Five-band images of the object are taken under 7 different illumination angles. The 5-band image sequences are then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract the gonio-photometric properties of the object. Images of the 3D object under illuminants with arbitrary spectral radiant distributions, illumination angles, and viewpoints are rendered using OpenGL with the 3D shape and gonio-photometric properties.

  9. QWIP focal plane array theoretical model of 3-D imaging LADAR system

    OpenAIRE

    El Mashade, Mohamed Bakry; AbouElez, Ahmed Elsayed

    2016-01-01

    The aim of this research is to develop a model for a direct-detection three-dimensional (3-D) imaging LADAR system using a Quantum Well Infrared Photodetector (QWIP) Focal Plane Array (FPA). This model is employed to study how to add 3-D imaging capability to existing conventional thermal imaging systems of the same basic form, which are sensitive to the 3–5 μm (mid-wavelength infrared, MWIR) or 8–12 μm (long-wavelength infrared, LWIR) spectral bands. The integrated signal photoelectrons in cas...

  10. A web-based 3D medical image collaborative processing system with videoconference

    Science.gov (United States)

    Luo, Sanbi; Han, Jun; Huang, Yonggang

    2013-07-01

    Three-dimensional medical images play an irreplaceable role in medical treatment, teaching, and research. However, collaborative processing and visualization of 3D medical images over the Internet is still one of the biggest challenges in supporting these activities. Consequently, we present a new approach for web-based synchronized collaborative processing and visualization of 3D medical images. A web-based videoconference function is also provided to enhance the performance of the whole system. All functions of the system are conveniently available through common web browsers, without any client installation. Finally, this paper evaluates the prototype system using 3D medical data sets, demonstrating the good performance of our system.

  11. Analysis, Modeling and Dynamic Optimization of 3D Time-of-Flight Imaging Systems

    OpenAIRE

    Schmidt, Mirko

    2011-01-01

    The present thesis is concerned with the optimization of 3D Time-of-Flight (ToF) imaging systems. These novel cameras determine range images by actively illuminating a scene and measuring the time until the backscattered light is detected. Depth maps are constructed from multiple raw images. Usually two of such raw images are acquired simultaneously using special correlating sensors. This thesis covers four main contributions: A physical sensor model is presented which enables the analysis a...

  12. Basic theory on surface measurement uncertainty of 3D imaging systems

    Science.gov (United States)

    Beraldin, J. Angelo

    2009-01-01

    Three-dimensional (3D) imaging systems are now widely available, but standards, best practices and comparative data have started to appear only in the last 10 years or so. The need for standards is mainly driven by users and product developers who are concerned with 1) the applicability of a given system to the task at hand (fit-for-purpose), 2) the ability to fairly compare across instruments, 3) instrument warranty issues, and 4) cost savings through 3D imaging. The evaluation and characterization of 3D imaging sensors and algorithms require the definition of metric performance. The performance of a system is usually evaluated using quality parameters such as spatial resolution/uncertainty/accuracy and complexity. These are quality parameters that most people in the field can agree upon. The difficulty arises in defining a common terminology and procedures to quantitatively evaluate them through metrology and standards definitions. This paper reviews the basic principles of 3D imaging systems. Optical triangulation and time-delay (time-of-flight) measurement systems were selected to explain the theoretical and experimental strands adopted in this paper. The intrinsic uncertainty of optical distance measurement techniques, the parameterization of a 3D surface and systematic errors are covered. Experimental results on a number of scanners (Surphaser®, HDS6000®, Callidus CPW 8000®, ShapeGrabber® 102) support the theoretical descriptions.
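    For the optical-triangulation case discussed here, the familiar first-order uncertainty relation makes the range dependence explicit: the range error grows with the square of the distance and shrinks with focal length and baseline. The sketch below evaluates that relation with illustrative numbers, not values from the scanners cited above.

```python
# First-order range-uncertainty sketch for optical triangulation:
#   delta_z ~= z^2 / (f * b) * delta_p
# with z the range, f the focal length, b the baseline and delta_p the
# uncertainty of the detected spot position on the sensor (illustrative values).
def triangulation_range_uncertainty(z_m, f_m, baseline_m, spot_uncert_m):
    return (z_m ** 2) / (f_m * baseline_m) * spot_uncert_m

for z in (0.5, 1.0, 2.0, 4.0):
    dz = triangulation_range_uncertainty(z, f_m=0.025, baseline_m=0.15, spot_uncert_m=1e-6)
    print(f"z = {z:4.1f} m  ->  delta_z ~ {dz * 1e3:.2f} mm")
```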

  13. Structured light 3D tracking system for measuring motions in PET brain imaging

    DEFF Research Database (Denmark)

    Olesen, Oline Vinter; Jørgensen, Morten Rudkjær; Paulsen, Rasmus Reinhold

    2010-01-01

    Patient motion during scanning deteriorates image quality, especially for high resolution PET scanners. A new proposal for a 3D head tracking system for motion correction in high resolution PET brain imaging is set up and demonstrated. A prototype tracking system based on structured light with a DLP projector and a CCD camera is set up on a model of the High Resolution Research Tomograph (HRRT). Methods to reconstruct 3D point clouds of simple surfaces based on phase-shifting interferometry (PSI) are demonstrated. The projector and camera are calibrated using a simple stereo vision procedure...

  14. An Object-Oriented Simulator for 3D Digital Breast Tomosynthesis Imaging System

    Directory of Open Access Journals (Sweden)

    Saeed Seyyedi

    2013-01-01

    Full Text Available Digital breast tomosynthesis (DBT) is an innovative imaging modality that provides 3D reconstructed images of the breast to detect breast cancer. Projections obtained with an X-ray source moving in a limited angle interval are used to reconstruct the 3D image of the breast. Several reconstruction algorithms are available for DBT imaging. The filtered back projection algorithm has traditionally been used to reconstruct images from projections. Iterative reconstruction algorithms such as the algebraic reconstruction technique (ART) were later developed. Recently, compressed sensing based methods have been proposed for the tomosynthesis imaging problem. We have developed an object-oriented simulator for a 3D digital breast tomosynthesis (DBT) imaging system using the C++ programming language. The simulator is capable of implementing different iterative and compressed sensing based reconstruction methods on 3D digital tomosynthesis data sets and phantom models. A user friendly graphical user interface (GUI) helps users to select and run the desired methods on the designed phantom models or real data sets. The simulator has been tested on a phantom study that simulates the breast tomosynthesis imaging problem. Results obtained with various methods, including the algebraic reconstruction technique (ART) and total variation regularized reconstruction technique (ART+TV), are presented. Reconstruction results of the methods are compared both visually and quantitatively by evaluating the performance of the methods using mean structural similarity (MSSIM) values.
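    The ART step referenced above is the classic Kaczmarz row-action update; a minimal sketch follows, with a random system matrix standing in for the tomosynthesis projection geometry and without the additional total-variation (TV) regularization step of ART+TV.

```python
# Minimal ART (Kaczmarz) sketch: cycle over the projection rows a_i and update
# the image estimate so that each measurement b_i is satisfied in turn:
#   x <- x + relax * (b_i - a_i . x) / ||a_i||^2 * a_i
import numpy as np

def art(A, b, n_sweeps=20, relax=0.5):
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
        # ART+TV would apply a total-variation denoising step here after each sweep.
    return x

# Toy underdetermined problem with a random matrix as a stand-in for the geometry.
rng = np.random.default_rng(2)
A = rng.random((80, 100))
x_true = rng.random(100)
x_rec = art(A, A @ x_true)
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))   # relative error
```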

  15. 3D digital image correlation using single color camera pseudo-stereo system

    Science.gov (United States)

    Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang

    2017-10-01

    Three dimensional digital image correlation (3D-DIC) has been widely used by industry to measure the 3D contour and whole-field displacement/strain. In this paper, a novel single color camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip and without reducing the spatial resolution. In addition, similarly to the conventional 3D-DIC system, the center of the two views stands in the center of the CCD chip, which minimizes the image distortion relative to the conventional pseudo-stereo system. The two overlapped views in the CCD are separated by the color domain, and the standard 3D-DIC algorithm can be utilized directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.

  16. 3D object-oriented image analysis in 3D geophysical modelling: Analysing the central part of the East African Rift System

    Science.gov (United States)

    Fadel, I.; van der Meijde, M.; Kerle, N.; Lauritsen, N.

    2015-03-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D interactive modelling environment IGMAS+, and their density contrast values were calculated using an object-based inversion technique to calculate the forward signal of the objects and compare it with the measured satellite gravity. Thus, a new object-based approach was implemented to interpret and extract the 3D subsurface objects from 3D geophysical data. We also introduce a new approach to constrain the interpretation of the satellite gravity measurements that can be applied using any 3D geophysical model.

  17. 3D optical sectioning with a new hyperspectral confocal fluorescence imaging system.

    Energy Technology Data Exchange (ETDEWEB)

    Nieman, Linda T.; Sinclair, Michael B.; Davidson, George S.; Van Benthem, Mark Hilary; Haaland, David Michael; Timlin, Jerilyn Ann; Sasaki, Darryl Yoshio; Bachand, George David; Jones, Howland D. T.

    2007-02-01

    A novel hyperspectral fluorescence microscope for high-resolution 3D optical sectioning of cells and other structures has been designed, constructed, and used to investigate a number of different problems. We have significantly extended new multivariate curve resolution (MCR) data analysis methods to deconvolve the hyperspectral image data and to rapidly extract quantitative 3D concentration distribution maps of all emitting species. The imaging system has many advantages over current confocal imaging systems including simultaneous monitoring of numerous highly overlapped fluorophores, immunity to autofluorescence or impurity fluorescence, enhanced sensitivity, and dramatically improved accuracy, reliability, and dynamic range. Efficient data compression in the spectral dimension has allowed personal computers to perform quantitative analysis of hyperspectral images of large size without loss of image quality. We have also developed and tested software to perform analysis of time resolved hyperspectral images using trilinear multivariate analysis methods. The new imaging system is an enabling technology for numerous applications including (1) 3D composition mapping analysis of multicomponent processes occurring during host-pathogen interactions, (2) monitoring microfluidic processes, (3) imaging of molecular motors and (4) understanding photosynthetic processes in wild type and mutant Synechocystis cyanobacteria.
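    The multivariate curve resolution idea can be illustrated with a generic alternating non-negative least-squares factorization of the hyperspectral data into concentration maps and component spectra; this is a stand-in for the MCR analysis described above, not the authors' code, and the data below are synthetic.

```python
# Generic MCR-style unmixing sketch: alternate non-negative least squares so
# that D (pixels x bands) ~= C (pixels x k) @ S.T (k x bands).
import numpy as np
from scipy.optimize import nnls

def mcr_als(D, n_components, n_iter=30, seed=0):
    S = np.random.default_rng(seed).random((D.shape[1], n_components))   # spectra guess
    for _ in range(n_iter):
        C = np.array([nnls(S, d)[0] for d in D])                         # concentrations per pixel
        S = np.array([nnls(C, D[:, j])[0] for j in range(D.shape[1])])   # spectra per band
    return C, S

# Toy data: two overlapping Gaussian emission spectra mixed in random amounts.
bands = np.arange(100)
true_S = np.stack([np.exp(-(bands - 40) ** 2 / 100.0),
                   np.exp(-(bands - 55) ** 2 / 100.0)], axis=1)
true_C = np.random.default_rng(1).random((500, 2))
D = true_C @ true_S.T
C_hat, S_hat = mcr_als(D, 2)
print(np.linalg.norm(C_hat @ S_hat.T - D) / np.linalg.norm(D))   # small reconstruction error
```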

  18. Precision-guided surgical navigation system using laser guidance and 3D autostereoscopic image overlay.

    Science.gov (United States)

    Liao, Hongen; Ishihara, Hirotaka; Tran, Huy Hoang; Masamune, Ken; Sakuma, Ichiro; Dohi, Takeyoshi

    2010-01-01

    This paper describes a precision-guided surgical navigation system for minimally invasive surgery. The system combines a laser guidance technique with a three-dimensional (3D) autostereoscopic image overlay technique. Images of surgical anatomic structures superimposed onto the patient are created by employing an animated imaging method called integral videography (IV), which can display geometrically accurate 3D autostereoscopic images and reproduce motion parallax without the need for special viewing or tracking devices. To improve the placement accuracy of surgical instruments, we integrated an image overlay system with a laser guidance system for alignment of the surgical instrument and better visualization of patient's internal structure. We fabricated a laser guidance device and mounted it on an IV image overlay device. Experimental evaluations showed that the system could guide a linear surgical instrument toward a target with an average error of 2.48 mm and standard deviation of 1.76 mm. Further improvement to the design of the laser guidance device and the patient-image registration procedure of the IV image overlay will make this system practical; its use would increase surgical accuracy and reduce invasiveness.

  19. Fusing Multiscale Charts into 3D ENC Systems Based on Underwater Topography and Remote Sensing Image

    Directory of Open Access Journals (Sweden)

    Tao Liu

    2015-01-01

    Full Text Available The purpose of this study is to propose an approach to fuse multiscale charts into three-dimensional (3D) electronic navigational chart (ENC) systems based on underwater topography and remote sensing imagery. This is the first time that the fusion of multiscale standard ENCs in a 3D ENC system has been studied. First, a view-dependent visualization technology is presented for determining the display condition of a chart. Second, a map sheet processing method is described for dealing with the map sheet splicing problem. A processing order called the “3D order” is designed to adapt to the characteristics of the chart. A map sheet clipping process is described to deal with the overlap between adjacent map sheets, and our strategy for map sheet splicing is proposed. Third, the rendering method for ENC objects in the 3D ENC system is introduced. Fourth, our picking method for ENC objects is proposed. Finally, we implement the above methods in our system, the automotive intelligent chart (AIC) 3D electronic chart display and information system (ECDIS), and our method handles the fusion problem well.

  20. Single Camera 3-D Coordinate Measuring System Based on Optical Probe Imaging

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    A new vision coordinate measuring system is presented: a single-camera 3-D coordinate measuring system based on optical probe imaging, together with a new idea in vision coordinate measurement. A linear model is deduced that can distinguish the six degrees of freedom of the optical probe to realize coordinate measurement of the object surface. The effects of several factors on the resolution of the system are analyzed. Simulation experiments have shown that the system model is feasible.

  1. In vivo validation of a 3D ultrasound system for imaging the lateral ventricles of neonates

    Science.gov (United States)

    Kishimoto, J.; Fenster, A.; Chen, N.; Lee, D.; de Ribaupierre, S.

    2014-03-01

    Dilated lateral ventricles in neonates can be due to many different causes, such as brain loss or congenital malformation; however, the main cause is hydrocephalus, which is the accumulation of fluid within the ventricular system. Hydrocephalus can raise intracranial pressure, resulting in secondary brain damage, and up to 25% of patients with severely enlarged ventricles have epilepsy in later life. Ventricle enlargement is clinically monitored using 2D US through the fontanels. The sensitivity of 2D US to dilation is poor because it cannot provide accurate measurements of irregular volumes such as the ventricles, so most clinical evaluations are of a qualitative nature. We developed a 3D US system to image the cerebral ventricles of neonates within the confines of incubators that can be easily translated to more open environments. Ventricle volumes can be segmented from these images, giving a quantitative volumetric measurement of ventricle enlargement without moving the patient into an imaging facility. In this paper, we report on in vivo validation studies: 1) comparing 3D US ventricle volumes before and after clinically necessary interventions removing CSF, and 2) comparing 3D US ventricle volumes to those from MRI. Post-intervention ventricle volumes were less than pre-intervention measurements for all patients and all interventions. We found high correlations (R = 0.97) between the difference in ventricle volume and the reported removed CSF, with the slope not significantly different from 1 (p < 0.05). Comparisons between ventricle volumes from 3D US and MR images taken within 4 (±3.8) days of each other did not show a significant difference (p = 0.44) between 3D US and MRI by paired t-test.
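    The two reported comparisons correspond to a standard correlation/regression (volume change against removed CSF) and a paired t-test (3D US against MRI volumes). The sketch below reproduces that analysis pattern with made-up numbers, not the study's data.

```python
# Generic sketch of the reported statistics, with invented example values.
import numpy as np
from scipy import stats

removed_csf = np.array([10.0, 15.0, 22.0, 30.0, 12.0, 18.0])     # mL removed per intervention
volume_change = np.array([9.5, 16.0, 21.0, 31.5, 11.0, 17.5])    # mL, pre minus post 3D US volume

r = np.corrcoef(removed_csf, volume_change)[0, 1]
slope, intercept, *_ = stats.linregress(removed_csf, volume_change)
print(f"R = {r:.2f}, slope = {slope:.2f}")                        # slope expected to be near 1

us_vol = np.array([25.0, 40.0, 33.0, 51.0, 28.0])                 # mL, 3D US
mri_vol = np.array([26.0, 39.0, 34.0, 50.0, 29.0])                # mL, MRI
t_stat, p_value = stats.ttest_rel(us_vol, mri_vol)
print(f"paired t-test p = {p_value:.2f}")                         # large p -> no significant difference
```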

  2. A PC-based 3D imaging system: algorithms, software, and hardware considerations.

    Science.gov (United States)

    Raya, S P; Udupa, J K; Barrett, W A

    1990-01-01

    Three-dimensional (3D) imaging in medicine is known to produce easily and quickly derivable medically relevant information, especially in complex situations. We intend to demonstrate in this paper that, with an appropriate choice of approaches and a proper design of algorithms and software, it is possible to develop a low-cost 3D imaging system that can provide a level of performance sufficient to meet the daily case load in an individual or even group-practice situation. We describe hardware considerations of a generic system and give an example of a specific system we used for our implementation. Given a 3D image as a stack of slices, we generate a packed binary cubic-voxel array by combining segmentation (density thresholding), interpolation, and packing in an efficient way. Since threshold-based segmentation is very often not perfect, object-like structures and noise clutter the binary scene. We utilize an effective mechanism to isolate the object from this clutter by tracking a specified, connected surface of the object. The surface description thus obtained is rendered to create a depiction of the surface on a 2D display screen. Efficient implementation of hidden-part removal and image-space shading and a simple and fast antialiasing technique provide a level of performance which otherwise would not have been possible in a PC environment. We outline our software, emphasizing some design aspects, and present some clinical examples.
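    The segmentation, packing and object-isolation pipeline described above can be sketched with modern NumPy/SciPy primitives: threshold the volume, keep only the connected component containing a seed voxel (a stand-in for the paper's connected-surface tracking), and bit-pack the binary voxels. This illustrates the idea only; it is not the original PC implementation.

```python
# Sketch of the described pipeline: density thresholding, isolation of the
# seeded connected object, and bit-packing of the binary voxel array.
import numpy as np
from scipy import ndimage

def segment_and_pack(volume, threshold, seed):
    binary = volume >= threshold                   # density thresholding
    labels, _ = ndimage.label(binary)              # 3D connected components
    obj = labels == labels[seed]                   # keep only the object containing the seed
    packed = np.packbits(obj, axis=-1)             # packed binary voxel array
    return obj, packed

# Toy volume: a bright ball (the object) plus scattered bright noise voxels.
z, y, x = np.ogrid[:64, :64, :64]
vol = ((x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 15 ** 2).astype(float)
vol += np.random.default_rng(0).random(vol.shape) > 0.999          # noise clutter
obj, packed = segment_and_pack(vol, threshold=0.5, seed=(32, 32, 32))
print(int(obj.sum()), packed.nbytes, "bytes packed vs", obj.nbytes, "bytes unpacked")
```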

  3. 3D Image Acquisition System Based on Shape from Focus Technique

    Directory of Open Access Journals (Sweden)

    Pierre Gouton

    2013-04-01

    This paper describes the design of a 3D image acquisition system dedicated to natural complex scenes composed of randomly distributed objects with spatial discontinuities. In the agronomic sciences, 3D acquisition of natural scenes is difficult due to the complex nature of the scenes. Our system is based on the Shape from Focus technique, initially used in the microscopic domain. We propose to adapt this technique to the macroscopic domain, and we detail the system as well as the image processing used to implement the technique. The Shape from Focus technique is a monocular and passive 3D acquisition method that avoids the occlusion problem affecting multi-camera systems; this problem occurs frequently in natural complex scenes such as agronomic scenes. The depth information is obtained by acting on optical parameters, mainly the depth of field. A focus measure is applied to a 2D image stack previously acquired by the system, and from this measure the depth map of the scene is created.
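
    As an illustration of the shape-from-focus principle described above, the sketch below computes a simple focus measure (local energy of the Laplacian) for every image in a focus stack and takes, per pixel, the index of the best-focused slice as the depth estimate. The stack, window size, and focus measure are generic placeholders, not the paper's specific choices:

        import numpy as np
        from scipy import ndimage

        def depth_from_focus(stack, window=9):
            """stack: array of shape (n_slices, H, W), one image per focus setting.
            Returns a per-pixel depth map given as the index of the sharpest slice."""
            focus = np.empty(stack.shape, dtype=float)
            for i, img in enumerate(stack):
                lap = ndimage.laplace(img.astype(float))             # high-pass response
                focus[i] = ndimage.uniform_filter(lap ** 2, window)  # local focus energy
            return np.argmax(focus, axis=0)                          # best-focused slice per pixel

        # Hypothetical usage: 20 images of 240x320 pixels.
        stack = np.random.rand(20, 240, 320)
        depth_map = depth_from_focus(stack)
        print(depth_map.shape, depth_map.min(), depth_map.max())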

  4. Terahertz imaging system based on bessel beams via 3D printed axicons at 100GHz

    Science.gov (United States)

    Liu, Changming; Wei, Xuli; Zhang, Zhongqi; Wang, Kejia; Yang, Zhenggang; Liu, Jinsong

    2014-11-01

    Terahertz (THz) imaging technology shows great promise for nondestructive testing (NDT), since many optically opaque materials are transparent to THz waves. In this paper, we design and fabricate dielectric axicons to generate zeroth-order Bessel beams by 3D printing technology. We further present an all-electronic THz imaging system using the generated Bessel beams at 100 GHz. Resolution targets made of printed circuit board are imaged, and the results clearly show the extended depth of focus of the Bessel beam, indicating the promise of Bessel beams for THz NDT.

  5. Scalable, high-performance 3D imaging software platform: system architecture and application to virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth, due to the massive amount of data that needs to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingly available and affordable commodity computing systems: multi-core, cluster, and cloud computing systems. To achieve scalable, high-performance computing, our platform (1) employs size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D image processing algorithms; (2) supports task scheduling for efficient load distribution and balancing; and (3) consists of layered parallel software libraries that allow a wide range of medical applications to share the same functionalities. We evaluated the performance of our platform by applying it to an electronic cleansing system for virtual colonoscopy, with initial experimental results showing a 10-fold performance improvement on an 8-core workstation over the original sequential implementation of the system.
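
    A toy sketch of the block-volume idea, assuming nothing about the platform's actual API: the volume is split into independently processed sub-blocks that a pool of workers handles in parallel and the results are stitched back together. The block depth and the per-block filter are placeholders, and block-boundary halos that a real implementation would manage are ignored:

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor
        from scipy import ndimage

        def process_block(args):
            index, block = args
            return index, ndimage.median_filter(block, size=3)   # placeholder per-block work

        def process_volume(volume, block_depth=32, workers=4):
            """Split a 3D volume into slabs along z, process them in parallel,
            and stitch the results back together."""
            slabs = [(i, volume[i:i + block_depth])
                     for i in range(0, volume.shape[0], block_depth)]
            out = np.empty_like(volume)
            with ProcessPoolExecutor(max_workers=workers) as pool:
                for i, result in pool.map(process_block, slabs):
                    out[i:i + result.shape[0]] = result
            return out

        if __name__ == "__main__":
            vol = np.random.rand(128, 256, 256).astype(np.float32)
            filtered = process_volume(vol)
            print(filtered.shape)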

  6. An Integrated System for Feature Evaluation of 3D Images of a Tissue Specimen

    Directory of Open Access Journals (Sweden)

    P.S. Umesh Adiga

    2002-01-01

    In this article we propose an integrated system for the measurement of important features from 3D tissue images. We propose a segmentation technique in which several methods are combined to achieve a good degree of automation. Important histological and cytological three-dimensional features, and strategies to measure them, are described. Figures can be viewed in colour at http://www.esacp.org/acp/2002/24‐23/adiga.htm

  7. The EOS 2D/3D X-ray imaging system

    OpenAIRE

    Faria, Rita; McKenna, Claire; Wade, Rosalind Fay; Yang, Huiqin; Woolacott, Nerys; Sculpher, Mark

    2013-01-01

    OBJECTIVES: To evaluate the cost-effectiveness of the EOS® 2D/3D X-ray imaging system compared with standard X-ray for the diagnosis and monitoring of orthopaedic conditions. MATERIALS AND METHODS: A decision analytic model was developed to quantify the long-term costs and health outcomes, expressed as quality-adjusted life years (QALYs) from the UK health service perspective. Input parameters were obtained from medical literature, previously developed cancer models and expert advice. Thresho...

  8. [A 3D-ultrasound imaging system based on back-end scanning mode].

    Science.gov (United States)

    Qi, Jian; Chen, Yimin; Ding, Mingyue; Wei, Chiming

    2012-07-01

    A new scanning mode is proposed in which the front end of the probe is fixed while the back end performs a fan-shaped scanning movement. The new scanning mode successfully avoids the drawbacks caused by the ribs. Based on this scanning mode, a 3D ultrasound imaging system was built to acquire 2D data of a fetus phantom and of livers and kidneys, demonstrating the effectiveness of the new scanning mode.

  9. 3-D ultrasonic strain imaging based on a linear scanning system.

    Science.gov (United States)

    Huang, Qinghua; Xie, Bo; Ye, Pengfei; Chen, Zhaohong

    2015-02-01

    This paper introduces a 3-D strain imaging method based on a freehand linear scanning mode. We designed a linear sliding track with a position sensor and a height-adjustable holder to constrain the movement of an ultrasound probe in a freehand manner. When the probe is moved along the sliding track, the corresponding positional measures for the probe are transmitted in real time via a wireless communication module based on Bluetooth. In a single examination, the probe is scanned in two sweeps, in which the height of the probe is adjusted by the holder to collect the pre- and post-compression radio-frequency echoes, respectively. To generate a 3-D strain image, a cubic volume in which the voxels denote relative tissue strains is defined according to the range of the two sweeps. With respect to the post-compression frames, several slices in the volume are determined and the pre-compression frames are re-sampled to precisely correspond to the post-compression frames. Thereby, a strain estimation method based on minimizing a cost function using dynamic programming is used to obtain the 2-D strain image for each pair of frames from the re-sampled pre-compression sweep and the post-compression sweep. A software system was developed for volume reconstruction, visualization, and measurement of the 3-D strain images. The experimental results show that high-quality 3-D strain images of phantom and human tissues can be generated by the proposed method, indicating that the proposed system can be applied in real clinical applications (e.g., musculoskeletal assessments).
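
    The paper's estimator minimizes a cost function with dynamic programming; as a much simpler stand-in that conveys the idea of strain imaging, the sketch below estimates local axial displacement between a pre- and post-compression RF line by windowed cross-correlation and takes the axial gradient of displacement as strain. Window sizes, search range, and data are hypothetical:

        import numpy as np

        def axial_strain(pre, post, win=64, step=32, search=10):
            """Estimate 1-D axial displacement by normalized cross-correlation of
            windowed RF segments, then differentiate displacement to obtain strain."""
            shifts = []
            for start in range(search, len(pre) - win - search, step):
                ref = pre[start:start + win]
                best, best_cc = 0, -np.inf
                for d in range(-search, search + 1):
                    seg = post[start + d:start + d + win]
                    cc = np.dot(ref - ref.mean(), seg - seg.mean()) / (
                        np.linalg.norm(ref - ref.mean()) * np.linalg.norm(seg - seg.mean()) + 1e-12)
                    if cc > best_cc:
                        best_cc, best = cc, d
                shifts.append(best)
            disp = np.asarray(shifts, dtype=float)
            return np.gradient(disp) / step          # strain = d(displacement)/d(depth)

        # Hypothetical RF lines: post is a slightly compressed copy of pre.
        depth = np.arange(4096)
        pre = np.random.randn(4096)
        post = np.interp(depth * 1.01, depth, pre)   # ~1% uniform compression
        print(axial_strain(pre, post)[:5])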

  10. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    Science.gov (United States)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

    This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real time using 3D digital image correlation (DIC), with two examples of medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the integer-pixel search computations. Experiments were carried out and the results indicated that the new method improved the computational efficiency by about 4-10 times in comparison with the traditional DIC method. The system was aimed at orthognathic surgery navigation, in order to track the maxilla segment after LeFort I osteotomy. Experiments showed that the noise for a static point was at the level of 10⁻³ mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin surface shape evaluation of a hand during finger stretching exercises, which indicated great potential for tracking muscle and skin movements.
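
    Once the markers have been located in 3D, the 6-degree-of-freedom pose of a rigid object can be obtained from corresponding point sets. A standard least-squares (Kabsch/SVD) sketch is shown below; it is a generic illustration, not the authors' specific implementation, and the marker coordinates are made up:

        import numpy as np

        def rigid_pose(ref_pts, cur_pts):
            """Least-squares rotation R and translation t such that
            cur ≈ R @ ref + t, for two (N, 3) arrays of marker positions."""
            ref_c = ref_pts - ref_pts.mean(axis=0)
            cur_c = cur_pts - cur_pts.mean(axis=0)
            H = ref_c.T @ cur_c
            U, _, Vt = np.linalg.svd(H)
            D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
            R = Vt.T @ D @ U.T
            t = cur_pts.mean(axis=0) - R @ ref_pts.mean(axis=0)
            return R, t

        # Hypothetical check with a known rotation about z and a translation.
        theta = np.deg2rad(10)
        R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                           [np.sin(theta),  np.cos(theta), 0],
                           [0, 0, 1]])
        ref = np.random.rand(6, 3)
        cur = ref @ R_true.T + np.array([1.0, -2.0, 0.5])
        R, t = rigid_pose(ref, cur)
        print(np.allclose(R, R_true), np.round(t, 3))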

  11. A small animal image guided irradiation system study using 3D dosimeters

    Science.gov (United States)

    Qian, Xin; Admovics, John; Wuu, Cheng-Shie

    2015-01-01

    In a high-resolution image-guided small animal irradiation platform, a cone beam computed tomography (CBCT) unit is integrated with an irradiation unit for precise targeting. Precise quality assurance is essential for both the imaging and irradiation components. Conventional commissioning techniques with films face major challenges due to alignment uncertainty and labour-intensive film preparation and scanning. In addition, due to the novel design of this platform, the mouse stage rotation for CBCT imaging is perpendicular to the gantry rotation for irradiation. Because these two rotations are associated with different mechanical systems, a discrepancy between the rotation isocenters exists. In order to deliver x-rays precisely, it is essential to verify the coincidence of the imaging and irradiation isocenters. A 3D PRESAGE dosimeter provides an excellent tool for checking dosimetry and verifying the coincidence of irradiation and imaging coordinates in one system. Dosimetric measurements were performed to obtain beam profiles and percent depth dose (PDD). Isocentricity and coincidence of the mouse stage and gantry rotations were evaluated with star shots acquired using PRESAGE dosimeters. A single PRESAGE dosimeter can provide 3-D information on both geometric and dosimetric uncertainty, which is crucial for translational studies.

  12. Development of a Wireless and Near Real-Time 3D Ultrasound Strain Imaging System.

    Science.gov (United States)

    Chen, Zhaohong; Chen, Yongdong; Huang, Qinghua

    2016-04-01

    Ultrasound elastography is an important medical imaging tool for characterization of lesions. In this paper, we present a wireless and near real-time 3D ultrasound strain imaging system. It uses a 3D translating device to control a commercial linear ultrasound transducer to collect pre-compression and post-compression radio-frequency (RF) echo signal frames. The RF frames are wirelessly transferred to a high-performance server via a local area network (LAN). A dynamic programming strain estimation algorithm is implemented with the compute unified device architecture (CUDA) on the graphic processing unit (GPU) in the server to calculate the strain image after receiving a pre-compression RF frame and a post-compression RF frame at the same position. Each strain image is inserted into a strain volume which can be rendered in near real-time. We take full advantage of the translating device to precisely control the probe movement and compression. The GPU-based parallel computing techniques are designed to reduce the computation time. Phantom and in vivo experimental results demonstrate that our system can generate strain volumes with good quality and display an incrementally reconstructed volume image in near real-time.

  13. Image-based 3D scene analysis for navigation of autonomous airborne systems

    Science.gov (United States)

    Jaeger, Klaus; Bers, Karl-Heinz

    2001-10-01

    In this paper we describe a method for automatic determination of the sensor pose (position and orientation) relative to a 3D landmark or scene model. The method is based on geometrical matching of 2D image structures with projected elements of the associated 3D model. For structural image analysis and scene interpretation, a blackboard-based production system is used, resulting in a symbolic description of the image data. Knowledge of the approximate sensor pose, measured for example by an IMU or GPS, enables estimation of the expected model projection used for solving the correspondence problem between image structures and model elements. These correspondences are the prerequisite for pose computation, which is carried out by nonlinear numerical optimization algorithms. We demonstrate the efficiency of the proposed method with navigation updates while approaching a bridge and flying over an urban area, where the data were taken with airborne infrared sensors in a high oblique view. In doing so, we simulated image-based navigation for target engagement and midcourse guidance suited to the concepts of future autonomous systems such as missiles and drones.
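
    The pose-from-correspondences step described above (matching 2D image structures to 3D model elements, then solving for sensor position and orientation by nonlinear optimization) is, in its generic form, the Perspective-n-Point problem. A hedged sketch using OpenCV's solvePnP is shown below; the landmark coordinates, camera intrinsics, and ground-truth pose are entirely hypothetical and serve only to make the example self-checking:

        import numpy as np
        import cv2  # OpenCV

        # Hypothetical 3D landmark points of a scene model (model frame, metres).
        object_pts = np.array([[0, 0, 0], [10, 0, 0], [10, 5, 0], [0, 5, 0],
                               [5, 2.5, 3], [2, 1, 6]], dtype=np.float64)
        # Hypothetical pinhole intrinsics and no lens distortion.
        K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float64)
        dist = np.zeros(5)

        # Simulate the "approximate pose" situation: project the model with a known
        # ground-truth pose to obtain the 2D measurements, then recover the pose.
        rvec_true = np.array([[0.1], [-0.2], [0.05]])
        tvec_true = np.array([[1.0], [0.5], [30.0]])
        image_pts, _ = cv2.projectPoints(object_pts, rvec_true, tvec_true, K, dist)

        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                                      flags=cv2.SOLVEPNP_ITERATIVE)
        R, _ = cv2.Rodrigues(rvec)   # sensor orientation as a rotation matrix
        print(ok, np.allclose(tvec, tvec_true, atol=1e-4))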

  14. Noise analysis for near field 3-D FM-CW radar imaging systems

    Energy Technology Data Exchange (ETDEWEB)

    Sheen, David M.

    2015-06-19

    Near field radar imaging systems are used for several applications including concealed weapon detection in airports and other high-security venues. Despite the near-field operation, phase noise and thermal noise can limit the performance in several ways including reduction in system sensitivity and reduction of image dynamic range. In this paper, the effects of thermal noise, phase noise, and processing gain are analyzed in the context of a near field 3-D FM-CW imaging radar as might be used for concealed weapon detection. In addition to traditional frequency domain analysis, a time-domain simulation is employed to graphically demonstrate the effect of these noise sources on a fast-chirping FM-CW system.
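
    A minimal time-domain sketch of the FM-CW principle under additive noise, assuming a single point target: the received chirp is a delayed copy of the transmitted one, white noise stands in for the thermal floor, and the dechirped signal shows the target as a beat-frequency peak from which range is recovered. All parameters are illustrative and the transient before the echo arrives is ignored:

        import numpy as np

        fs, T = 2e6, 1e-3                  # sample rate (Hz) and chirp duration (s)
        f0, B = 10e9, 2e9                  # start frequency and swept bandwidth (Hz)
        c, R = 3e8, 0.5                    # speed of light, target range (m)
        t = np.arange(0, T, 1 / fs)
        k = B / T                          # chirp rate (Hz/s)
        tau = 2 * R / c                    # round-trip delay

        tx_phase = 2 * np.pi * (f0 * t + 0.5 * k * t**2)
        rx_phase = 2 * np.pi * (f0 * (t - tau) + 0.5 * k * (t - tau)**2)
        noise = 0.5 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))  # thermal noise
        beat = np.exp(1j * (tx_phase - rx_phase)) + noise    # ideal dechirped signal + noise

        spec = np.abs(np.fft.rfft(beat.real * np.hanning(t.size)))
        f_beat = np.fft.rfftfreq(t.size, 1 / fs)
        est_range = f_beat[np.argmax(spec[1:]) + 1] * c / (2 * k)   # range from beat frequency
        print(f"estimated range: {est_range:.3f} m (true {R} m)")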

  15. 3D-printed eagle eye: Compound microlens system for foveated imaging

    Science.gov (United States)

    Thiele, Simon; Arzenbacher, Kathrin; Gissibl, Timo; Giessen, Harald; Herkommer, Alois M.

    2017-01-01

    We present a highly miniaturized camera, mimicking the natural vision of predators, made by 3D-printing different multilens objectives directly onto a complementary metal-oxide semiconductor (CMOS) image sensor. Our system combines four printed doublet lenses with different focal lengths (equivalent to f = 31 to 123 mm for a 35-mm film) in a 2 × 2 arrangement to achieve a full field of view of 70° with an increasing angular resolution of up to 2 cycles/deg in the center of the image. The footprint of the optics on the chip is below 300 μm × 300 μm. Printing the objectives directly onto the chip allows for fast design iterations and can lead to a plethora of different miniaturized multiaperture imaging systems with applications in fields such as endoscopy, optical metrology, optical sensing, surveillance drones, or security. PMID: 28246646

  16. Novel metrics and methodology for the characterisation of 3D imaging systems

    Science.gov (United States)

    Hodgson, John R.; Kinnell, Peter; Justham, Laura; Lohse, Niels; Jackson, Michael R.

    2017-04-01

    The modelling, benchmarking and selection process for non-contact 3D imaging systems relies on the ability to characterise their performance. Characterisation methods that require optically compliant artefacts, such as matt white spheres or planes, fail to reveal the performance limitations of a 3D sensor as would be encountered when measuring a real-world object with a problematic surface finish. This paper reports a method of evaluating the performance of 3D imaging systems on surfaces of arbitrary isotropic surface finish, position and orientation. The method involves capturing point clouds from a set of samples in a range of surface orientations and distances from the sensor. Point clouds are processed to create a single performance chart per surface finish, which shows both whether a point is likely to be recovered and the expected point noise as a function of surface orientation and distance from the sensor. In this paper, the method is demonstrated using a low-cost pan-tilt table and an active stereo 3D camera. Its performance is characterised by the fraction and quality of recovered data points on aluminium isotropic surfaces ranging in roughness average (Ra) from 0.09 to 0.46 μm, at angles of up to 55° relative to the sensor and over distances from 400 to 800 mm from the scanner. Results from a matt white surface, similar to those used in previous characterisation methods, contrast drastically with results from even the dullest aluminium sample tested, demonstrating the need to characterise sensors by their limitations, not just best-case performance.

  17. System design for 3D wound imaging using low-cost mobile devices

    Science.gov (United States)

    Sirazitdinova, Ekaterina; Deserno, Thomas M.

    2017-03-01

    The state-of-the-art method of wound assessment is a manual, imprecise and time-consuming procedure. Performed by clinicians, it has limited reproducibility and accuracy, large time consumption and high costs. Novel technologies such as laser scanning microscopy, multi-photon microscopy, optical coherence tomography and hyper-spectral imaging, as well as devices relying on structured light sensors, make accurate wound assessment possible. However, such methods have limitations due to high costs and may lack portability and availability. In this paper, we present a low-cost wound assessment system and architecture for fast and accurate cutaneous wound assessment using inexpensive consumer smartphone devices. Computer vision techniques are applied either on the device or on the server to reconstruct wounds in 3D as dense models, which are generated from images taken with the built-in single camera of a smartphone. The system architecture includes imaging (smartphone), processing (smartphone or PACS) and storage (PACS) devices. It supports tracking over time by alignment of 3D models, color correction using a reference color card placed in the scene, and automatic segmentation of wound regions. Using our system, we are able to detect and document quantitative characteristics of chronic wounds, including size, depth, volume and rate of healing, as well as qualitative characteristics such as color, presence of necrosis and type of involved tissue.

  18. Automatic nipple detection on 3D images of an automated breast ultrasound system (ABUS)

    Science.gov (United States)

    Javanshir Moghaddam, Mandana; Tan, Tao; Karssemeijer, Nico; Platel, Bram

    2014-03-01

    Recent studies have demonstrated that applying automated breast ultrasound in addition to mammography in women with dense breasts can lead to additional detection of small, early-stage breast cancers which are occult in the corresponding mammograms. In this paper, we propose a fully automatic method for detecting the nipple location in 3D ultrasound breast images acquired from automated breast ultrasound systems. The nipple location is a valuable landmark for reporting the position of possible abnormalities in a breast or for guiding image registration. To detect the nipple location, all images were normalized. Subsequently, features were extracted in a multi-scale approach, and classification experiments were performed using a GentleBoost classifier to identify the nipple location. The method was applied to a dataset of 100 patients with 294 different 3D ultrasound views from Siemens and U-Systems acquisition systems. Our database is a representative sample of cases obtained in clinical practice by four medical centers. The automatic method could accurately locate the nipple in 90% of AP (anterior-posterior) views and in 79% of the other views.

  19. A framework for human spine imaging using a freehand 3D ultrasound system

    NARCIS (Netherlands)

    Purnama, Ketut E.; Wilkinson, Michael. H. F.; Veldhuizen, Albert G.; van Ooijen, Peter. M. A.; Lubbers, Jaap; Burgerhof, Johannes G. M.; Sardjono, Tri A.; Verkerke, Gijbertus J.

    2010-01-01

    The use of 3D ultrasound imaging to follow the progression of scoliosis, i.e., a 3D deformation of the spine, is described. Unlike other current examination modalities, in particular based on X-ray, its non-detrimental effect enables it to be used frequently to follow the progression of scoliosis...

  1. 3-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon

    For the last decade, the field of ultrasonic vector flow imaging has received increasing attention, as the technique offers a variety of new applications for screening and diagnostics of cardiovascular pathologies. The main purpose of this PhD project was therefore to advance the field of 3-D ultrasonic vector flow estimation and bring it a step closer to a clinical application. A method for high frame rate 3-D vector flow estimation in a plane, using the transverse oscillation method combined with a 1024-channel 2-D matrix array, is presented. The proposed method is validated both through phantom studies and in vivo … whether this significant reduction in the element count can still provide precise and robust 3-D vector flow estimates in a plane. The study concludes that the RC array is capable of estimating precise 3-D vector flow both in a plane and in a volume, despite the low channel count. However, some inherent new challenges…

  2. Quasi 3D ECE imaging system for study of MHD instabilities in KSTAR

    Energy Technology Data Exchange (ETDEWEB)

    Yun, G. S., E-mail: gunsu@postech.ac.kr; Choi, M. J.; Lee, J.; Kim, M.; Leem, J.; Nam, Y.; Choe, G. H. [Department of Physics, Pohang University of Science and Technology, Pohang 790-784 (Korea, Republic of); Lee, W.; Park, H. K. [Ulsan National Institute of Science and Technology, Ulsan 689-798 (Korea, Republic of); Park, H.; Woo, D. S.; Kim, K. W. [School of Electrical Engineering, Kyungpook National University, Daegu 702-701 (Korea, Republic of); Domier, C. W.; Luhmann, N. C. [Department of Electrical and Computer Engineering, University of California, Davis, California 95616 (United States); Ito, N. [KASTEC, Kyushu University, Kasuga-shi, Fukuoka 812-8581 (Japan); Mase, A. [Ube National College of Technology, Ube-shi, Yamaguchi 755-8555 (Japan); Lee, S. G. [National Fusion Research Institute, Daejeon 305-333 (Korea, Republic of)

    2014-11-15

    A second electron cyclotron emission imaging (ECEI) system has been installed on the KSTAR tokamak, toroidally separated by 1/16th of the torus from the first ECEI system. For the first time, the dynamical evolutions of MHD instabilities from the plasma core to the edge have been visualized in quasi-3D for a wide range of the KSTAR operation (B₀ = 1.7-3.5 T). This flexible diagnostic capability has been realized by substantial improvements in large-aperture quasi-optical microwave components, including the development of broad-band polarization rotators for imaging of the fundamental ordinary ECE as well as the usual 2nd harmonic extraordinary ECE.

  3. Depth map resolution enhancement for 2D/3D imaging system via compressive sensing

    Science.gov (United States)

    Han, Juanjuan; Loffeld, Otmar; Hartmann, Klaus

    2011-08-01

    This paper introduces a novel approach for post-processing of depth maps, which enhances the depth map resolution in order to achieve visually pleasing 3D models from a new monocular 2D/3D imaging system consisting of a Photonic Mixer Device (PMD) range camera and a standard color camera. The proposed method adopts the Compressive Sensing (CS) inversion framework. The low-resolution depth map is considered as the result of applying blurring and down-sampling to the high-resolution one. Based on the underlying assumption that the high-resolution depth map is compressible in the frequency domain, together with recent theoretical work on CS, the high-resolution version can be estimated and reconstructed by solving a non-linear optimization problem. The improved depth map reconstruction therefore helps to build an improved 3D model of the scene. Experimental results on real data are presented. The proposed scheme also opens new possibilities to apply CS to a multitude of potential applications in multimodal data analysis and processing.
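
    The reconstruction step described above can be illustrated with a small, self-contained sketch: a low-resolution depth signal is modeled as blur-plus-downsampling of a high-resolution one that is sparse in a DCT basis, and the high-resolution version is recovered with ISTA (iterative soft thresholding). Sizes, operators, and the regularization weight are all hypothetical, and a real 2D depth map would use the same idea with 2D operators:

        import numpy as np
        from scipy.fft import idct

        n, m = 128, 32                        # high- and low-resolution lengths
        Psi = idct(np.eye(n), norm="ortho", axis=0)      # DCT synthesis basis (x = Psi @ c)

        # Forward model A = downsample(blur(.)): a small moving-average blur then 4x decimation.
        blur = np.zeros((n, n))
        for i in range(n):
            blur[i, max(0, i - 2):i + 3] = 1.0 / len(range(max(0, i - 2), min(n, i + 3)))
        A = blur[::n // m]                    # (m, n) measurement operator
        M = A @ Psi                           # measurements act on DCT coefficients

        # Hypothetical ground truth: a piecewise-smooth depth profile, and its measurements.
        x_true = np.concatenate([np.full(64, 1.0), np.linspace(1.0, 2.0, 64)])
        y = A @ x_true + 0.01 * np.random.randn(m)

        # ISTA: c <- soft(c - (1/L) M^T (M c - y), lam / L)
        L = np.linalg.norm(M, 2) ** 2
        lam, c = 0.01, np.zeros(n)
        for _ in range(500):
            grad = M.T @ (M @ c - y)
            z = c - grad / L
            c = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

        x_hat = Psi @ c                       # reconstructed high-resolution depth
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))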

  4. 3-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon

    …studies and in vivo. Phantom measurements are compared with their corresponding reference values, whereas the in vivo measurement is validated against the current gold standard for non-invasive blood velocity estimates, based on magnetic resonance imaging (MRI). The study concludes that a high precision … whether this significant reduction in the element count can still provide precise and robust 3-D vector flow estimates in a plane. The study concludes that the RC array is capable of estimating precise 3-D vector flow both in a plane and in a volume, despite the low channel count. However, some inherent new challenges … For the last decade, the field of ultrasonic vector flow imaging has received increasing attention, as the technique offers a variety of new applications for screening and diagnostics of cardiovascular pathologies. The main purpose of this PhD project was therefore to advance the field of 3-D…

  5. CT Image Sequence Analysis for Object Recognition - A Rule-Based 3-D Computer Vision System

    Science.gov (United States)

    Dongping Zhu; Richard W. Conners; Daniel L. Schmoldt; Philip A. Araman

    1991-01-01

    Research is now underway to create a vision system for hardwood log inspection using a knowledge-based approach. In this paper, we present a rule-based, 3-D vision system for locating and identifying wood defects using topological, geometric, and statistical attributes. A number of different features can be derived from the 3-D input scenes. These features and evidence...

  6. 3D biometrics systems and applications

    CERN Document Server

    Zhang, David

    2013-01-01

    Includes discussions on popular 3D imaging technologies, combines them with biometric applications, and then presents real 3D biometric systems. Introduces many efficient 3D feature extraction, matching, and fusion algorithms. The techniques presented are supported by experimental results using various 3D biometric classifications.

  7. 3D vector flow imaging

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes

    The main purpose of this PhD project is to develop an ultrasonic method for 3D vector flow imaging. The motivation is to advance the field of velocity estimation in ultrasound, which plays an important role in the clinic. The velocity of blood has components in all three spatial dimensions, yet conventional methods can estimate only the axial component. Several approaches for 3D vector velocity estimation have been suggested, but none of these methods have so far produced convincing in vivo results, nor have they been adopted by commercial manufacturers. The basis for this project is the Transverse Oscillation (TO) method … on the TO fields are suggested. They can be used to optimize the TO method. In the third part, a TO method for 3D vector velocity estimation is proposed. It employs a 2D phased array transducer and decouples the velocity estimation into three velocity components, which are estimated simultaneously based on 5...

  8. MRI of the cartilages of the knee, 3-D imaging with a rapid computer system

    Energy Technology Data Exchange (ETDEWEB)

    Adam, G.; Bohndorf, K.; Prescher, A.; Drobnitzky, M.; Guenther, R.W.

    1989-01-01

    2-D spin-echo sequences were compared with 3-D gradient-echo sequences using normal and cadaver knee joints. The important advantages of 3-D-imaging are: sections of less than 1 mm, reconstruction in any required plane, which can be related to the complex anatomy of the knee joint, and very good distinction between intra-articular fluid, fibrocartilage and hyaline cartilage. (orig./GDG).

  9. Single Lens Dual-Aperture 3D Imaging System: Color Modeling

    Science.gov (United States)

    Bae, Sam Y.; Korniski, Ronald; Ream, Allen; Fritz, Eric; Shearn, Michael

    2012-01-01

    In an effort to miniaturize a 3D imaging system, we created two viewpoints in a single-objective-lens camera. This was accomplished by placing a pair of Complementary Multi-band Bandpass Filters (CMBFs) in the aperture area. Two key characteristics of the CMBFs are that the passbands are staggered, so only one viewpoint is opened at a time when a light band matched to that passband is illuminated, and that the passbands are positioned throughout the visible spectrum, so each viewpoint can render color by taking RGB spectral images. Each viewpoint takes a different spectral image from the other viewpoint, hence yielding a different color image relative to the other. This color mismatch between the two viewpoints could lead to color rivalry, where the human visual system fails to resolve two different colors. The mismatch becomes smaller as the number of passbands in a CMBF increases. (However, the number of passbands is constrained by cost and fabrication technique.) In this paper, a simulation predicting the color mismatch is reported.

  10. An efficient topology adaptation system for parametric active contour segmentation of 3D images

    Science.gov (United States)

    Abhau, Jochen; Scherzer, Otmar

    2008-03-01

    Active contour models have already been used successfully for segmentation of organs from medical images in 3D. In implicit models, the contour is given as the isosurface of a scalar function, and therefore topology adaptations are handled naturally during a contour evolution. Nevertheless, explicit or parametric models are often preferred, since user interaction and special geometric constraints are usually easier to incorporate. Although many researchers have studied topology adaptation algorithms in explicit mesh evolutions, no stable algorithm is known for interactive applications. In this paper, we present a topology adaptation system which consists of two novel ingredients: a spatial hashing technique is used to detect self-colliding triangles of the mesh, with an expected running time that is linear in the number of mesh vertices, and, for the topology change procedure, we have developed formulas based on homology theory. During a contour evolution, we just have to choose between a few possible mesh retriangulations by local triangle-triangle intersection tests. Our algorithm has several advantages compared to existing ones. Since the new algorithm does not require any global mesh reparametrizations, it is very efficient. Since the topology adaptation system requires neither constant sampling density of the mesh vertices nor especially smooth meshes, mesh evolution steps can be performed in a stable way with a rather coarse mesh. We apply our algorithm to 3D ultrasonic data, showing that accurate segmentation is obtained within a few seconds.
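
    The spatial-hashing step can be sketched compactly: every triangle's bounding box is hashed into uniform grid cells, and only triangles sharing a cell are reported as candidate pairs for the exact triangle-triangle intersection test. The cell size and the tiny mesh below are placeholders, and triangles sharing a vertex are excluded since they are mesh neighbours rather than self-collisions:

        import numpy as np
        from collections import defaultdict
        from itertools import combinations

        def candidate_pairs(vertices, triangles, cell=0.05):
            """Hash each triangle's axis-aligned bounding box into grid cells and
            return the set of triangle index pairs that share at least one cell."""
            grid = defaultdict(list)
            for t, tri in enumerate(triangles):
                pts = vertices[tri]
                lo = np.floor(pts.min(axis=0) / cell).astype(int)
                hi = np.floor(pts.max(axis=0) / cell).astype(int)
                for i in range(lo[0], hi[0] + 1):
                    for j in range(lo[1], hi[1] + 1):
                        for k in range(lo[2], hi[2] + 1):
                            grid[(i, j, k)].append(t)
            pairs = set()
            for bucket in grid.values():
                pairs.update(combinations(sorted(bucket), 2))
            # Neighbouring triangles that share a vertex are not self-collisions.
            return {(a, b) for a, b in pairs
                    if not set(triangles[a]) & set(triangles[b])}

        # Hypothetical tiny mesh: two nearly overlapping triangles and one far away.
        verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
                          [0.2, 0.2, 0.01], [1.2, 0.2, 0.01], [0.2, 1.2, 0.01],
                          [5, 5, 5], [6, 5, 5], [5, 6, 5]], dtype=float)
        tris = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
        print(candidate_pairs(verts, tris))   # expected: {(0, 1)}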

  11. Toroidal and poloidal soft X-ray imaging system on the D3-D tokamak

    Science.gov (United States)

    Snider, R.; Evanko, R.; Haskovec, J.

    1988-02-01

    A toroidal soft X-ray imaging system is being added to the currently installed poloidal soft X-ray system on the D3-D tokamak. The poloidal array is used to determine the poloidal mode structure and location of internal helical MHD perturbations in the plasma. The new array will add toroidal mode identification capability. The four detector arrays are toroidally spaced in a manner which allows identification of toroidal mode numbers of up to 24. Beryllium vacuum windows separate the detectors from the tokamak vacuum and also serve as low energy filters. The separate detector vacuum chambers can be filled with a gas which changes the low energy cutoff of the system. By proper selection of the gas and pressure the low energy cutoff can be chosen over the entire range of the detector sensitivity (500 eV to 1200 eV). This capability can be used to produce crude X-ray spectra for the entire imaging system or for gain control.

  12. Scalable, High-performance 3D Imaging Software Platform: System Architecture and Application to Virtual Colonoscopy

    OpenAIRE

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli; Brett, Bevin

    2012-01-01

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time, which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. In this work, we have developed a software platform that is designed to support high-performance 3D medical image processing for a wide range of applications using increasingl...

  13. Ultra-compact, High Resolution, LADAR system for 3D Imaging Project

    Data.gov (United States)

    National Aeronautics and Space Administration — SiWave proposes to develop an innovative, ultra-compact, high resolution, long range LADAR system to produce a 3D map of the exterior of any object in space such as...

  14. Feasibility and limitations of an automated 2D-3D rigid image registration system for complex endovascular aortic procedures.

    Science.gov (United States)

    Carrell, Tom W G; Modarai, Bijan; Brown, James R I; Penney, Graeme P

    2010-08-01

    To examine the feasibility of an automated 2-dimensional (2D) to 3-dimensional (3D) image registration system to simplify the navigational challenges faced in complex endovascular aortic procedures. An automated 2D-3D image registration system was used to overlay pre-acquired 3D computed tomography images onto fluoroscopy images taken during endovascular aneurysm repair. Errors between the 3D overlay and digital subtraction angiograms were measured and correlated with aortic neck angulation (r = 0.75). Aortas with a maximum neck angulation of 30 degrees or more had a mean error of 6.2+/-2.5 mm (p<0.0001). The major source of registration errors is aortic deformation caused by the presence of the introducer and endovascular graft. Further work is required if this technology is to be routinely applied to severely angulated aortic anatomy.

  15. 3D imaging in forensic odontology.

    Science.gov (United States)

    Evans, Sam; Jones, Carl; Plassmann, Peter

    2010-06-16

    This paper describes the investigation of a new 3D capture method for acquiring, and subsequently forensically analysing, bite mark injuries on human skin. When documenting bite marks with standard 2D cameras, errors in photographic technique can occur if best practice is not followed. Subsequent forensic analysis of the mark is problematic when a 3D structure is recorded in a 2D space. Although strict guidelines (BAFO) exist, these are time-consuming to follow and, due to their complexity, may produce errors. A 3D image capture and processing system might avoid the problems resulting from the 2D reduction process, simplifying the guidelines and reducing errors. Proposed solution: a series of experiments is described in this paper to demonstrate that a 3D system has the potential to produce suitable results. The experiments tested the precision and accuracy of the traditional 2D and 3D methods. A 3D image capture device minimises the amount of angular distortion; therefore such a system has the potential to create more robust forensic evidence for use in courts. A first set of experiments tested and demonstrated which method of forensic analysis creates the least amount of intra-operator error. A second set tested and demonstrated which method of image capture creates the least amount of inter-operator error and visual distortion. In a third set, the effects of angular distortion on 2D and 3D methods of image capture were evaluated.

  16. Transition from Paris dosimetry system to 3D image-guided planning in interstitial breast brachytherapy.

    Science.gov (United States)

    Wiercińska, Judyta; Wronczewska, Anna; Kabacińska, Renata; Makarewicz, Roman

    2015-12-01

    The purpose of this study is to evaluate our first experience with 3D image-guided breast brachytherapy and to compare dose distribution parameters between Paris dosimetry system (PDS) and image-based plans. The first 49 breast cancer patients treated with 3D high-dose-rate interstitial brachytherapy as a boost were selected for the study. Every patient underwent computed tomography, and the planning target volume (PTV) and organs at risk (OAR) were outlined. Two treatment plans were created for every patient: the first based on the Paris dosimetry system (PDS), and the second an image-based plan with graphical optimization (OPT). The reference isodose in PDS implants was 85%, whereas in OPT plans the isodose was chosen to obtain proper target coverage. Dose and volume parameters (D90, D100, V90, V100), doses at OARs, total reference air kerma (TRAK), and the quality assurance parameters dose nonuniformity ratio (DNR), dose homogeneity index (DHI), and conformity index (COIN) were used for a comparison of both plans. The mean number of catheters was 7, but the mean for the first 20 patients was 5 and almost 9 for the next 29 patients. The mean value of the prescribed isodose for OPT plans was 73%. The mean D90 was 88.2% and 105.8%, the D100 was 59.8% and 75.7%, the VPTV90 was 88.6% and 98.1%, the VPTV100 was 79.9% and 98.9%, and the TRAK was 0.00375 Gym(-1) and 0.00439 Gym(-1) for the PDS and OPT plans, respectively. The mean DNR was 0.29 and 0.42, the DHI was 0.71 and 0.58, and the COIN was 0.68 and 0.76, respectively. The target coverage in image-guided plans (OPT) was significantly higher than in PDS plans, but the dose homogeneity was worse. Also, the value of TRAK increased because of the change of the prescribing isodose. The learning curve slightly affected our results.
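
    The plan-quality indices compared above can be computed directly from a voxel dose array and a PTV mask. The sketch below uses the commonly quoted textbook definitions (D90 as the dose covering 90% of the PTV, V100 as fractional coverage at the reference dose, DNR = V150/V100, DHI = 1 - DNR, and COIN as the product of the two conformity factors), so treat the formulas as the usual forms rather than the authors' exact implementation; the dose cube and PTV are synthetic:

        import numpy as np

        def plan_indices(dose, ptv_mask, d_ref):
            """dose: 3D array of absorbed dose; ptv_mask: boolean PTV mask;
            d_ref: prescribed (reference) dose, in the same units."""
            ptv_dose = dose[ptv_mask]
            d90  = np.percentile(ptv_dose, 10)            # dose covering 90% of the PTV
            d100 = ptv_dose.min()
            v90  = np.mean(ptv_dose >= 0.9 * d_ref)       # fraction of PTV >= 90% of ref
            v100 = np.mean(ptv_dose >= d_ref)
            v150_body = np.sum(dose >= 1.5 * d_ref)
            v100_body = np.sum(dose >= d_ref)
            dnr = v150_body / v100_body                   # dose nonuniformity ratio
            dhi = 1.0 - dnr                               # dose homogeneity index
            ptv_ref = np.sum(ptv_mask & (dose >= d_ref))  # PTV voxels covered by ref dose
            coin = (ptv_ref / ptv_mask.sum()) * (ptv_ref / max(v100_body, 1))
            return dict(D90=d90, D100=d100, V90=v90, V100=v100, DNR=dnr, DHI=dhi, COIN=coin)

        # Hypothetical example on a synthetic dose cube with a spherical PTV.
        z, y, x = np.mgrid[:40, :40, :40]
        r = np.sqrt((x - 20)**2 + (y - 20)**2 + (z - 20)**2)
        dose = 10.0 * np.exp(-r / 15.0)                   # arbitrary dose fall-off
        ptv = r < 8
        print(plan_indices(dose, ptv, d_ref=5.0))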

  17. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under view. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
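
    For the stereo-vision approach mentioned above, the basic geometry is that depth is inversely proportional to disparity, Z = f·B / d, for focal length f, baseline B, and disparity d. A small sketch with made-up camera parameters, where the disparity map stands in for the output of a stereo matcher:

        import numpy as np

        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            """Convert a disparity map (pixels) from a rectified stereo pair into a
            depth map (metres) using Z = f * B / d. Zero disparity maps to infinity."""
            d = np.asarray(disparity_px, dtype=float)
            with np.errstate(divide="ignore"):
                return np.where(d > 0, focal_px * baseline_m / d, np.inf)

        # Hypothetical rectified pair: 800 px focal length, 12 cm baseline.
        disparity = np.array([[40.0, 20.0], [10.0, 0.0]])
        print(depth_from_disparity(disparity, focal_px=800.0, baseline_m=0.12))
        # -> [[2.4 4.8] [9.6 inf]]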

  18. Gothic Churches in Paris ST Gervais et ST Protais Image Matching 3d Reconstruction to Understand the Vaults System Geometry

    Science.gov (United States)

    Capone, M.; Campi, M.; Catuogno, R.

    2015-02-01

    This paper is part of a research project on ribbed vault systems in French Gothic cathedrals. Our goal is to compare several different Gothic cathedrals to understand the complex geometry of the ribbed vaults. The survey is not the main objective; rather, it is the means to verify the theoretical hypotheses about the geometric configuration of the Flamboyant churches in Paris. The choice of survey method generally depends on the goal; in this case we had to study many churches in a short time, so we chose a 3D reconstruction method based on dense stereo image matching. This method allowed us to obtain the necessary information for our study without bringing special equipment, such as a laser scanner. The goal of this paper is to test the image-matching 3D reconstruction method on some particular case studies and to show its benefits and drawbacks. From a methodological point of view this is our workflow: - theoretical study of the geometrical configuration of rib vault systems; - a 3D model based on theoretical hypotheses about the geometric definition of the vaults' form; - a 3D model based on image-matching 3D reconstruction methods; - a comparison between the 3D theoretical model and the 3D model based on image matching.

  19. Moving-Article X-Ray Imaging System and Method for 3-D Image Generation

    Science.gov (United States)

    Fernandez, Kenneth R. (Inventor)

    2012-01-01

    An x-ray imaging system and method for a moving article are provided for an article moved along a linear direction of travel while the article is exposed to non-overlapping x-ray beams. A plurality of parallel linear sensor arrays are disposed in the x-ray beams after they pass through the article. More specifically, a first half of the plurality are disposed in a first of the x-ray beams while a second half of the plurality are disposed in a second of the x-ray beams. Each of the parallel linear sensor arrays is oriented perpendicular to the linear direction of travel. Each of the parallel linear sensor arrays in the first half is matched to a corresponding one of the parallel linear sensor arrays in the second half in terms of an angular position in the first of the x-ray beams and the second of the x-ray beams, respectively.

  20. Laser Based 3D Volumetric Display System

    Science.gov (United States)

    1993-03-01

    Authors: P. Soltan, J. Trias, W. Robinson, W. Dahlke. The report describes laser-generated 3D volumetric images displayed on a rotating double helix, where the 3D displays are computer controlled for group viewing with the naked eye. A cited related work is "A Real Time Autostereoscopic Multiplanar 3D Display System", Rodney Don Williams and Felix Garcia, Jr., Texas Instruments.

  1. REGION-BASED 3D SURFACE RECONSTRUCTION USING IMAGES ACQUIRED BY LOW-COST UNMANNED AERIAL SYSTEMS

    Directory of Open Access Journals (Sweden)

    Z. Lari

    2015-08-01

    Accurate 3D surface reconstruction of our environment has become essential for an unlimited number of emerging applications. In the past few years, Unmanned Aerial Systems (UAS) have evolved as low-cost and flexible platforms for geospatial data collection that could meet the needs of the aforementioned applications and overcome the limitations of traditional airborne and terrestrial mobile mapping systems. Due to their payload restrictions, these systems usually include consumer-grade imaging and positioning sensors, which negatively impacts the quality of the collected geospatial data and reconstructed surfaces. Therefore, new surface reconstruction techniques are needed to mitigate the impact of using low-cost sensors on the final products. To date, different approaches have been proposed for 3D surface reconstruction using overlapping images collected by imaging sensors mounted on moving platforms. In these approaches, 3D surfaces are mainly reconstructed based on dense matching techniques. However, the generated 3D point clouds might not accurately represent the scanned surfaces due to point density variations and edge preservation problems. In order to resolve these problems, a new region-based 3D surface reconstruction technique is introduced in this paper. This approach aims to generate a 3D photo-realistic model of the individually scanned surfaces within the captured images. The approach is initiated by a Semi-Global dense Matching procedure, which generates a 3D point cloud of the scanned area from the collected images. The generated point cloud is then segmented to extract individual planar surfaces. Finally, a novel region-based texturing technique is implemented for photorealistic reconstruction of the extracted planar surfaces. Experimental results using images collected by a camera mounted on a low-cost UAS demonstrate the feasibility of the proposed approach for photorealistic 3D surface reconstruction.

  2. Region-Based 3d Surface Reconstruction Using Images Acquired by Low-Cost Unmanned Aerial Systems

    Science.gov (United States)

    Lari, Z.; Al-Rawabdeh, A.; He, F.; Habib, A.; El-Sheimy, N.

    2015-08-01

    Accurate 3D surface reconstruction of our environment has become essential for an unlimited number of emerging applications. In the past few years, Unmanned Aerial Systems (UAS) have evolved as low-cost and flexible platforms for geospatial data collection that could meet the needs of the aforementioned applications and overcome the limitations of traditional airborne and terrestrial mobile mapping systems. Due to their payload restrictions, these systems usually include consumer-grade imaging and positioning sensors, which negatively impacts the quality of the collected geospatial data and reconstructed surfaces. Therefore, new surface reconstruction techniques are needed to mitigate the impact of using low-cost sensors on the final products. To date, different approaches have been proposed for 3D surface reconstruction using overlapping images collected by imaging sensors mounted on moving platforms. In these approaches, 3D surfaces are mainly reconstructed based on dense matching techniques. However, the generated 3D point clouds might not accurately represent the scanned surfaces due to point density variations and edge preservation problems. In order to resolve these problems, a new region-based 3D surface reconstruction technique is introduced in this paper. This approach aims to generate a 3D photo-realistic model of the individually scanned surfaces within the captured images. The approach is initiated by a Semi-Global dense Matching procedure, which generates a 3D point cloud of the scanned area from the collected images. The generated point cloud is then segmented to extract individual planar surfaces. Finally, a novel region-based texturing technique is implemented for photorealistic reconstruction of the extracted planar surfaces. Experimental results using images collected by a camera mounted on a low-cost UAS demonstrate the feasibility of the proposed approach for photorealistic 3D surface reconstruction.
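
    The segmentation of the dense point cloud into individual planar surfaces can be illustrated with a basic RANSAC plane fit that is run repeatedly, removing the inliers of each detected plane before looking for the next one. Thresholds, iteration counts, and the synthetic cloud below are placeholders, not the authors' parameters:

        import numpy as np

        def fit_plane_ransac(points, n_iter=200, tol=0.02):
            """Return (normal, d, inlier_mask) for the dominant plane n·p + d = 0."""
            best_inliers = np.zeros(len(points), dtype=bool)
            best_n, best_d = None, None
            for _ in range(n_iter):
                p1, p2, p3 = points[np.random.choice(len(points), 3, replace=False)]
                n = np.cross(p2 - p1, p3 - p1)
                if np.linalg.norm(n) < 1e-9:
                    continue                      # degenerate (collinear) sample
                n = n / np.linalg.norm(n)
                dist = np.abs((points - p1) @ n)
                inliers = dist < tol
                if inliers.sum() > best_inliers.sum():
                    best_inliers, best_n, best_d = inliers, n, -n @ p1
            return best_n, best_d, best_inliers

        def segment_planes(points, n_planes=2):
            """Greedily extract the n_planes largest planar segments."""
            remaining = points.copy()
            segments = []
            for _ in range(n_planes):
                n, d, inl = fit_plane_ransac(remaining)
                segments.append(remaining[inl])
                remaining = remaining[~inl]
            return segments

        # Hypothetical cloud: a roof-like pair of planes with noise.
        xy = np.random.rand(2000, 2) * 10
        z1 = 0.3 * xy[:, 0] + np.random.randn(2000) * 0.01          # plane 1
        z2 = 5 - 0.3 * xy[:, 0] + np.random.randn(2000) * 0.01      # plane 2
        cloud = np.vstack([np.column_stack([xy, z1]), np.column_stack([xy, z2])])
        print([len(s) for s in segment_planes(cloud)])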

  3. High performance 3D adaptive filtering for DSP based portable medical imaging systems

    Science.gov (United States)

    Bockenbach, Olivier; Ali, Murtaza; Wainwright, Ian; Nadeski, Mark

    2015-03-01

    Portable medical imaging devices have proven valuable for emergency medical services both in the field and in hospital environments, and are becoming more prevalent in clinical settings where the use of larger imaging machines is impractical. Despite their constraints on power, size and cost, portable imaging devices must still deliver high quality images. 3D adaptive filtering is one of the most advanced techniques aimed at noise reduction and feature enhancement, but it is computationally very demanding and hence often cannot be run with sufficient performance on a portable platform. In recent years, advanced multicore digital signal processors (DSPs) have been developed that attain high processing performance while maintaining low levels of power dissipation. These processors enable the implementation of complex algorithms on a portable platform. In this study, the performance of a 3D adaptive filtering algorithm on a DSP is investigated. The performance is assessed by filtering a volume of size 512x256x128 voxels sampled at a rate of 10 MVoxels/s with a 3D ultrasound probe. Relative performance and power are compared between a reference PC (quad-core CPU) and a TMS320C6678 DSP from Texas Instruments.
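
    As a platform-independent illustration of 3D adaptive filtering (here a simple Lee-style local-statistics filter, not the specific algorithm benchmarked in the paper), the sketch below smooths a volume strongly in homogeneous regions and weakly near features, using local means and variances from a 3D box window; the window size and noise estimate are placeholders:

        import numpy as np
        from scipy.ndimage import uniform_filter

        def adaptive_filter_3d(volume, window=5, noise_var=None):
            """Lee-type adaptive filter: out = mean + k * (x - mean), where the gain
            k shrinks toward 0 in flat regions and toward 1 near strong features."""
            vol = volume.astype(np.float64)
            mean = uniform_filter(vol, window)
            mean_sq = uniform_filter(vol * vol, window)
            var = np.maximum(mean_sq - mean * mean, 0.0)
            if noise_var is None:
                noise_var = np.median(var)            # crude global noise estimate
            k = var / (var + noise_var + 1e-12)
            return mean + k * (vol - mean)

        # Hypothetical noisy ultrasound-like volume with an embedded bright slab.
        vol = np.random.randn(64, 128, 128) * 0.2
        vol[20:40] += 1.0
        filtered = adaptive_filter_3d(vol)
        print(float(vol.var()), float(filtered.var()))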

  4. 3D integral imaging with optical processing

    Science.gov (United States)

    Martínez-Corral, Manuel; Martínez-Cuenca, Raúl; Saavedra, Genaro; Javidi, Bahram

    2008-04-01

    Integral imaging (InI) systems are imaging devices that provide auto-stereoscopic images of 3D intensity objects. Since the birth of this new technology, InI systems have satisfactorily overcome many of their initial drawbacks. Basically, two kinds of procedures have been used: digital and optical procedures. The "3D Imaging and Display Group" at the University of Valencia, with the essential collaboration of Prof. Javidi, has centered its efforts on 3D InI with optical processing. Among other achievements, our Group has proposed the annular amplitude modulation for enlargement of the depth of field, dynamic focusing for reduction of the facet-braiding effect, and the TRES and MATRES devices to enlarge the viewing angle.

  5. High-speed biometrics ultrasonic system for 3D fingerprint imaging

    Science.gov (United States)

    Maev, Roman G.; Severin, Fedar

    2012-10-01

    The objective of this research is to develop a new robust fingerprint identification technology based upon forming surface and subsurface (under-skin) ultrasonic 3D images of the finger pads. The presented work aims to create specialized ultrasonic scanning methods for biometric purposes. Preliminary research has demonstrated the applicability of acoustic microscopy for fingerprint reading. The additional information from internal skin layers and dermis structures contained in the scan can essentially improve confidence in the identification. Advantages of this system include high resolution and quick scanning time; operating in pulse-echo mode provides spatial resolution up to 0.05 mm. Further advantages of the proposed technology are the following: • Full-range scanning of the fingerprint area "nail to nail" (2.5 x 2.5 cm) can be done in less than 5 s with a resolution of up to 1000 dpi. • Collection of information about the in-depth structure of the fingerprint, realized by a set of spherically focused 50 MHz acoustic lenses, provides a resolution of ~0.05 mm or better. • In addition to fingerprints, this technology can identify sweat pores at the surface and under the skin. • No sensitivity to contamination of the finger's surface. • Detection of blood velocity using the Doppler effect can be implemented to distinguish living specimens. • Utilization as a polygraph device. • Simple connectivity to fingerprint databases obtained with other techniques. • The digitally interpolated images can be enhanced, allowing for greater resolution. • The method can be applied to fingernails and underlying tissues, providing more information. A laboratory prototype of the biometrics system based on the described principles was designed, built and tested. It is the first step toward a practical implementation of this technique.

  6. 3-D Reconstruction From Satellite Images

    DEFF Research Database (Denmark)

    Denver, Troelz

    1999-01-01

    The aim of this project has been to implement a software system that is able to create a 3-D reconstruction from two or more 2-D photographic images taken from different positions. The height is determined from the disparity difference of the images. The general purpose of the system is mapping …, where various methods have been tested in order to optimize the performance. The match results are used in the reconstruction part to establish a 3-D digital representation and, finally, different presentation forms are discussed.

  7. Advanced 3D Object Identification System Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Optra will build an Advanced 3D Object Identification System utilizing three or more high resolution imagers spaced around a launch platform. Data from each imager...

  8. Twin robotic x-ray system for 2D radiographic and 3D cone-beam CT imaging

    Science.gov (United States)

    Fieselmann, Andreas; Steinbrener, Jan; Jerebko, Anna K.; Voigt, Johannes M.; Scholz, Rosemarie; Ritschl, Ludwig; Mertelmeier, Thomas

    2016-03-01

    In this work, we provide an initial characterization of a novel twin robotic X-ray system. This system is equipped with two motor-driven telescopic arms carrying X-ray tube and flat-panel detector, respectively. 2D radiographs and fluoroscopic image sequences can be obtained from different viewing angles. Projection data for 3D cone-beam CT reconstruction can be acquired during simultaneous movement of the arms along dedicated scanning trajectories. We provide an initial evaluation of the 3D image quality based on phantom scans and clinical images. Furthermore, initial evaluation of patient dose is conducted. The results show that the system delivers high image quality for a range of medical applications. In particular, high spatial resolution enables adequate visualization of bone structures. This system allows 3D X-ray scanning of patients in standing and weight-bearing position. It could enable new 2D/3D imaging workflows in musculoskeletal imaging and improve diagnosis of musculoskeletal disorders.

  9. 3D Automatic Imaging System

    Institute of Scientific and Technical Information of China (English)

    刘贻圳; 吴俊耦; 魏嘉裕; 吕燚

    2015-01-01

    In view of the problems of traditional photography, such as high cost, low efficiency, low shooting quality, and being limited to flat (2D) presentation, an implementation scheme for a 3D automatic imaging system is presented. Using serial-port communication and image synthesis as the main technical means, the system overcomes the drawbacks of traditional photography and makes product photography simple and panoramic. Users only need a computer and a single-lens reflex (SLR) camera: the 3D automatic imaging software controls the SLR camera and an intelligent rotating platform to photograph the product from multiple angles, and within a few minutes a 360° product display animation in HTML5 format can be synthesized. The resulting 360° animation can be uploaded to websites and online shops, or sent to customers as a product sample; by opening the 360° panoramic animation, customers can view the product from different angles and in detail, which greatly improves the user experience and thus increases product sales.

  10. Portable high-intensity focused ultrasound system with 3D electronic steering, real-time cavitation monitoring, and 3D image reconstruction algorithms: a preclinical study in pigs

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jin Woo; Lee, Jae Young; Hwang, Eui Jin; Hwang, In Pyeong; Woo, Sung Min; Lee, Chang Joo; Park, Eun Joo; Choi, Byung Ihn [Dept. of Radiology and Institute of Radiation Medicine, Seoul National University Hospital, Seoul (Korea, Republic of)

    2014-10-15

    The aim of this study was to evaluate the safety and accuracy of a new portable ultrasonography-guided high-intensity focused ultrasound (USg-HIFU) system with a 3-dimensional (3D) electronic steering transducer, a simultaneous ablation and imaging module, real-time cavitation monitoring, and 3D image reconstruction algorithms. To address the accuracy of the transducer, hydrophones in a water chamber were used to assess the generation of sonic fields. An animal study was also performed in five pigs by ablating in vivo thighs by single-point sonication (n=10) or volume sonication (n=10) and ex vivo kidneys by single-point sonication (n=10). Histological and statistical analyses were performed. In the hydrophone study, peak voltages were detected within 1.0 mm from the targets on the y- and z-axes and within 2.0-mm intervals along the x-axis (z-axis, direction of ultrasound propagation; y- and x-axes, perpendicular to the direction of ultrasound propagation). Twenty-nine of 30 HIFU sessions successfully created ablations at the target. The in vivo porcine thigh study showed only a small discrepancy (width, 0.5-1.1 mm; length, 3.0 mm) between the planning ultrasonograms and the pathological specimens. Inordinate thermal damage was not observed in the adjacent tissues or sonic pathways in the in vivo thigh and ex vivo kidney studies. Our study suggests that this new USg-HIFU system may be a safe and accurate technique for ablating soft tissues and encapsulated organs.

  11. 3D functional ultrasound imaging of the cerebral visual system in rodents.

    Science.gov (United States)

    Gesnik, Marc; Blaize, Kevin; Deffieux, Thomas; Gennisson, Jean-Luc; Sahel, José-Alain; Fink, Mathias; Picaud, Serge; Tanter, Mickaël

    2017-02-03

    3D functional imaging of whole-brain activity during a visual task is challenging in rodents because of the complex three-dimensional shape of the brain regions involved and the fine spatial and temporal resolutions required to reveal the visual tract. By coupling functional ultrasound (fUS) imaging with a translational motorized stage and an episodic visual stimulation device, we accurately mapped and recovered the activity of the visual cortices, the Superior Colliculus (SC) and the Lateral Geniculate Nuclei (LGN) in 3D. Cerebral Blood Volume (CBV) responses during visual stimuli were highly correlated with the stimulus time profile in the visual cortices (r=0.6), SC (r=0.7) and LGN (r=0.7). These responses depended on flickering frequency and contrast, and the stimulus parameters yielding the largest CBV increases were identified. In particular, increasing the flickering frequency above 7 Hz revealed a decrease in the visual cortex response while the SC response was preserved. Finally, cross-correlation between CBV signals exhibited significant delays (d=0.35 s +/- 0.1 s) between the blood volume responses in the SC and the visual cortices. These results emphasize the interest of fUS imaging as a whole-brain neuroimaging modality for vision studies in rodent models.
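
    The reported 0.35 s delay comes from locating the peak of the cross-correlation between CBV time courses. The toy sketch below illustrates that computation on synthetic signals; the sampling rate, waveforms and the built-in 0.35 s offset are assumptions, not study data.

        # Estimate the delay between two haemodynamic responses from the cross-correlation peak.
        import numpy as np

        fs = 20.0                                                       # assumed sampling rate (Hz)
        t = np.arange(0, 60, 1 / fs)
        stimulus = (np.sin(2 * np.pi * 0.1 * t) > 0).astype(float)      # toy on/off stimulus
        cbv_sc = np.convolve(stimulus, np.exp(-t[:50]), mode="same")    # toy SC response
        cbv_cortex = np.roll(cbv_sc, int(0.35 * fs))                    # cortex lags SC by 0.35 s

        def lag_seconds(a: np.ndarray, b: np.ndarray, fs: float) -> float:
            """Return the lag of b relative to a, in seconds, from the cross-correlation peak."""
            a = a - a.mean()
            b = b - b.mean()
            xcorr = np.correlate(b, a, mode="full")
            lags = np.arange(-len(a) + 1, len(a))
            return lags[np.argmax(xcorr)] / fs

        print(f"estimated delay: {lag_seconds(cbv_sc, cbv_cortex, fs):.2f} s")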

  12. 3D vision system assessment

    Science.gov (United States)

    Pezzaniti, J. Larry; Edmondson, Richard; Vaden, Justin; Hyatt, Bryan; Chenault, David B.; Kingston, David; Geulen, Vanilynmae; Newell, Scott; Pettijohn, Brad

    2009-02-01

    In this paper, we report on the development of a 3D vision system consisting of a flat panel stereoscopic display and auto-converging stereo camera and an assessment of the system's use for robotic driving, manipulation, and surveillance operations. The 3D vision system was integrated onto a Talon Robot and Operator Control Unit (OCU) such that direct comparisons of the performance of a number of test subjects using 2D and 3D vision systems were possible. A number of representative scenarios were developed to determine which tasks benefited most from the added depth perception and to understand when the 3D vision system hindered understanding of the scene. Two tests were conducted at Fort Leonard Wood, MO with noncommissioned officers ranked Staff Sergeant and Sergeant First Class. The scenarios; the test planning, approach and protocols; the data analysis; and the resulting performance assessment of the 3D vision system are reported.

  13. Design and performance of a fiber array coupled multi-channel photon counting, 3D imaging, airborne lidar system

    Science.gov (United States)

    Huang, Genghua; Shu, Rong; Hou, Libing; Li, Ming

    2014-06-01

    Photon counting lidar has an ultra-high sensitivity that can be hundreds or even thousands of times higher than that of linear-detection lidar. It can significantly increase the system's detection range and imaging density while reducing size and power consumption in airborne or space-borne applications. Based on Geiger-mode Si avalanche photodiodes (Si-APD), a prototype photon counting lidar using 8 APDs coupled with a 1×8-pixel fiber array was built in June 2011. Experiments with static objects showed that the photon counting lidar could operate under a strong solar background with 0.04 received photoelectrons on average. Limited by the smaller number of accumulations possible on moving platforms, the probability of detection and the 3D imaging density are lower than on static platforms. In this paper, a new fiber-array-coupled multi-channel photon counting, 3D imaging, airborne lidar system is introduced. The correlation range receiver algorithm for photon counting 3D imaging is improved for extracting airborne signal photon events and filtering noise. 3D imaging experiments in a helicopter show that the false alarm rate is less than 6×10^-7 and the correct rate is better than 99.9% with 4 received photoelectrons and 0.7 MHz system noise on average.
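
    The correlation range receiver mentioned above can be pictured as a histogram accumulator: over many shots, signal photon events pile up in the same range bin while solar and dark counts spread uniformly across the gate. The sketch below is a generic illustration of that idea, not the authors' algorithm; bin width, gate length, detection probability and noise rate are all assumed values.

        # Histogram-style correlation range receiver on synthetic photon events.
        import numpy as np

        rng = np.random.default_rng(0)
        C = 3e8                      # speed of light, m/s
        BIN_NS = 1.0                 # range-bin width in nanoseconds (assumed)
        GATE_NS = 2000.0             # range-gate length in nanoseconds (assumed)
        N_SHOTS = 200

        true_tof_ns = 1333.0         # toy target time of flight
        events = []
        for _ in range(N_SHOTS):
            if rng.random() < 0.3:                       # roughly 0.3 signal detections per shot
                events.append(rng.normal(true_tof_ns, 0.5))
            n_noise = rng.poisson(1.4)                   # solar/dark counts inside the gate
            events.extend(rng.uniform(0, GATE_NS, n_noise))

        bins = np.arange(0, GATE_NS + BIN_NS, BIN_NS)
        hist, edges = np.histogram(events, bins=bins)
        tof_ns = edges[np.argmax(hist)] + BIN_NS / 2     # bin where correlated events accumulate
        print(f"estimated range: {C * tof_ns * 1e-9 / 2:.2f} m")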

  14. Technical validation of the Di3D stereophotogrammetry surface imaging system

    DEFF Research Database (Denmark)

    Winder, R.J.; Darvann, Tron Andre; McKnight, W.

    2008-01-01

    The purpose of this work was to assess the technical performance of a three-dimensional surface imaging system for geometric accuracy and maximum field of view. The system was designed for stereophotogrammetry capture of digital images from three-dimensional surfaces of the head, face, and neck...

  15. User-guided segmentation of preterm neonate ventricular system from 3-D ultrasound images using convex optimization.

    Science.gov (United States)

    Qiu, Wu; Yuan, Jing; Kishimoto, Jessica; McLeod, Jonathan; Chen, Yimin; de Ribaupierre, Sandrine; Fenster, Aaron

    2015-02-01

    A three-dimensional (3-D) ultrasound (US) system has been developed to monitor the intracranial ventricular system of preterm neonates with intraventricular hemorrhage (IVH) and the resultant dilation of the ventricles (ventriculomegaly). To measure ventricular volume from 3-D US images, a semi-automatic convex optimization-based approach is proposed for segmentation of the cerebral ventricular system in preterm neonates with IVH from 3-D US images. The proposed semi-automatic segmentation method makes use of the convex optimization technique supervised by user-initialized information. Experiments using 58 patient 3-D US images reveal that our proposed approach yielded a mean Dice similarity coefficient of 78.2% compared with the surfaces that were manually contoured, suggesting good agreement between these two segmentations. Additional metrics, the mean absolute distance of 0.65 mm and the maximum absolute distance of 3.2 mm, indicated small distance errors for a voxel spacing of 0.22 × 0.22 × 0.22 mm(3). The Pearson correlation coefficient (r = 0.97, p < 0.001) indicated a significant correlation of algorithm-generated ventricular system volume (VSV) with the manually generated VSV. The calculated minimal detectable difference in ventricular volume change indicated that the proposed segmentation approach with 3-D US images is capable of detecting a VSV difference of 6.5 cm(3) with 95% confidence, suggesting that this approach might be used for monitoring IVH patients' ventricular changes using 3-D US imaging. The mean segmentation times of the graphics processing unit (GPU)- and central processing unit-implemented algorithms were 50 ± 2 and 205 ± 5 s for one 3-D US image, respectively, in addition to 120 ± 10 s for initialization, less than the approximately 35 min required by manual segmentation. In addition, repeatability experiments indicated that the intra-observer variability ranges from 6.5% to 7.5%, and the inter-observer variability is 8.5% in terms
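
    The Dice similarity coefficient reported above quantifies the overlap of two binary segmentations. A minimal illustration on toy masks (not the study's 3-D US data) is given below.

        # Dice = 2|A ∩ B| / (|A| + |B|) for binary masks.
        import numpy as np

        def dice(a: np.ndarray, b: np.ndarray) -> float:
            a = a.astype(bool)
            b = b.astype(bool)
            return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

        # Toy 3-D masks standing in for algorithm and manual ventricle segmentations.
        algo = np.zeros((50, 50, 50), dtype=bool)
        manual = np.zeros_like(algo)
        algo[10:40, 10:40, 10:40] = True
        manual[12:42, 10:40, 10:40] = True
        print(f"Dice similarity coefficient: {dice(algo, manual):.3f}")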

  16. 3D cerebral MR image segmentation using multiple-classifier system.

    Science.gov (United States)

    Amiri, Saba; Movahedi, Mohammad Mehdi; Kazemi, Kamran; Parsaei, Hossein

    2017-03-01

    The three soft brain tissues, white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF), identified in a magnetic resonance (MR) image via image segmentation techniques can aid in structural and functional brain analysis, measurement and visualization of the brain's anatomical structures, diagnosis of neurodegenerative disorders, and surgical planning and image-guided interventions, but only if the obtained segmentation results are correct. This paper presents a multiple-classifier-based system for automatic brain tissue segmentation from cerebral MR images. The developed system categorizes each voxel of a given MR image as GM, WM, or CSF. The algorithm consists of preprocessing, feature extraction, and supervised classification steps. In the first step, intensity non-uniformity in a given MR image is corrected and then non-brain tissues such as the skull, eyeballs, and skin are removed from the image. For each voxel, statistical and non-statistical features were computed and used as a feature vector representing the voxel. Three multilayer perceptron (MLP) neural networks trained using three different datasets were used as the base classifiers of the multiple-classifier system. The outputs of the base classifiers were fused using a majority voting scheme. Evaluation of the proposed system was performed using BrainWeb simulated MR images with different noise and intensity non-uniformity levels and internet brain segmentation repository (IBSR) real MR images. The quantitative assessment of the proposed method using Dice, Jaccard, and conformity coefficient metrics demonstrates an improvement (around 5% for CSF) in accuracy as compared to a single MLP classifier and existing methods and tools such as FSL-FAST and SPM. As accurately segmenting an MR image is of paramount importance for successfully promoting the clinical application of MR image segmentation techniques, the improvement obtained by using the multiple-classifier-based system is encouraging.
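
    The label-fusion step described above reduces to a few lines: each of the three trained MLPs proposes a tissue label per voxel and the majority label wins. The sketch below uses random stand-in predictions rather than the trained networks.

        # Majority-vote fusion of per-voxel labels from three base classifiers.
        import numpy as np

        CSF, GM, WM = 0, 1, 2
        rng = np.random.default_rng(1)
        n_voxels = 10

        # Stand-ins for the per-voxel labels produced by the three trained MLPs.
        preds = rng.integers(0, 3, size=(3, n_voxels))

        def majority_vote(predictions: np.ndarray) -> np.ndarray:
            """Return the most frequent label per column; ties resolve to the lowest label."""
            fused = np.empty(predictions.shape[1], dtype=int)
            for i in range(predictions.shape[1]):
                fused[i] = np.bincount(predictions[:, i], minlength=3).argmax()
            return fused

        names = {CSF: "CSF", GM: "GM", WM: "WM"}
        print("base classifier labels:\n", preds)
        print("fused tissue labels:", [names[v] for v in majority_vote(preds)])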

  17. Computational Validation of a 3-D Microwave Imaging System for Breast-Cancer Screening

    DEFF Research Database (Denmark)

    Rubæk, Tonny; Kim, Oleksiy S.; Meincke, Peter

    2009-01-01

    The microwave imaging system currently being developed at the Technical University of Denmark is described and its performance tested on simulated data. The system uses an iterative Newton-based imaging algorithm for reconstructing the images in conjunction with an efficient method-of-moments solution of the associated forward scattering problem. A cylindrical multistatic antenna setup with 32 horizontally oriented antennas is used for collecting the data. It has been found that formulating the imaging algorithm in terms of the logarithm of the amplitude and the unwrapped phase of the measured ... in the measurement system is shown by imaging the same breast model using a measurement setup in which the antennas are vertically oriented.

  18. A normalized thoracic coordinate system for atlas mapping in 3D CT images

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In this paper, a normalized thoracic coordinate system (NTCS) is defined for rapidly mapping the 4D thoracic organ atlas into individual CT volume images. This coordinate system is defined based on the thoracic skeleton. The coordinate values are normalized by the size of the individual thorax so that this coordinate system is universal to different individuals. For compensating the respiratory motion of the organs, a 4D dynamic torso atlas is introduced. A method for mapping this dynamic atlas into the individual image using the NTCS is also proposed. With this method, the dynamic atlas was mapped into the clinical thoracic CT images and rough positions of the organs were found rapidly. This NTCS-based 4D atlas mapping method may provide a novel way for estimating the thoracic organ positions in low-resolution molecular imaging modalities, as well as in modern 4D medical images.

  19. Simultaneous submicrometric 3D imaging of the micro-vascular network and the neuronal system in a mouse spinal cord

    CERN Document Server

    Fratini, Michela; Campi, Gaetano; Brun, Francesco; Tromba, Giuliana; Modregger, Peter; Bucci, Domenico; Battaglia, Giuseppe; Spadon, Raffaele; Mastrogiacomo, Maddalena; Requardt, Herwig; Giove, Federico; Bravin, Alberto; Cedola, Alessia

    2014-01-01

    Defects in the vascular network (VN) and neuronal networks of the spinal cord are responsible for serious neurodegenerative pathologies. Because of inadequate investigation tools, knowledge of the complete fine structure of the VN and neuronal systems is still lacking, which is a crucial problem. Conventional 2D imaging yields incomplete spatial coverage, leading to possible data misinterpretation, whereas standard 3D computed tomography imaging achieves insufficient resolution and contrast. We show that X-ray high-resolution phase-contrast tomography allows the simultaneous visualization of the three-dimensional VN and neuronal systems of the mouse spinal cord at scales spanning from millimeters to hundreds of nanometers, with neither contrast agents nor destructive sample preparation. We image both the 3D distribution of the micro-capillary network and the micrometric nerve fibers, axon bundles and neuron somata. Our approach is a crucial tool for the pre-clinical investigation of neurodegenerative pathologies and spinal-cord injuries. In particular, it s...

  20. Comparison of different approaches of estimating effective dose from reported exposure data in 3D imaging with interventional fluoroscopy systems

    Science.gov (United States)

    Svalkvist, Angelica; Hansson, Jonny; Bâth, Magnus

    2014-03-01

    Three-dimensional (3D) imaging with interventional fluoroscopy systems is a common examination today. The examination includes acquisition of two-dimensional projection images, which are used to reconstruct section images of the patient. The aim of the present study was to investigate the difference in resulting effective dose obtained using different levels of complexity in the calculation of effective doses from these examinations. In the study, the Siemens Artis Zeego interventional fluoroscopy system (Siemens Medical Solutions, Erlangen, Germany) was used. Images of anthropomorphic chest and pelvis phantoms were acquired. The exposure values obtained were used to calculate the resulting effective doses from the examinations, using the computer software PCXMC (STUK, Helsinki, Finland). The dose calculations were performed using three different methods: 1. using individual exposure values for each projection image, 2. using the mean tube voltage and the total DAP value, evenly distributed over the projection images, and 3. using the mean tube voltage and the total DAP value, evenly distributed over a smaller selection of projection images. The results revealed that the difference in resulting effective dose between the first two methods was smaller than 5%. When only a selection of projection images was used in the dose calculations, the difference increased to over 10%. Given the uncertainties associated with the effective dose concept, the results indicate that dose calculations based on average exposure values distributed over a smaller selection of projection angles can provide reasonably accurate estimations of the radiation doses from 3D imaging using interventional fluoroscopy systems.
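
    The three methods differ only in how the exposure values are distributed over the projections before conversion to effective dose. The schematic comparison below mimics methods 1 and 2; the study itself used PCXMC, so the DAP-to-effective-dose conversion factor here is a made-up placeholder, as are the exposure values.

        # Compare per-projection dose calculation with mean-kV / evenly-distributed-DAP calculation.
        import numpy as np

        rng = np.random.default_rng(2)
        n_proj = 100
        dap = rng.uniform(0.8, 1.2, n_proj)            # per-projection DAP (arbitrary units)
        kv = rng.uniform(70, 90, n_proj)               # per-projection tube voltage

        def conversion_factor(kv_value: float, angle_index: int) -> float:
            """Hypothetical E/DAP factor that varies mildly with kV and projection angle."""
            return 1e-3 * (kv_value / 80.0) * (1.0 + 0.1 * np.cos(2 * np.pi * angle_index / n_proj))

        # Method 1: individual kV and DAP for every projection image.
        e1 = sum(conversion_factor(kv[i], i) * dap[i] for i in range(n_proj))

        # Method 2: mean kV and total DAP spread evenly over all projections.
        mean_kv, total_dap = kv.mean(), dap.sum()
        e2 = sum(conversion_factor(mean_kv, i) * (total_dap / n_proj) for i in range(n_proj))

        print(f"relative difference between methods: {abs(e1 - e2) / e1 * 100:.1f} %")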

  1. Research progress of depth detection in vision measurement: a novel project of bifocal imaging system for 3D measurement

    Science.gov (United States)

    Li, Anhu; Ding, Ye; Wang, Wei; Zhu, Yongjian; Li, Zhizhong

    2013-09-01

    The paper reviews recent research progress in vision measurement. The general methods of depth detection used in monocular stereo vision are compared with each other. On this basis, a novel bifocal imaging measurement system based on the zoom method is proposed to solve the problem of online 3D measurement. This system consists of a primary lens and a secondary lens with different, matched focal lengths to meet large-range and high-resolution imaging requirements without time delay or imaging errors, which is of significance for industrial applications.

  2. Evolving technologies for growing, imaging and analyzing 3D root system architecture of crop plants

    Institute of Scientific and Technical Information of China (English)

    Miguel A Pineros; Pierre-Luc Pradier; Nathanael M Shaw; Ithipong Assaranurak; Susan R McCouch; Craig Sturrock; Malcolm Bennett; Leon V Kochian; Brandon G Larson; Jon E Shaff; David J Schneider; Alexandre Xavier Falcao; Lixing Yuan; Randy T Clark; Eric J Craft; Tyler W Davis

    2016-01-01

    A plant’s ability to maintain or improve its yield under limiting conditions, such as nutrient deficiency or drought, can be strongly influenced by root system architecture (RSA), the three-dimensional distribution of the different root types in the soil. The ability to image, track and quantify these root system attributes in a dynamic fashion is a useful tool in assessing desirable genetic and physiological root traits. Recent advances in imaging technology and phenotyping software have resulted in substantive progress in describing and quantifying RSA. We have designed a hydroponic growth system which retains the three-dimensional RSA of the plant root system, while allowing for aeration, solution replenishment and the imposition of nutrient treatments, as well as high-quality imaging of the root system. The simplicity and flexibility of the system allows for modifications tailored to the RSA of different crop species and improved throughput. This paper details the recent improvements and innovations in our root growth and imaging system which allows for greater image sensitivity (detection of fine roots and other root details), higher efficiency, and a broad array of growing conditions for plants that more closely mimic those found under field conditions.

  3. Development of a noncontact 3-D fluorescence tomography system for small animal in vivo imaging

    Science.gov (United States)

    Zhang, Xiaofeng; Badea, Cristian; Jacob, Mathews; Johnson, G. Allan

    2009-02-01

    Fluorescence imaging is an important tool for tracking molecular-targeting probes in preclinical studies. It offers high sensitivity, but nonetheless low spatial resolution compared to other leading imaging methods such as CT and MRI. We demonstrate our methodological development in small animal in vivo whole-body imaging using fluorescence tomography. We have implemented a noncontact, fluid-free fluorescence diffuse optical tomography system that uses a raster-scanned continuous-wave diode laser as the light source and an intensified CCD camera as the photodetector. The specimen is positioned on a motorized rotation stage. Laser scanning, data acquisition, and stage rotation are controlled via LabVIEW applications. The forward problem in the heterogeneous medium is based on a normalized Born method, and the sensitivity function is determined using a Monte Carlo method. The inverse problem (image reconstruction) is performed using a regularized iterative algorithm, in which the cost function is defined as a weighted sum of the L-2 norms of the solution image, the residual error, and the image gradient. The relative weights are adjusted by two independent regularization parameters. Our initial tests of this imaging system were performed with an imaging phantom that consists of a translucent plastic cylinder filled with tissue-simulating liquid and two thin-wall glass tubes containing indocyanine green. The reconstruction is compared to the output of a finite element method-based software package NIRFAST and has produced promising results.
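
    The reconstruction cost described above, a weighted sum of the L2 norms of the solution image, the residual error and the image gradient, can be minimized with plain gradient descent. The toy sketch below uses a 1-D unknown and a random forward matrix in place of the Monte Carlo sensitivity function; the sizes, weights and step size are arbitrary assumptions.

        # Minimize w_sol*||x||^2 + w_res*||Ax-b||^2 + w_grad*||Dx||^2 by gradient descent.
        import numpy as np

        rng = np.random.default_rng(3)
        n = 64                                   # unknown image (1-D for brevity)
        m = 32                                   # number of measurements
        A = rng.normal(size=(m, n))              # stand-in sensitivity (forward) matrix
        x_true = np.zeros(n); x_true[20:28] = 1.0
        b = A @ x_true + 0.01 * rng.normal(size=m)

        D = np.eye(n, k=1) - np.eye(n)           # finite-difference (gradient) operator
        w_sol, w_res, w_grad = 1e-3, 1.0, 1e-2   # two regularization weights plus the data term

        def cost(x):
            return (w_sol * np.sum(x**2) + w_res * np.sum((A @ x - b)**2)
                    + w_grad * np.sum((D @ x)**2))

        x = np.zeros(n)
        step = 1e-3
        for _ in range(2000):
            grad = 2 * (w_sol * x + w_res * A.T @ (A @ x - b) + w_grad * D.T @ (D @ x))
            x -= step * grad

        print(f"final cost: {cost(x):.4f}")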

  4. Micromachined Ultrasonic Transducers for 3-D Imaging

    DEFF Research Database (Denmark)

    Christiansen, Thomas Lehrmann

    Real-time ultrasound imaging is a widely used technique in medical diagnostics. Recently, ultrasound systems offering real-time imaging in 3-D have emerged. However, the high complexity of the transducer probes and the considerable increase in data to be processed compared to conventional 2-D ... of state-of-the-art 3-D ultrasound systems. The focus is on row-column addressed transducer arrays. This previously sparsely investigated addressing scheme offers a highly reduced number of transducer elements, resulting in reduced transducer manufacturing costs and data processing. To produce such transducer arrays, capacitive micromachined ultrasonic transducer (CMUT) technology is chosen for this project. Properties such as high bandwidth and high design flexibility make this an attractive transducer technology, which is under continuous development in the research community. A theoretical ...

  5. Proposed NRC portable target case for short-range triangulation-based 3D imaging systems characterization

    Science.gov (United States)

    Carrier, Benjamin; MacKinnon, David; Cournoyer, Luc; Beraldin, J.-Angelo

    2011-03-01

    The National Research Council of Canada (NRC) is currently evaluating and designing artifacts and methods to completely characterize 3-D imaging systems. We have gathered a set of artifacts to form a low-cost portable case and provide a clearly defined set of procedures for generating characteristic values using these artifacts. In its current version, this case is specifically designed for the characterization of short-range (standoff distance of 1 centimeter to 3 meters) triangulation-based 3-D imaging systems. The case is known as the "NRC Portable Target Case for Short-Range Triangulation-based 3-D Imaging Systems" (NRC-PTC). The artifacts in the case have been carefully chosen for their geometric, thermal, and optical properties. A set of characterization procedures is provided with these artifacts, based either on procedures already in use or on knowledge acquired from various tests carried out by the NRC. Geometric dimensioning and tolerancing (GD&T), a well-known terminology in the industrial field, was used to define the set of tests. The following parameters of a system are characterized: dimensional properties, form properties, orientation properties, localization properties, profile properties, repeatability, intermediate precision, and reproducibility. A number of tests were performed in a special dimensional metrology laboratory to validate the capability of the NRC-PTC. The NRC-PTC will soon be subjected to reproducibility testing using an intercomparison evaluation to validate its use in different laboratories.

  6. Detection of hidden objects using a real-time 3-D millimeter-wave imaging system

    Science.gov (United States)

    Rozban, Daniel; Aharon, Avihai; Levanon, Assaf; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, N. S.

    2014-10-01

    Millimeter (mm) and sub-mm wavelengths, or the terahertz (THz) band, have several properties that motivate their use in imaging for security applications such as recognition of hidden objects, dangerous materials, and aerosols, imaging through walls as in hostage situations, and imaging in bad weather conditions. There is no known ionization hazard for biological tissue, and atmospheric degradation of THz radiation is relatively low over practical imaging distances. We recently developed a new technology for the detection of THz radiation. This technology is based on very inexpensive plasma neon indicator lamps, also known as Glow Discharge Detectors (GDDs), which can be used as very sensitive THz radiation detectors. Using them, we designed and constructed a Focal Plane Array (FPA) and obtained recognizable 2-dimensional THz images of both dielectric and metallic objects. Using THz waves, it is shown here that even concealed weapons made of dielectric material can be detected. An example is an image of a knife concealed inside a leather bag and also under heavy clothing. Three-dimensional imaging using radar methods can enhance those images, since it allows the isolation of concealed objects from the body and from environmental clutter such as nearby furniture or other people. The GDDs enable direct heterodyning between the electric field of the target signal and the reference signal, eliminating the requirement for expensive mixers, sources, and Low Noise Amplifiers (LNAs). We expanded the capability of the FPA so that recognizable 2-dimensional THz images can be obtained in real time. We show here that THz detection of objects in three dimensions, using FMCW principles, is also applicable in real time. This imaging system is also shown to be capable of imaging objects from large distances, allowing standoff detection of suspicious objects and humans.

  7. Automatic 3D City Modeling Using a Digital Map and Panoramic Images from a Mobile Mapping System

    Directory of Open Access Journals (Sweden)

    Hyungki Kim

    2014-01-01

    Full Text Available Three-dimensional city models are becoming a valuable resource because of their close geospatial, geometrical, and visual relationship with the physical world. However, ground-oriented applications in virtual reality, 3D navigation, and civil engineering require a novel modeling approach, because the existing large-scale 3D city modeling methods do not provide rich visual information at ground level. This paper proposes a new framework for generating 3D city models that satisfy both the visual and the physical requirements for ground-oriented virtual reality applications. To ensure its usability, the framework must be cost-effective and allow for automated creation. To achieve these goals, we leverage a mobile mapping system that automatically gathers high-resolution images and supplements sensor information such as the position and direction of the captured images. To resolve problems stemming from sensor noise and occlusions, we develop a fusion technique to incorporate digital map data. This paper describes the major processes of the overall framework and the proposed techniques for each step and presents experimental results from a comparison with an existing 3D city model.

  8. ROAD SIGNS DETECTION AND RECOGNITION UTILIZING IMAGES AND 3D POINT CLOUD ACQUIRED BY MOBILE MAPPING SYSTEM

    Directory of Open Access Journals (Sweden)

    Y. H. Li

    2016-06-01

    Full Text Available High-definition and highly accurate road maps are necessary for the realization of automated driving, and road signs are among the most important elements in the road map. Therefore, a technique is necessary which can acquire information about all kinds of road signs automatically and efficiently. Owing to the continuous technical advancement of the Mobile Mapping System (MMS), it has become possible to acquire a large number of images and 3D point clouds efficiently, with highly precise position information. In this paper, we present an automatic road sign detection and recognition approach utilizing both images and 3D point clouds acquired by MMS. The proposed approach consists of three stages: 1) detection of road signs from images based on their color and shape features using an object-based image analysis method, 2) filtering out of over-detected candidates utilizing size and position information estimated from the 3D point cloud, the candidate regions and camera information, and 3) road sign recognition using a template matching method after shape normalization. The effectiveness of the proposed approach was evaluated on a test dataset acquired from more than 180 km of different types of roads in Japan. The results show a very high success rate in the detection and recognition of road signs, even under challenging conditions such as discoloration, deformation and partial occlusion.

  9. High resolution 3-D wavelength diversity imaging

    Science.gov (United States)

    Farhat, N. H.

    1981-09-01

    A physical optics, vector formulation of microwave imaging of perfectly conducting objects by wavelength and polarization diversity is presented. The results provide the theoretical basis for optimal data acquisition and three-dimensional tomographic image retrieval procedures. These include: (a) the selection of highly thinned (sparse) receiving array arrangements capable of collecting large amounts of information about remote scattering objects in a cost effective manner and (b) techniques for 3-D tomographic image reconstruction and display in which polarization diversity data is fully accounted for. Data acquisition employing a highly attractive AMTDR (Amplitude Modulated Target Derived Reference) technique is discussed and demonstrated by computer simulation. Equipment configuration for the implementation of the AMTDR technique is also given together with a measurement configuration for the implementation of wavelength diversity imaging in a roof experiment aimed at imaging a passing aircraft. Extension of the theory presented to 3-D tomographic imaging of passive noise emitting objects by spectrally selective far field cross-correlation measurements is also given. Finally several refinements made in our anechoic-chamber measurement system are shown to yield drastic improvement in performance and retrieved image quality.

  10. Subjective evaluation of two stereoscopic imaging systems exploiting visual attention to improve 3D quality of experience

    Science.gov (United States)

    Hanhart, Philippe; Ebrahimi, Touradj

    2014-03-01

    Crosstalk and vergence-accommodation rivalry negatively impact the quality of experience (QoE) provided by stereoscopic displays. However, exploiting visual attention and adapting the 3D rendering process on the fly can reduce these drawbacks. In this paper, we propose and evaluate two different approaches that exploit visual attention to improve 3D QoE on stereoscopic displays: an offline system, which uses a saliency map to predict gaze position, and an online system, which uses a remote eye tracking system to measure real time gaze positions. The gaze points were used in conjunction with the disparity map to extract the disparity of the object-of-interest. Horizontal image translation was performed to bring the fixated object on the screen plane. The user preference between standard 3D mode and the two proposed systems was evaluated through a subjective evaluation. Results show that exploiting visual attention significantly improves image quality and visual comfort, with a slight advantage for real time gaze determination. Depth quality is also improved, but the difference is not significant.
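
    The horizontal image translation step amounts to shifting one view by the disparity sampled at the gaze point so that the fixated object lands on the screen plane. The sketch below shows that operation on synthetic arrays; the image size, disparity map and gaze coordinates are assumptions.

        # Shift the right view horizontally by the disparity at the gaze point.
        import numpy as np

        rng = np.random.default_rng(4)
        h, w = 120, 160
        right_view = rng.random((h, w))
        disparity_map = np.full((h, w), 6.0)       # toy disparity map, in pixels
        gaze_x, gaze_y = 80, 60                    # gaze position from saliency map or eye tracker

        def retarget(right: np.ndarray, disparity: np.ndarray, gx: int, gy: int) -> np.ndarray:
            """Translate the right image horizontally by the disparity at the gaze point."""
            shift = int(round(disparity[gy, gx]))
            shifted = np.zeros_like(right)
            if shift > 0:
                shifted[:, shift:] = right[:, :-shift]
            elif shift < 0:
                shifted[:, :shift] = right[:, -shift:]
            else:
                shifted = right.copy()
            return shifted

        new_right = retarget(right_view, disparity_map, gaze_x, gaze_y)
        print("applied horizontal shift:", int(round(disparity_map[gaze_y, gaze_x])), "pixels")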

  11. 3D object-oriented image analysis in 3D geophysical modelling

    DEFF Research Database (Denmark)

    Fadel, I.; van der Meijde, M.; Kerle, N.

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects......) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA......) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...

  12. Medical 3D thermography system

    OpenAIRE

    GRUBIŠIĆ, IVAN

    2011-01-01

    Infrared (IR) thermography determines the surface temperature of an object or human body using a thermal IR measurement camera. It is an imaging technology which is contactless and completely non-invasive. These properties make IR thermography a useful method of analysis that is used in various industrial applications to detect, monitor and predict irregularities in many fields from engineering to medical and biological observations. This paper presents a conceptual model of Medical 3D Thermo...

  13. Augmented reality 3D display based on integral imaging

    Science.gov (United States)

    Deng, Huan; Zhang, Han-Le; He, Min-Yang; Wang, Qiong-Hua

    2017-02-01

    Integral imaging (II) is a good candidate for augmented reality (AR) display, since it provides various physiological depth cues so that viewers can freely change the accommodation and convergence between the virtual three-dimensional (3D) images and the real-world scene without feeling any visual discomfort. We propose two AR 3D display systems based on the theory of II. In the first AR system, a micro II display unit reconstructs a micro 3D image, and the micro 3D image is magnified by a convex lens. The lateral and depth distortions of the magnified 3D image are analyzed and resolved by pitch scaling and depth scaling. The magnified 3D image and the real 3D scene are overlapped by using a half-mirror to realize AR 3D display. The second AR system uses a micro-lens array holographic optical element (HOE) as an image combiner. The HOE is a volume holographic grating which functions as a micro-lens array for Bragg-matched light and as a transparent glass for Bragg-mismatched light. A reference beam can reproduce a virtual 3D image from one side, and a reference beam with conjugated phase can reproduce a second 3D image from the other side of the micro-lens array HOE, which provides a double-sided 3D display feature.

  14. 3D Imaging with Structured Illumination for Advanced Security Applications

    Energy Technology Data Exchange (ETDEWEB)

    Birch, Gabriel Carisle [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Dagel, Amber Lynn [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kast, Brian A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Smith, Collin S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Three-dimensional (3D) information in a physical security system is a highly useful discriminator. The two-dimensional data from an imaging system fails to provide target distance and the three-dimensional motion vector, which can be used to reduce nuisance alarm rates and increase system effectiveness. However, 3D imaging devices designed primarily for use in physical security systems are uncommon. This report discusses an architecture favorable to physical security systems: an inexpensive snapshot 3D imaging system utilizing a simple illumination system. The method of acquiring 3D data, tests to understand illumination design, and possible software modifications to maximize information gathering capability are discussed.

  15. 3D image-guided robotic needle positioning system for small animal interventions.

    Science.gov (United States)

    Bax, Jeffrey S; Waring, Christopher S R; Sherebrin, Shi; Stapleton, Shawn; Hudson, Thomas J; Jaffray, David A; Lacefield, James C; Fenster, Aaron

    2013-01-01

    This paper presents the design of a micro-CT-guided small-animal robotic needle positioning system. In order to simplify the robotic design and maintain a small targeting error, a novel implementation of the remote center of motion is used in the system. The system has been developed with the objective of achieving a mean targeting error of <200 μm while maintaining a high degree of user friendliness. The robot is compact enough to operate within a 25 cm diameter micro-CT bore. Small animals can be imaged and an intervention performed without the need to transport the animal from one workspace to another. Not requiring transport of the animal reduces opportunities for targets to shift from their localized position in the image and simplifies the workflow of interventions. An improved method of needle calibration is presented that better characterizes the calibration using the position of the needle tip in photographs rather than the needle axis. A calibration fixture was also introduced, which dramatically reduces the time requirements of calibration while maintaining calibration accuracy. Two registration modes have been developed to register the robot coordinate system with the coordinate system of the micro-CT scanner. The two registration modes offer a balance between the time required to complete a registration and the overall registration accuracy. The development of slow, high-accuracy and fast, low-accuracy registration modes provides users with a degree of flexibility in selecting a registration mode best suited for their application. The target registration error (TRE) of the higher accuracy primary registration was TRE(primary) = 31 ± 12 μm. The error in the lower accuracy combined registration was TRE(combined) = 139 ± 63 μm. Both registration modes are therefore suitable for small-animal needle interventions. The targeting accuracy of the robotic system was characterized using targeting experiments in tissue-mimicking gelatin phantoms. The results

  16. Calculation of the Slip System Activity in Deformed Zinc Single Crystals Using Digital 3-D Image Correlation Data

    Energy Technology Data Exchange (ETDEWEB)

    Florando, J; Rhee, M; Arsenlis, A; LeBlanc, M; Lassila, D

    2006-02-21

    A 3-D image correlation system, which measures the full-field displacements in three dimensions, has been used to experimentally determine the full deformation gradient matrix for two zinc single crystals. Based on the image correlation data, the slip system activity for the two crystals has been calculated. The results of the calculation show that for one crystal only the primary slip system is active, which is consistent with traditional theory. The other crystal, however, shows appreciable deformation on slip systems other than the primary one. An analysis has been conducted which confirms the experimental observation that these other slip systems deform in such a manner that the net result is slip that is approximately one third the magnitude of, and directly orthogonal to, that of the primary system.

  17. Study on image acquisition in 3-D sensor system of arc welding pool surface shape using grating projection

    Science.gov (United States)

    Ai, Xiaopu; Liu, Nansheng; Wei, Yiqing; Hu, Xian; Wei, Sheng; Liu, Xiaorui

    2009-11-01

    Detecting 3-D information on the welding pool surface shape is difficult due to arc light interference, high-temperature radiation, and specular reflection from the pool surface. The characteristics of mirror-like reflection on the liquid pool surface are studied, and the way to obtain a clear, information-rich image of the pool area under strong arc light is discussed. Because the strong arc light above the pool seriously affects the imaging of the relatively weaker laser stripes, a suitable shooting angle and shooting distance must be chosen to achieve a good image. Taking all these factors into account, the theoretically optimal combination of sensing structure parameters is deduced. Based on this work, a vision system for detecting arc welding pool surface topography was set up in our laboratory, and actual measurements were carried out to obtain clearer images of the deformed laser stripes in the welding pool. This provides strong support for the three-dimensional reconstruction.

  18. [EOS imaging acquisition system : 2D/3D diagnostics of the skeleton].

    Science.gov (United States)

    Tarhan, T; Froemel, D; Meurer, A

    2015-12-01

    The application spectrum of the EOS imaging acquisition system is versatile. It is especially useful in the diagnostics and planning of corrective surgical procedures in complex orthopedic cases. The application is indicated when assessing deformities and malpositions of the spine, pelvis and lower extremities. It can also be used in the assessment and planning of hip and knee arthroplasty. For the first time physicians have the opportunity to conduct examinations of the whole body under weight-bearing conditions in order to anticipate the effects of a planned surgical procedure on the skeletal system as a whole and therefore on the posture of the patient. Compared to conventional radiographic examination techniques, such as x-ray or computed tomography, the patient is exposed to much less radiation. Therefore, the pediatric application of this technique can be described as reasonable.

  19. Evaluating the bending response of two osseointegrated transfemoral implant systems using 3D digital image correlation.

    Science.gov (United States)

    Thompson, Melanie L; Backman, David; Branemark, Rickard; Mechefske, Chris K

    2011-05-01

    Osseointegrated transfemoral implants have been introduced as a prosthetic solution for above knee amputees. They have shown great promise, providing an alternative for individuals who could not be accommodated by conventional, socket-based prostheses; however, the occurrence of device failures is of concern. In an effort to improve the strength and longevity of the device, a new design has been proposed. This study investigates the mechanical behavior of the new taper-based assembly in comparison to the current hex-based connection for osseointegrated transfemoral implant systems. This was done to better understand the behavior of components under loading, in order to optimize the assembly specifications and improve the useful life of the system. Digital image correlation was used to measure surface strains on two assemblies during static loading in bending. This provided a means to measure deformation over the entire sample and identify critical locations as the assembly was subjected to a series of loading conditions. It provided a means to determine the effects of tightening specifications and connection geometry on the material response and mechanical behavior of the assemblies. Both osseointegrated assemblies exhibited improved strength and mechanical performance when tightened to a level beyond the current specified tightening torque of 12 N m. This was shown by decreased strain concentration values and improved distribution of tensile strain. Increased tightening torque provides an improved connection between components regardless of design, leading to increased torque retention, decreased peak tensile strain values, and a more gradual, primarily compressive distribution of strains throughout the assembly.

  20. Mobile Biplane X-Ray Imaging System for Measuring 3D Dynamic Joint Motion During Overground Gait.

    Science.gov (United States)

    Guan, Shanyuanye; Gray, Hans A; Keynejad, Farzad; Pandy, Marcus G

    2016-01-01

    Most X-ray fluoroscopy systems are stationary and impose restrictions on the measurement of dynamic joint motion; for example, knee-joint kinematics during gait is usually measured with the subject ambulating on a treadmill. We developed a computer-controlled, mobile, biplane, X-ray fluoroscopy system to track human body movement for high-speed imaging of 3D joint motion during overground gait. A robotic gantry mechanism translates the two X-ray units alongside the subject, tracking and imaging the joint of interest as the subject moves. The main aim of the present study was to determine the accuracy with which the mobile imaging system measures 3D knee-joint kinematics during walking. In vitro experiments were performed to measure the relative positions of the tibia and femur in an intact human cadaver knee and of the tibial and femoral components of a total knee arthroplasty (TKA) implant during simulated overground gait. Accuracy was determined by calculating mean, standard deviation and root-mean-squared errors from differences between kinematic measurements obtained using volumetric models of the bones and TKA components and reference measurements obtained from metal beads embedded in the bones. Measurement accuracy was enhanced by the ability to track and image the joint concurrently. Maximum root-mean-squared errors were 0.33 mm and 0.65° for translations and rotations of the TKA knee and 0.78 mm and 0.77° for translations and rotations of the intact knee, which are comparable to results reported for treadmill walking using stationary biplane systems. System capability for in vivo joint motion measurement was also demonstrated for overground gait.

  1. NeuroTerrain – a client-server system for browsing 3D biomedical image data sets

    Directory of Open Access Journals (Sweden)

    Nissanov Jonathan

    2007-02-01

    Full Text Available Abstract Background Three dimensional biomedical image sets are becoming ubiquitous, along with the canonical atlases providing the necessary spatial context for analysis. To make full use of these 3D image sets, one must be able to present views for 2D display, either surface renderings or 2D cross-sections through the data. Typical display software is limited to presentations along one of the three orthogonal anatomical axes (coronal, horizontal, or sagittal. However, data sets precisely oriented along the major axes are rare. To make fullest use of these datasets, one must reasonably match the atlas' orientation; this involves resampling the atlas in planes matched to the data set. Traditionally, this requires the atlas and browser reside on the user's desktop; unfortunately, in addition to being monolithic programs, these tools often require substantial local resources. In this article, we describe a network-capable, client-server framework to slice and visualize 3D atlases at off-axis angles, along with an open client architecture and development kit to support integration into complex data analysis environments. Results Here we describe the basic architecture of a client-server 3D visualization system, consisting of a thin Java client built on a development kit, and a computationally robust, high-performance server written in ANSI C++. The Java client components (NetOStat support arbitrary-angle viewing and run on readily available desktop computers running Mac OS X, Windows XP, or Linux as a downloadable Java Application. Using the NeuroTerrain Software Development Kit (NT-SDK, sophisticated atlas browsing can be added to any Java-compatible application requiring as little as 50 lines of Java glue code, thus making it eminently re-useable and much more accessible to programmers building more complex, biomedical data analysis tools. The NT-SDK separates the interactive GUI components from the server control and monitoring, so as to support

  2. 3D Reconstruction of NMR Images

    Directory of Open Access Journals (Sweden)

    Peter Izak

    2007-01-01

    Full Text Available This paper introduces an experiment in 3D reconstruction of NMR images scanned from a magnetic resonance device. Methods that can be used for 3D reconstruction of magnetic resonance images in biomedical applications are described. The main idea is based on the marching cubes algorithm. For this task, the Vision Assistant module, which is a part of LabVIEW, was chosen.
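
    As an alternative illustration in Python (the paper itself used LabVIEW's Vision Assistant), the sketch below extracts an isosurface mesh from a synthetic volume with the marching cubes implementation in scikit-image; the volume, grid size and iso-level are assumptions, not the scanned NMR data.

        # Marching cubes on a synthetic volume standing in for a stack of MR slices.
        import numpy as np
        from skimage import measure

        # Synthetic "NMR volume": a bright sphere in a 64^3 grid.
        z, y, x = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
        volume = (np.sqrt(x**2 + y**2 + z**2) < 0.5).astype(float)

        # Marching cubes turns the voxel data into a triangle mesh at the chosen iso-level.
        verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
        print(f"mesh with {len(verts)} vertices and {len(faces)} triangles")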

  3. Clinical outcomes following spinal fusion using an intraoperative computed tomographic 3D imaging system.

    Science.gov (United States)

    Xiao, Roy; Miller, Jacob A; Sabharwal, Navin C; Lubelski, Daniel; Alentado, Vincent J; Healy, Andrew T; Mroz, Thomas E; Benzel, Edward C

    2017-03-03

    OBJECTIVE Improvements in imaging technology have steadily advanced surgical approaches. Within the field of spine surgery, assistance from the O-arm Multidimensional Surgical Imaging System has been established to yield superior accuracy of pedicle screw insertion compared with freehand and fluoroscopic approaches. Despite this evidence, no studies have investigated the clinical relevance associated with increased accuracy. Accordingly, the objective of this study was to investigate the clinical outcomes following thoracolumbar spinal fusion associated with O-arm-assisted navigation. The authors hypothesized that increased accuracy achieved with O-arm-assisted navigation decreases the rate of reoperation secondary to reduced hardware failure and screw misplacement. METHODS A consecutive retrospective review of all patients who underwent open thoracolumbar spinal fusion at a single tertiary-care institution between December 2012 and December 2014 was conducted. Outcomes assessed included operative time, length of hospital stay, and rates of readmission and reoperation. Mixed-effects Cox proportional hazards modeling, with surgeon as a random effect, was used to investigate the association between O-arm-assisted navigation and postoperative outcomes. RESULTS Among 1208 procedures, 614 were performed with O-arm-assisted navigation, 356 using freehand techniques, and 238 using fluoroscopic guidance. The most common indication for surgery was spondylolisthesis (56.2%), and most patients underwent a posterolateral fusion only (59.4%). Although O-arm procedures involved more vertebral levels compared with the combined freehand/fluoroscopy cohort (4.79 vs 4.26 vertebral levels; p fusion only (HR 0.39; p fusion (HR 0.22; p = 0.03), but not posterior/transforaminal lumbar interbody fusion. CONCLUSIONS To the authors' knowledge, the present study is the first to investigate clinical outcomes associated with O-arm-assisted navigation following thoracolumbar spinal fusion. O

  4. High-quality 3-D coronary artery imaging on an interventional C-arm x-ray system

    Energy Technology Data Exchange (ETDEWEB)

    Hansis, Eberhard; Carroll, John D.; Schaefer, Dirk; Doessel, Olaf; Grass, Michael [Philips Technologie GmbH Forschungslaboratorien, Roentgenstrasse 24-26, 22335 Hamburg (Germany); Department of Medicine, Division of Cardiology, Health Sciences Center, University of Colorado, Denver, Colorado 80262 (United States); Philips Technologie GmbH Forschungslaboratorien, Roentgenstrasse 24-26, 22335 Hamburg (Germany); Institute of Biomedical Engineering, University of Karlsruhe, Kaiserstr. 12, 76131 Karlsruhe (Germany); Philips Technologie GmbH Forschungslaboratorien, Roentgenstrasse 24-26, 22335 Hamburg (Germany)

    2010-04-15

    Purpose: Three-dimensional (3-D) reconstruction of the coronary arteries during a cardiac catheter-based intervention can be performed from a C-arm based rotational x-ray angiography sequence. It can support the diagnosis of coronary artery disease, treatment planning, and intervention guidance. 3-D reconstruction also enables quantitative vessel analysis, including vessel dynamics from a time-series of reconstructions. Methods: The strong angular undersampling and motion effects present in gated cardiac reconstruction necessitate the development of special reconstruction methods. This contribution presents a fully automatic method for creating high-quality coronary artery reconstructions. It employs a sparseness-prior based iterative reconstruction technique in combination with projection-based motion compensation. Results: The method is tested on a dynamic software phantom, assessing reconstruction accuracy with respect to vessel radii and attenuation coefficients. Reconstructions from clinical cases are presented, displaying high contrast, sharpness, and level of detail. Conclusions: The presented method enables high-quality 3-D coronary artery imaging on an interventional C-arm system.

  5. A 3-D ultrasound imaging robotic system to detect and quantify lower limb arterial stenoses: in vivo feasibility.

    Science.gov (United States)

    Janvier, Marie-Ange; Merouche, Samir; Allard, Louise; Soulez, Gilles; Cloutier, Guy

    2014-01-01

    The degree of stenosis is the most common criterion used to assess the severity of lower limb peripheral arterial disease. Two-dimensional ultrasound (US) imaging is the first-line diagnostic method for investigating lesions, but it cannot render a 3-D map of the entire lower limb vascular tree required for therapy planning. We propose a prototype 3-D US imaging robotic system that can potentially reconstruct arteries from the iliac in the lower abdomen down to the popliteal behind the knee. A realistic multi-modal vascular phantom was first conceptualized to evaluate the system's performance. Geometric accuracies were assessed in surface reconstruction and cross-sectional area in comparison to computed tomography angiography (CTA). A mean surface map error of 0.55 mm was recorded for 3-D US vessel representations, and cross-sectional lumen areas were congruent with CTA geometry. In the phantom study, stenotic lesions were properly localized and severe stenoses up to 98.3% were evaluated with -3.6 to 11.8% errors. The feasibility of the in vivo system in reconstructing the normal femoral artery segment of a volunteer and detecting stenoses on a femoral segment of a patient was also investigated and compared with that of CTA. Together, these results encourage future developments to increase the robot's potential to adequately represent lower limb vessels and clinically evaluate stenotic lesions for therapy planning and recurrent non-invasive and non-ionizing follow-up examinations. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  6. Multiplane 3D superresolution optical fluctuation imaging

    CERN Document Server

    Geissbuehler, Stefan; Godinat, Aurélien; Bocchio, Noelia L; Dubikovskaya, Elena A; Lasser, Theo; Leutenegger, Marcel

    2013-01-01

    By switching fluorophores on and off in either a deterministic or a stochastic manner, superresolution microscopy has enabled the imaging of biological structures at resolutions well beyond the diffraction limit. Superresolution optical fluctuation imaging (SOFI) provides an elegant way of overcoming the diffraction limit in all three spatial dimensions by computing higher-order cumulants of image sequences of blinking fluorophores acquired with a conventional widefield microscope. So far, three-dimensional (3D) SOFI has only been demonstrated by sequential imaging of multiple depth positions. Here we introduce a versatile imaging scheme which allows for the simultaneous acquisition of multiple focal planes. Using 3D cross-cumulants, we show that the depth sampling can be increased. Consequently, the simultaneous acquisition of multiple focal planes reduces the acquisition time and hence the photo-bleaching of fluorescent markers. We demonstrate multiplane 3D SOFI by imaging the mitochondria network in fixed ...
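
    The lowest-order SOFI computation reduces to a per-pixel temporal cumulant: the second-order auto-cumulant is simply the variance of each pixel's intensity trace, which sharpens the effective PSF because uncorrelated emitters add in quadrature. The sketch below demonstrates this on a synthetic blinking-emitter sequence; the emitter positions, blinking statistics and noise level are assumptions, not microscope data.

        # Second-order SOFI (per-pixel temporal variance) versus the widefield average.
        import numpy as np

        rng = np.random.default_rng(5)
        n_frames, h, w = 500, 64, 64
        yy, xx = np.mgrid[0:h, 0:w]

        def psf(cy, cx, sigma=3.0):
            return np.exp(-((yy - cy)**2 + (xx - cx)**2) / (2 * sigma**2))

        # Two nearby blinking emitters with independent on/off time traces.
        traces = rng.random((2, n_frames)) < 0.3
        frames = (traces[0, :, None, None] * psf(32, 28)
                  + traces[1, :, None, None] * psf(32, 36)
                  + 0.01 * rng.normal(size=(n_frames, h, w)))

        sofi2 = np.var(frames, axis=0)        # second-order auto-cumulant per pixel
        widefield = frames.mean(axis=0)       # conventional average image for comparison
        print("peak-to-midpoint contrast, widefield:", widefield[32, 28] / widefield[32, 32])
        print("peak-to-midpoint contrast, SOFI2:    ", sofi2[32, 28] / sofi2[32, 32])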

  7. Progress in 3D imaging and display by integral imaging

    Science.gov (United States)

    Martinez-Cuenca, R.; Saavedra, G.; Martinez-Corral, M.; Pons, A.; Javidi, B.

    2009-05-01

    Three-dimensionality is currently considered an important added value in imaging devices, and therefore the search for an optimum 3D imaging and display technique is a hot topic attracting important research efforts. As their main added value, 3D monitors should provide observers with different perspectives of a 3D scene simply by varying the head position. Three-dimensional imaging techniques have the potential to establish a future mass market in the fields of entertainment and communications. Integral imaging (InI), which can capture true 3D color images, has been seen as the right technology for 3D viewing by audiences of more than one person. Due to its advanced degree of development, InI technology could be ready for commercialization in the coming years. This development is the result of a strong research effort performed over the past few years by many groups. Since integral imaging is still an emerging technology, the first aim of the "3D Imaging and Display Laboratory" at the University of Valencia has been to carry out a thorough study of the principles that govern its operation. It is remarkable that some of these principles have been recognized and characterized by our group. Other contributions of our research have addressed some of the classical limitations of InI systems, such as the limited depth of field (in pickup and in display), the poor axial and lateral resolution, the pseudoscopic-to-orthoscopic conversion, the production of 3D images with continuous relief, and the limited range of viewing angles of InI monitors.

  8. Stereoscopic uncooled thermal imaging with autostereoscopic 3D flat-screen display in military driving enhancement systems

    Science.gov (United States)

    Haan, H.; Münzberg, M.; Schwarzkopf, U.; de la Barré, R.; Jurk, S.; Duckstein, B.

    2012-06-01

    Thermal cameras are widely used in driver vision enhancement systems. However, in pathless terrain, driving becomes challenging without stereoscopic perception. Stereoscopic imaging has been a well-known technique for a long time, with well-understood physical and physiological parameters. Recently, a commercial hype has been observed, especially in display techniques. The commercial market is already flooded with systems based on goggle-aided 3D viewing techniques. However, their use is limited for military applications, since goggles are not accepted by military users for several reasons. The proposed uncooled thermal imaging stereoscopic camera with a geometrical resolution of 640x480 pixels perfectly fits the autostereoscopic display with 1280x768 pixels. An eye tracker detects the position of the observer's eyes and computes the pixel positions for the left and the right eye. The pixels of the flat panel are located directly behind a slanted lenticular screen, and the computed thermal images are projected into the left and the right eye of the observer. This allows a stereoscopic perception of the thermal image without any viewing aids. The complete system, including camera and display, is ruggedized. The paper discusses the interface and performance requirements for the thermal imager as well as for the display.

  9. Dynamic contrast-enhanced 3D photoacoustic imaging

    Science.gov (United States)

    Wong, Philip; Kosik, Ivan; Carson, Jeffrey J. L.

    2013-03-01

    Photoacoustic imaging (PAI) is a hybrid imaging modality that integrates the strengths from both optical imaging and acoustic imaging while simultaneously overcoming many of their respective weaknesses. In previous work, we reported on a real-time 3D PAI system comprised of a 32-element hemispherical array of transducers. Using the system, we demonstrated the ability to capture photoacoustic data, reconstruct a 3D photoacoustic image, and display select slices of the 3D image every 1.4 s, where each 3D image resulted from a single laser pulse. The present study aimed to exploit the rapid imaging speed of an upgraded 3D PAI system by evaluating its ability to perform dynamic contrast-enhanced imaging. The contrast dynamics can provide rich datasets that contain insight into perfusion, pharmacokinetics and physiology. We captured a series of 3D PA images of a flow phantom before and during injection of piglet and rabbit blood. Principal component analysis was utilized to classify the data according to its spatiotemporal information. The results suggested that this technique can be used to separate a sequence of 3D PA images into a series of images representative of main features according to spatiotemporal flow dynamics.
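
    The abstract above mentions classifying a sequence of 3D photoacoustic images by principal component analysis of their spatiotemporal content. The sketch below is a minimal, hypothetical version of that step: each 3D frame is flattened to a vector, the temporal mean is removed, and an SVD yields temporal scores and spatial component maps. The toy "flow phantom" is invented for illustration and is not the authors' data.

      import numpy as np

      def spatiotemporal_pca(frames, n_components=3):
          """frames: (T, Z, Y, X) sequence of reconstructed 3D PA images.
          Returns temporal scores (T, n) and spatial component maps (n, Z, Y, X)."""
          T = frames.shape[0]
          X = frames.reshape(T, -1)
          X = X - X.mean(axis=0, keepdims=True)            # remove static background
          U, S, Vt = np.linalg.svd(X, full_matrices=False)
          scores = U[:, :n_components] * S[:n_components]  # temporal dynamics
          maps = Vt[:n_components].reshape((n_components,) + frames.shape[1:])
          return scores, maps

      # toy flow phantom: a contrast bolus washing through a tube over 40 frames
      t = np.linspace(0, 1, 40)
      vol = np.zeros((40, 16, 32, 32))
      for i, ti in enumerate(t):
          front = int(ti * 32)
          vol[i, 8, 16, :front] = 1.0          # contrast front advancing along x
      scores, maps = spatiotemporal_pca(vol, n_components=2)
      print(scores.shape, maps.shape)          # (40, 2) (2, 16, 32, 32)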

  10. Registration of 2D C-Arm and 3D CT Images for a C-Arm Image-Assisted Navigation System for Spinal Surgery

    Directory of Open Access Journals (Sweden)

    Chih-Ju Chang

    2015-01-01

    Full Text Available C-Arm image-assisted surgical navigation systems have been broadly applied to spinal surgery. However, accurate path planning on the C-Arm AP-view image is difficult. This research studies 2D-3D image registration methods to obtain the optimum transformation matrix between the C-Arm and CT image frames. Through the transformation matrix, the surgical path planned on preoperative CT images can be transformed and displayed on the C-Arm images for surgical guidance. The positions of surgical instruments are also displayed on both the CT and C-Arm images in real time. Five similarity measures for 2D-3D image registration, including Normalized Cross-Correlation, Gradient Correlation, Pattern Intensity, Gradient Difference Correlation, and Mutual Information, combined with three optimization methods, including Powell's method, the downhill simplex algorithm, and a genetic algorithm, are evaluated for their performance in convergence range, efficiency, and accuracy. Experimental results show that the combination of the Normalized Cross-Correlation measure with the downhill simplex algorithm obtains the maximum correlation and similarity between the C-Arm and digitally reconstructed radiograph (DRR) images. Spine sawbones are used in the experiment to evaluate the 2D-3D image registration accuracy. The average error in displacement is 0.22 mm, the success rate is approximately 90%, and the average registration time is about 16 seconds.
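
    One of the similarity/optimizer pairings evaluated above (Normalized Cross-Correlation with the downhill simplex algorithm) can be sketched as follows. This is a deliberately simplified stand-in: the DRR here is just a parallel-ray sum of a toy volume rather than a true perspective DRR of a CT dataset, and only an in-plane rotation and a 2D shift are optimized.

      import numpy as np
      from scipy.optimize import minimize
      from scipy.ndimage import rotate, shift

      def drr(volume, angle_deg, offset_xy):
          """Toy DRR: rotate the volume about z and sum along y (parallel rays)."""
          rot = rotate(volume, angle_deg, axes=(1, 2), reshape=False, order=1)
          proj = rot.sum(axis=1)
          return shift(proj, offset_xy, order=1)

      def ncc(a, b):
          a = a - a.mean(); b = b - b.mean()
          return (a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

      def register(volume, carm_image, x0=(0.0, 0.0, 0.0)):
          """Find (angle, tx, ty) maximizing NCC via downhill simplex (Nelder-Mead)."""
          cost = lambda p: -ncc(drr(volume, p[0], p[1:]), carm_image)
          return minimize(cost, x0, method='Nelder-Mead')

      # synthetic test: try to recover a known pose
      vol = np.zeros((32, 32, 32)); vol[10:22, 12:20, 8:24] = 1.0
      target = drr(vol, 7.0, (2.0, -3.0))
      res = register(vol, target)
      print(res.x)   # should approach [7, 2, -3]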

  11. Structured light field 3D imaging.

    Science.gov (United States)

    Cai, Zewei; Liu, Xiaoli; Peng, Xiang; Yin, Yongkai; Li, Ameng; Wu, Jiachen; Gao, Bruce Z

    2016-09-05

    In this paper, we propose a method based on light field imaging under structured illumination to deal with high-dynamic-range 3D imaging. Fringe patterns are projected onto a scene and modulated by the scene depth; a structured light field is then detected using light field recording devices. The structured light field contains information about ray direction and phase-encoded depth, from which the scene depth can be estimated from different directions. This multidirectional depth estimation makes high-dynamic-range 3D imaging effective. We analyzed and derived the phase-depth mapping in the structured light field and then proposed a flexible ray-based calibration approach to determine the independent mapping coefficients for each ray. Experimental results demonstrated the validity of the proposed method for high-quality 3D imaging of surfaces with both high and low reflectivity.
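
    The abstract refers to a ray-based calibration that determines independent phase-to-depth mapping coefficients for each ray. A minimal sketch of that idea, assuming a simple polynomial depth(phase) model per ray fitted from calibration planes at known depths (the paper's actual mapping may differ), could look like this:

      import numpy as np

      def fit_phase_depth(phases, depths, order=1):
          """Fit an independent polynomial depth(phase) mapping for every ray.
          phases: (K, R) unwrapped phase for K calibration planes and R rays.
          depths: (K,) known plane depths. Returns (order+1, R) coefficients."""
          K, R = phases.shape
          coeffs = np.empty((order + 1, R))
          for r in range(R):
              coeffs[:, r] = np.polyfit(phases[:, r], depths, order)
          return coeffs

      def phase_to_depth(phase, coeffs):
          """Apply the per-ray mapping to a measured phase vector (R,)."""
          return np.array([np.polyval(coeffs[:, r], phase[r]) for r in range(phase.shape[0])])

      # toy calibration: 5 planes, 100 rays, slightly different gain per ray
      rng = np.random.default_rng(1)
      depths = np.linspace(100.0, 140.0, 5)                 # mm
      gain = 0.05 + 0.005 * rng.random(100)                 # rad per mm, ray-dependent
      phases = depths[:, None] * gain[None, :]
      coeffs = fit_phase_depth(phases, depths, order=1)
      print(phase_to_depth(phases[2], coeffs)[:3])          # ~120 mm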

  12. Full Parallax Integral 3D Display and Image Processing Techniques

    Directory of Open Access Journals (Sweden)

    Byung-Gook Lee

    2015-02-01

    Full Text Available Purpose – Full parallax integral 3D display is one of the promising future displays that provide different perspectives according to the viewing direction. In this paper, the authors review recent integral 3D display and image processing techniques for improving performance characteristics such as viewing resolution and viewing angle. Design/methodology/approach – Firstly, to improve the viewing resolution of 3D images in an integral imaging display with a lenslet array, the authors present a 3D integral imaging display in focused mode using time-multiplexed display. Compared with the original integral imaging in focused mode, the authors use electrical masks and the corresponding elemental image set. In this system, the authors can generate resolution-improved 3D images with n×n pixels from each lenslet by using n×n time multiplexing. Secondly, a new image processing technique for generating elemental images of 3D scenes is presented: with the information provided by the Kinect device, the array of elemental images for an integral imaging display is generated. Findings – In the first work, the authors improved the resolution of 3D images by using the time-multiplexing technique, demonstrated on a 24-inch integral imaging system. The method can be applied in practical applications. Next, the proposed method with the Kinect device gains a competitive advantage over other methods for the capture of integral images of large 3D scenes. The main advantages of fusing the Kinect and integral imaging concepts are the acquisition speed and the small amount of data handled. Originality/Value – In this paper, the authors review their recent methods related to integral 3D display and image processing techniques. Research type – general review.

  13. Advances and considerations in technologies for growing, imaging, and analyzing 3-D root system architecture

    Science.gov (United States)

    The ability of a plant to mine the soil for nutrients and water is determined by how, where, and when roots are arranged in the soil matrix. The capacity of a plant to maintain or improve its yield under limiting conditions, such as nutrient deficiency or drought, is affected by root system architecture ...

  14. 3D passive integral imaging using compressive sensing.

    Science.gov (United States)

    Cho, Myungjin; Mahalanobis, Abhijit; Javidi, Bahram

    2012-11-19

    Passive 3D sensing using integral imaging techniques has been well studied in the literature. It has been shown that a scene can be reconstructed at various depths using several 2D elemental images. This provides the ability to reconstruct objects in the presence of occlusions, and passively estimate their 3D profile. However, high resolution 2D elemental images are required for high quality 3D reconstruction. Compressive Sensing (CS) provides a way to dramatically reduce the amount of data that needs to be collected to form the elemental images, which in turn can reduce the storage and bandwidth requirements. In this paper, we explore the effects of CS in acquisition of the elemental images, and ultimately on passive 3D scene reconstruction and object recognition. Our experiments show that the performance of passive 3D sensing systems remains robust even when elemental images are recovered from very few compressive measurements.
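
    To illustrate the compressive-sensing idea described above, the following sketch recovers a small, DCT-sparse "elemental image" from a reduced set of random linear measurements using ISTA (iterative soft thresholding). This is a generic CS reconstruction under assumed random Gaussian measurements, not the acquisition scheme used by the authors.

      import numpy as np
      from scipy.fft import dctn, idctn

      def ista(y, A, shape, lam=0.05, n_iter=200):
          """Recover a DCT-sparse image x from y = A @ x.ravel() by ISTA.
          A: (M, N) measurement matrix, shape: image shape."""
          L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the data-term gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = x + A.T @ (y - A @ x) / L      # gradient step on the data term
              c = dctn(x.reshape(shape), norm='ortho')
              c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)   # soft threshold in DCT domain
              x = idctn(c, norm='ortho').ravel()
          return x.reshape(shape)

      # toy elemental image (16x16) that is smooth, hence compressible in the DCT basis
      rng = np.random.default_rng(2)
      yy, xx = np.mgrid[0:16, 0:16]
      img = np.cos(np.pi * yy / 16) + 0.5 * np.cos(2 * np.pi * xx / 16)
      A = rng.standard_normal((100, 256)) / np.sqrt(100)    # ~40% random measurements
      y = A @ img.ravel()
      rec = ista(y, A, img.shape)
      print(np.linalg.norm(rec - img) / np.linalg.norm(img))  # relative reconstruction error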

  15. Commissioning of a 3D image-based treatment planning system for high-dose-rate brachytherapy of cervical cancer.

    Science.gov (United States)

    Kim, Yongbok; Modrick, Joseph M; Pennington, Edward C; Kim, Yusung

    2016-03-08

    The objective of this work is to present commissioning procedures to clinically implement a three-dimensional (3D), image-based treatment planning system (TPS) for high-dose-rate (HDR) brachytherapy (BT) of gynecological (GYN) cancer. The physical dimensions of the GYN applicators and their values in the virtual applicator library were within 0.4 mm of their nominal values. Reconstruction uncertainties of the titanium tandem and ovoids (T&O) were less than 0.4 mm in CT phantom studies and on average between 0.8-1.0 mm on MRI when compared with X-rays. In-house software, HDRCalculator, was developed to independently check HDR plan parameters such as active tandem or cylinder probe length, ovoid or cylinder size, source calibration and treatment date, and the difference between the average Point A dose and the prescription dose. Dose-volume histograms were validated using another independent TPS. Comprehensive procedures to commission the volume optimization algorithms and the 3D image-based planning process are presented. For the difference between line and volume optimizations, the average absolute differences as a percentage were 1.4% for total reference air kerma (TRAK) and 1.1% for the Point A dose. Volume optimization consistency tests between versions resulted in average absolute differences of 0.2% for TRAK and 0.9 s (0.2%) for total treatment time. The data revealed that the optimizer should run for at least 1 min in order to avoid dwell-time changes of more than 0.6%. For clinical GYN T&O cases, three different volume optimization techniques (graphical optimization, pure inverse planning, and hybrid inverse optimization) were investigated by comparing them against the conventional Point A technique. End-to-end testing was performed using a T&O phantom to ensure that no errors or inconsistencies occurred from imaging through to planning and delivery. The proposed commissioning procedures provide a clinically safe implementation technique for a 3D image-based TPS for HDR

  16. Light field display and 3D image reconstruction

    Science.gov (United States)

    Iwane, Toru

    2016-06-01

    Light field optics and its applications have become rather popular these days. With light field optics, or the light field thesis, real 3D space can be described on a 2D plane as 4D data, which we call light field data. This process can be divided into two procedures. First, the real 3D scene is optically reduced with an imaging lens. Second, this optically reduced 3D image is encoded into light field data. In the latter procedure, we can say that the 3D information is encoded onto a plane as 2D data by a lens array plate. This transformation is reversible, and the acquired light field data can be decoded again into a 3D image with the arrayed lens plate. "Refocusing" (focusing the image on a chosen point after the picture has been taken), the light-field camera's most popular function, is a kind of sectioning process from the encoded 3D data (light field data) to a 2D image. In this paper I first show our actual light field camera and our 3D display using acquired and computer-simulated light field data, on which a real 3D image is reconstructed. Second, I explain our data processing method, whose arithmetic operations are performed not in the Fourier domain but in the real domain. Our 3D display system is then characterized by a few features: the reconstructed image has finer resolution than the density of the arrayed lenses, and it is not necessary to align the lens array plate with the flat display on which the light field data is shown.
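
    The "refocusing" operation mentioned above is commonly implemented in the spatial domain as shift-and-add over the angular samples of the light field, which matches the abstract's remark that the arithmetic is performed in the real rather than the Fourier domain. The sketch below applies that generic scheme to a toy light field; the parameterization (u, v, y, x) and the disparity value are assumptions for illustration.

      import numpy as np
      from scipy.ndimage import shift

      def refocus(lf, alpha):
          """Shift-and-add refocusing of a light field lf[(u, v, y, x)].
          alpha controls the synthetic focal plane; alpha = 0 keeps the original focus."""
          U, V, Y, X = lf.shape
          uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
          out = np.zeros((Y, X))
          for u in range(U):
              for v in range(V):
                  dy, dx = alpha * (u - uc), alpha * (v - vc)
                  out += shift(lf[u, v], (dy, dx), order=1, mode='nearest')
          return out / (U * V)

      # toy light field: a point whose disparity is 1 px per angular step
      U = V = 5; Y = X = 33
      lf = np.zeros((U, V, Y, X))
      for u in range(U):
          for v in range(V):
              lf[u, v, 16 + (u - 2), 16 + (v - 2)] = 1.0
      sharp = refocus(lf, alpha=-1.0)     # shifts cancel the disparity -> point in focus
      blurred = refocus(lf, alpha=0.0)
      print(sharp.max(), blurred.max())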

  17. A new efficient 2D combined with 3D CAD system for solitary pulmonary nodule detection in CT images

    Directory of Open Access Journals (Sweden)

    Xing Li

    2011-06-01

    Full Text Available Lung cancer has become one of the leading causes of death in the world. Clear evidence shows that early discovery, early diagnosis and early treatment of lung cancer can significantly increase the chance of survival for patients. Lung computer-aided diagnosis (CAD) is a potential method to accomplish a range of quantitative tasks such as early cancer and disease detection. Many CAD methods, including 2D and 3D approaches, have been proposed for solitary pulmonary nodules (SPNs). However, the detection and diagnosis of SPNs remain challenging in many clinical circumstances. One goal of this work is to develop a two-stage approach that combines the simplicity of 2D and the accuracy of 3D methods. The experimental results show statistically significant differences between the diagnostic accuracy of the 2D and 3D methods. The results also show that, with a very minor drop in diagnostic performance, the two-stage approach can significantly reduce the number of nodules that need to be processed by the 3D method, streamlining the computational demand. Finally, all malignant nodules were detected and a very low false-positive detection rate was achieved. The automated extraction of the lung in CT images is the most crucial step in a CAD system. In this paper we describe a method, consisting of appropriate techniques, for the automated identification of the pulmonary volume. The performance is evaluated as a fully automated computerized method for the detection of lung nodules in computed tomography (CT) scans, aimed at identifying lung cancers that may be missed during visual interpretation.

  18. 3D ultrasound imaging for prosthesis fabrication and diagnostic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Morimoto, A.K.; Bow, W.J.; Strong, D.S. [and others

    1995-06-01

    The fabrication of a prosthetic socket for a below-the-knee amputee requires knowledge of the underlying bone structure in order to provide pressure relief for sensitive areas and support for load bearing areas. The goal is to enable the residual limb to bear pressure with greater ease and utility. Conventional methods of prosthesis fabrication are based on limited knowledge about the patient's underlying bone structure. A 3D ultrasound imaging system was developed at Sandia National Laboratories. The imaging system provides information about the location of the bones in the residual limb along with the shape of the skin surface. Computer assisted design (CAD) software can use this data to design prosthetic sockets for amputees. Ultrasound was selected as the imaging modality. A computer model was developed to analyze the effect of the various scanning parameters and to assist in the design of the overall system. The 3D ultrasound imaging system combines off-the-shelf technology for image capturing, custom hardware, and control and image processing software to generate two types of image data -- volumetric and planar. Both volumetric and planar images reveal definition of skin and bone geometry with planar images providing details on muscle fascial planes, muscle/fat interfaces, and blood vessel definition. The 3D ultrasound imaging system was tested on 9 unilateral below-the-knee amputees. Image data was acquired from both the sound limb and the residual limb. The imaging system was operated in both volumetric and planar formats. An x-ray CT (Computed Tomography) scan was performed on each amputee for comparison. Results of the test indicate beneficial use of ultrasound to generate databases for fabrication of prostheses at a lower cost and with better initial fit as compared to manually fabricated prostheses.

  19. Heat Equation to 3D Image Segmentation

    Directory of Open Access Journals (Sweden)

    Nikolay Sirakov

    2006-04-01

    Full Text Available This paper presents a new approach capable of 3D image segmentation and object surface reconstruction. The main advantages of the method are: large capture range; quick segmentation of a 3D scene/image into regions; and reconstruction of multiple 3D objects. The method uses a centripetal force and a penalty function to segment the entire 3D scene/image into regions, each containing a single 3D object. Each region is inscribed in a convex, smooth closed surface, which defines a centripetal force. The surface is then evolved by the geometric heat differential equation in the direction of that force. The penalty function is defined to stop the evolution of those surface patches whose normal vectors have encountered the object's surface. On the basis of the theoretical model, a forward-difference algorithm was developed and coded in Mathematica. The stability (convergence) condition, truncation error and computational complexity of the algorithm are determined. The obtained results, advantages and disadvantages of the method are discussed at the end of this paper.
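
    As a minimal illustration of the evolution engine named above, the following sketch performs explicit forward-difference steps of the 3D heat equation on a gridded volume (the stability condition for the explicit scheme is dt <= h^2/6 in 3D). It omits the paper's centripetal force and penalty function, so it is only the diffusion core, not the full segmentation method.

      import numpy as np

      def heat_step(u, dt=1.0/8.0, h=1.0):
          """One explicit forward-difference step of u_t = laplacian(u) on a 3D grid.
          Stability of the explicit scheme requires dt <= h**2 / (2 * 3)."""
          lap = (-6.0 * u
                 + np.roll(u, 1, 0) + np.roll(u, -1, 0)
                 + np.roll(u, 1, 1) + np.roll(u, -1, 1)
                 + np.roll(u, 1, 2) + np.roll(u, -1, 2)) / h**2
          return u + dt * lap

      # evolve the indicator of a sphere for a few steps; values smooth toward the mean
      z, y, x = np.mgrid[-16:16, -16:16, -16:16]
      u = (np.sqrt(z**2 + y**2 + x**2) < 10).astype(float)
      for _ in range(20):
          u = heat_step(u)
      print(u.min(), u.max())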

  20. Imaging the Roots of Geothermal Systems: 3-D Inversion of Magnetotelluric Array Data in the Taupo Volcanic Zone, New Zealand

    Science.gov (United States)

    Bertrand, E. A.; Caldwell, G.; Bannister, S. C.; Hill, G.; Bennie, S.

    2013-12-01

    The Taupo Volcanic Zone (TVZ), located in the central North Island of New Zealand, is a rifted arc that contains more than 20 liquid-dominated high-temperature geothermal systems, which together discharge ~4.2 GW of heat at the surface. The shallow (upper ~500 m) extent of these geothermal systems is marked by low-resistivity, mapped by tens-of-thousands of DC resistivity measurements collected throughout the 1970's and 80's. Conceptual models of heat transport through the brittle crust of the TVZ link these low-resistivity anomalies to the tops of vertically ascending plumes of convecting hydrothermal fluid. Recently, data from a 40-site array of broadband seismometers with ~4 km station spacing, and an array of 270 broadband magnetotelluric (MT) measurements with ~2 km station spacing, have been collected in the south-eastern part of the TVZ in an experiment to image the deep structure (or roots) of the geothermal systems in this region. Unlike DC resistivity, these MT measurements are capable of resolving the resistivity structure of the Earth to depths of 10 km or more. 2-D and 3-D models of subsets of these MT data have been used to provide the first-ever images of quasi-vertical low-resistivity zones (at depths of 3-7 km) that connect with the near-surface geothermal fields. These low-resistivity zones are interpreted to represent convection plumes of high-temperature fluids ascending within fractures, which supply heat to the overlying geothermal fields. At the Rotokawa, Ngatamariki and Ohaaki geothermal fields, these plumes extend to a broad layer of low-resistivity, inferred to represent a magmatic, basal heat source located below the seismogenic zone (at ~7-8 km depth) that drives convection in the brittle crust above. Little is known about the mechanisms that transfer heat into the hydrothermal regime. However, at Rotokawa, new 3-D resistivity models image a vertical low-resistivity zone that lies directly beneath the geothermal field. The top of this

  1. New microangiography system development providing improved small vessel imaging, increased contrast-to-noise ratios, and multiview 3D reconstructions

    Science.gov (United States)

    Kuhls, Andrew T.; Patel, Vikas; Ionita, Ciprian; Noël, Peter B.; Walczak, Alan M.; Rangwala, Hussain S.; Hoffmann, Kenneth R.; Rudin, Stephen

    2006-03-01

    A new microangiographic system (MA) integrated into a C-arm gantry has been developed, allowing precise placement of the MA at exactly the same angle as the standard x-ray image intensifier (II) with unchanged source and object position. The MA can also be arbitrarily moved about the object and easily moved into the field of view (FOV) in front of the lower-resolution II when higher-resolution angiographic sequences are needed. The benefits of this new system are illustrated in a neurovascular study, in which a rabbit was injected with contrast media and imaged at varying oblique angles. Digital subtraction angiographic (DSA) images were obtained and compared using both the MA and II detectors for the same projection view. Vessels imaged with the MA appear sharper, with smaller vessels visualized. Visualization of ~100 μm vessels was possible with the MA but not with the II. Further, the MA could better resolve vessel overlap. Contrast-to-noise ratios (CNRs) were calculated for vessels of varying sizes for the MA versus the II and were found to be similar for large vessels, approximately double for medium vessels, and infinitely better for the smallest vessels. In addition, a 3D reconstruction of selected vessel segments was performed, using multiple (three) projections at oblique angles, for each detector. This new MA/II integrated system should lead to improved diagnosis and image guidance of neurovascular interventions by enabling initial guidance with the low-resolution, large-FOV II combined with use of the high-resolution MA during critical parts of diagnostic and interventional procedures.

  2. Feasibility of 3D harmonic contrast imaging

    NARCIS (Netherlands)

    Voormolen, M.M.; Bouakaz, A.; Krenning, B.J.; Lancée, C.; ten Cate, F.; de Jong, N.

    2004-01-01

    Improved endocardial border delineation with the application of contrast agents should allow for less complex and faster tracing algorithms for left ventricular volume analysis. We developed a fast rotating phased array transducer for 3D imaging of the heart with harmonic capabilities making it

  3. Monocular 3D display system for presenting correct depth

    Science.gov (United States)

    Sakamoto, Kunio; Hosomi, Takashi

    2009-10-01

    The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth.

  4. 3D Membrane Imaging and Porosity Visualization

    KAUST Repository

    Sundaramoorthi, Ganesh

    2016-03-03

    Ultrafiltration asymmetric porous membranes were imaged by two microscopy methods, which allow 3D reconstruction: Focused Ion Beam and Serial Block Face Scanning Electron Microscopy. A new algorithm was proposed to evaluate porosity and average pore size in different layers orthogonal and parallel to the membrane surface. The 3D reconstruction additionally enabled the visualization of pore interconnectivity in different parts of the membrane. The method was demonstrated for a block copolymer porous membrane and can be extended to other membranes with applications in ultrafiltration, supports for forward osmosis, etc., offering a complete view of the transport paths in the membrane.

  5. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    Science.gov (United States)

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

    Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro(®) 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  6. Aerial Images from AN Uav System: 3d Modeling and Tree Species Classification in a Park Area

    Science.gov (United States)

    Gini, R.; Passoni, D.; Pinto, L.; Sona, G.

    2012-07-01

    The use of aerial imagery acquired by Unmanned Aerial Vehicles (UAVs) is scheduled within the FoGLIE project (Fruition of Goods Landscape in Interactive Environment): it starts from the need to enhance natural, artistic and cultural heritage, to make it more accessible by employing mobile audiovisual systems for 3D reconstruction, and to improve monitoring procedures by using new media that integrate the fruition phase with the preservation one. The pilot project focuses on a test area, Parco Adda Nord, which encloses various types of goods (small buildings, agricultural fields, and different tree species and bushes). Multispectral high-resolution images were taken by two digital compact cameras: a Pentax Optio A40 for RGB photos and a Sigma DP1 modified to acquire the NIR band. Tests were then performed to analyze the quality of the UAV images for both photogrammetric and photo-interpretation purposes, to validate the vector-sensor system and the image block geometry, and to study the feasibility of tree species classification. Many pre-signalized control points were surveyed by GPS to allow accuracy analysis. Aerial triangulations (ATs) were carried out with the photogrammetric commercial software packages Leica Photogrammetry Suite (LPS) and PhotoModeler, with manual or automatic selection of tie points, to pick out the pros and cons of each package in managing non-conventional aerial imagery as well as the differences in the modeling approach. Further analyses were done on the differences between the EO parameters and the corresponding data coming from the on-board UAV navigation system.

  7. MARVIN : high speed 3D imaging for seedling classification

    NARCIS (Netherlands)

    Koenderink, N.J.J.P.; Wigham, M.L.I.; Golbach, F.B.T.F.; Otten, G.W.; Gerlich, R.J.H.; Zedde, van de H.J.

    2009-01-01

    The next generation of automated sorting machines for seedlings demands 3D models of the plants to be made at high speed and with high accuracy. In our system the 3D plant model is created based on the information of 24 RGB cameras. Our contribution is an image acquisition technique based on

  8. 3D Motion Parameters Determination Based on Binocular Sequence Images

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Exactly capturing the three-dimensional (3D) motion information of an object is an essential and important task in computer vision, and is also one of the most difficult problems. In this paper, a binocular vision system and a method for determining the 3D motion parameters of an object from binocular image sequences are introduced. The main steps include camera calibration, the matching of motion and stereo images, 3D feature point correspondence, and resolving the motion parameters. Finally, experimental results are presented for acquiring, with the described method, the motion parameters of objects moving with uniform velocity and with uniform acceleration along a straight line, based on real binocular image sequences.
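
    The step of resolving 3D feature points from matched binocular correspondences is commonly done by linear (DLT) triangulation once the cameras are calibrated. The sketch below shows that generic step with two hypothetical projection matrices; it is not the specific formulation used by the authors.

      import numpy as np

      def triangulate(P1, P2, x1, x2):
          """Linear (DLT) triangulation of a 3D point from two calibrated views.
          P1, P2: 3x4 projection matrices; x1, x2: matching pixel coordinates (u, v)."""
          A = np.vstack([
              x1[0] * P1[2] - P1[0],
              x1[1] * P1[2] - P1[1],
              x2[0] * P2[2] - P2[0],
              x2[1] * P2[2] - P2[1],
          ])
          _, _, Vt = np.linalg.svd(A)
          X = Vt[-1]
          return X[:3] / X[3]

      # two toy cameras: identical intrinsics, second one with a 20 cm baseline along x
      K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
      P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0], [0]])])
      Xw = np.array([0.1, -0.05, 2.0, 1.0])                 # ground-truth point (homogeneous)
      x1 = (P1 @ Xw); x1 = x1[:2] / x1[2]
      x2 = (P2 @ Xw); x2 = x2[:2] / x2[2]
      print(triangulate(P1, P2, x1, x2))    # ~ [0.1, -0.05, 2.0]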

  9. 3D-image theater system using TLP770J LCD data projector; Ekisho data projector wo mochiita rittai eizo theater system

    Energy Technology Data Exchange (ETDEWEB)

    Kawasato, H. [Toshiba Corp., Tokyo (Japan)

    2000-02-01

    In today's multimedia era, visual systems are widely used not only for two-dimensional images but also for the depiction of virtual reality and for simulated three-dimensional images. At the same time, the projection technology used in large-screen projectors is shifting from the cathode ray tube (CRT) to the liquid crystal display (LCD). Toshiba has developed a simplified 3D-image theater system using the TLP770J LCD data projector, which offers easy maintenance and lower costs. (author)

  10. An interactive multiview 3D display system

    Science.gov (United States)

    Zhang, Zhaoxing; Geng, Zheng; Zhang, Mei; Dong, Hui

    2013-03-01

    Progress in 3D display systems and user interaction technologies will enable more effective visualization of 3D information. These systems yield a realistic representation of 3D objects and simplify our understanding of the complexity of 3D objects and of the spatial relationships among them. In this paper, we describe an autostereoscopic multiview 3D display system with the capability of real-time user interaction. The design principle of this autostereoscopic multiview 3D display system is presented, together with the details of its hardware/software architecture. A prototype was built and tested based upon multiple projectors and a horizontal optically anisotropic display structure. Experimental results illustrate the effectiveness of this novel 3D display and user interaction system.

  11. 3D quantitative phase imaging of neural networks using WDT

    Science.gov (United States)

    Kim, Taewoo; Liu, S. C.; Iyer, Raj; Gillette, Martha U.; Popescu, Gabriel

    2015-03-01

    White-light diffraction tomography (WDT) is a recently developed 3D imaging technique based on a quantitative phase imaging system called spatial light interference microscopy (SLIM). The technique has achieved a sub-micron resolution in all three directions with high sensitivity granted by the low-coherence of a white-light source. Demonstrations of the technique on single cell imaging have been presented previously; however, imaging on any larger sample, including a cluster of cells, has not been demonstrated using the technique. Neurons in an animal body form a highly complex and spatially organized 3D structure, which can be characterized by neuronal networks or circuits. Currently, the most common method of studying the 3D structure of neuron networks is by using a confocal fluorescence microscope, which requires fluorescence tagging with either transient membrane dyes or after fixation of the cells. Therefore, studies on neurons are often limited to samples that are chemically treated and/or dead. WDT presents a solution for imaging live neuron networks with a high spatial and temporal resolution, because it is a 3D imaging method that is label-free and non-invasive. Using this method, a mouse or rat hippocampal neuron culture and a mouse dorsal root ganglion (DRG) neuron culture have been imaged in order to see the extension of processes between the cells in 3D. Furthermore, the tomogram is compared with a confocal fluorescence image in order to investigate the 3D structure at synapses.

  12. An ultrasound tomography system with polyvinyl alcohol (PVA) moldings for coupling: in vivo results for 3-D pulse-echo imaging of the female breast.

    Science.gov (United States)

    Koch, Andreas; Stiller, Florian; Lerch, Reinhard; Ermert, Helmut

    2015-02-01

    Full-angle spatial compounding (FASC) is a concept for pulse-echo imaging using an ultrasound tomography (UST) system. With FASC, resolution is increased and speckle is suppressed by averaging pulse-echo data acquired over 360°. In vivo investigations have already shown great potential for 2-D FASC in the female breast as well as for finger-joint imaging. However, providing a small number of images of parallel cross-sectional planes with enhanced image quality is not sufficient for diagnosis; therefore, volume (3-D) data is needed. For this purpose, we further developed our UST add-on system to automatically rotate a motorized array (3-D probe) around the object of investigation. Full integration of external motor and ultrasound electronics control in a custom-made program allows acquisition of 3-D pulse-echo RF datasets within 10 min. In the case of breast cancer imaging, this concept also enables imaging of near-thorax tissue regions, which cannot be achieved by 2-D FASC. Furthermore, moldings made of polyvinyl alcohol hydrogel (PVA-H) have been developed as a new acoustic coupling concept. They have great potential to replace the water bath technique in UST, which is a critical concept with respect to clinical investigations. In this contribution, we present in vivo results for 3-D FASC applied to imaging a female breast placed in a PVA-H molding during data acquisition. An algorithm is described to compensate for time-of-flight and to account for refraction at the water-PVA-H molding and molding-tissue interfaces. For this, the mean speed of sound (SOS) of the breast tissue is estimated with an image-based method. Our results show that the PVA-H molding concept is feasible and delivers good results. 3-D FASC is superior to 2-D FASC and provides 3-D volume data with increased image quality.
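
    The compounding step described above (averaging pulse-echo views acquired from all around the object) can be illustrated with a toy 2D example: each view is rotated back into a common reference frame and the frames are averaged, which reduces the variance of uncorrelated speckle. The refraction and time-of-flight corrections central to the actual system are omitted here.

      import numpy as np
      from scipy.ndimage import rotate

      def full_angle_compound(frames, angles_deg):
          """Average co-registered views acquired from all around the object.
          frames[k] is the B-mode image acquired at angles_deg[k]; each is rotated
          back into the common reference frame before averaging."""
          acc = np.zeros_like(frames[0], dtype=float)
          for img, ang in zip(frames, angles_deg):
              acc += rotate(img, -ang, reshape=False, order=1, mode='nearest')
          return acc / len(frames)

      # toy object with multiplicative speckle, viewed from 36 angles over 360 degrees
      rng = np.random.default_rng(3)
      obj = np.zeros((128, 128)); obj[44:84, 44:84] = 1.0
      angles = np.arange(0, 360, 10)
      frames = [rotate(obj, a, reshape=False, order=1, mode='nearest')
                * rng.rayleigh(1.0, obj.shape) for a in angles]
      compounded = full_angle_compound(frames, angles)
      print(frames[0].std(), compounded.std())   # speckle variance drops after compounding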

  13. 3D Image Modelling and Specific Treatments in Orthodontics Domain

    Directory of Open Access Journals (Sweden)

    Dionysis Goularas

    2007-01-01

    Full Text Available In this article, we present a specific 3D dental plaster treatment system for orthodontics. From computed tomography scanner images, we first propose a 3D image modelling and reconstruction method for the mandible and maxilla, based on an adaptive triangulation that can manage contours with complex topologies. Secondly, we present two specific treatments performed directly on the obtained 3D model: automatic correction of the occlusion setting of the mandible and maxilla, and teeth segmentation allowing more specific dental examinations. Finally, these specific treatments are presented via a client/server application with the aim of allowing telediagnosis and treatment.

  14. 3D reconstruction, visualization, and measurement of MRI images

    Science.gov (United States)

    Pandya, Abhijit S.; Patel, Pritesh P.; Desai, Mehul B.; Desai, Paramtap

    1999-03-01

    This paper primarily focuses on manipulating 2D medical image data, which often come in as magnetic resonance slices, and reconstructing them into 3D volumetric images. Clinical diagnosis and therapy planning using 2D medical images can be an arduous task for a physician. For example, our 2D breast images of a patient mimic a breast carcinoma; in reality, the patient has 'fat necrosis', a benign breast lump. Physicians need powerful, accurate and interactive 3D visualization systems to extract anatomical details and examine the root cause of the problem. Our proposal overcomes the above-mentioned limitations through the development of volume rendering algorithms and the extensive use of parallel, distributed and neural-network computing strategies. MRI coupled with 3D imaging provides a reliable method for quantifying 'fat necrosis' characteristics and progression. Our 3D interactive application enables a physician to compute spatial measurements and quantitative evaluations and, from a general point of view, to use all the 3D interactive tools that can help to plan a complex surgical operation. The capability of our medical imaging application can be extended to reconstruct and visualize 3D volumetric brain images. Our application promises to be an important tool in neurological surgery planning and in reducing time and cost.

  15. Progresses in 3D integral imaging with optical processing

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-Corral, Manuel; Martinez-Cuenca, Raul; Saavedra, Genaro; Navarro, Hector; Pons, Amparo [Department of Optics. University of Valencia. Calle Doctor Moliner 50, E46 100, Burjassot (Spain); Javidi, Bahram [Electrical and Computer Engineering Department, University of Connecticut, Storrs, CT 06269-1157 (United States)], E-mail: manuel.martinez@uv.es

    2008-11-01

    Integral imaging is a promising technique for the acquisition and autostereoscopic display of 3D scenes with full parallax and without the need for any additional devices such as special glasses. First suggested by Lippmann at the beginning of the 20th century, integral imaging is based on the intersection of ray cones emitted by a collection of 2D elemental images that store the 3D information of the scene. This paper is devoted to the study, from the ray-optics point of view, of the optical effects and the interaction with the observer of integral imaging systems.

  16. An Effective 3D Ear Acquisition System.

    Directory of Open Access Journals (Sweden)

    Yahui Liu

    Full Text Available The human ear is a relatively new biometric feature that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age or facial expressions. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition.

  17. An Effective 3D Ear Acquisition System.

    Science.gov (United States)

    Liu, Yahui; Lu, Guangming; Zhang, David

    2015-01-01

    The human ear is a relatively new biometric feature that has several merits over the more common face, fingerprint and iris biometrics. It can be easily captured from a distance without a fully cooperative subject. Also, the ear has a relatively stable structure that does not change much with age or facial expressions. In this paper, we present a novel 3D ear acquisition system based on the triangulation imaging principle, and the experimental results show that this design is efficient and can be used for ear recognition.

  18. Single-lens 3D digital image correlation system based on a bilateral telecentric lens and a bi-prism: Systematic error analysis and correction

    Science.gov (United States)

    Wu, Lifu; Zhu, Jianguo; Xie, Huimin; Zhou, Mengmeng

    2016-12-01

    Recently, we proposed a single-lens 3D digital image correlation (3D DIC) method and established a measurement system on the basis of a bilateral telecentric lens (BTL) and a bi-prism. This system can retrieve the 3D morphology of a target and measure its deformation using a single BTL with relatively high accuracy. Nevertheless, the system still suffers from systematic errors caused by manufacturing deficiencies of the bi-prism and distortion of the BTL. In this study, in-depth experimental evaluations of these errors and their effects on the measurement results are performed. The bi-prism deficiency and the BTL distortion are characterized by two in-plane rotation angles and several distortion coefficients, respectively. These values are obtained from a calibration process using a chessboard placed in the field of view of the system; this process is conducted after the measurement of the tested specimen. A modified mathematical model is proposed which takes these systematic errors into account and corrects them during 3D reconstruction. Experiments on retrieving the 3D positions of the chessboard grid corners and the morphology of a ceramic plate specimen are performed. The results reveal that ignoring the bi-prism deficiency induces an attitude error in the retrieved morphology, and that the BTL distortion can lead to pseudo out-of-plane deformation. Correcting these problems further improves the measurement accuracy of the bi-prism-based single-lens 3D DIC system.

  19. Efficient fully 3D list-mode TOF PET image reconstruction using a factorized system matrix with an image domain resolution model.

    Science.gov (United States)

    Zhou, Jian; Qi, Jinyi

    2014-02-07

    A factorized system matrix utilizing an image domain resolution model is attractive in fully 3D time-of-flight PET image reconstruction using list-mode data. In this paper, we study a factored model based on sparse matrix factorization that is comprised primarily of a simplified geometrical projection matrix and an image blurring matrix. Besides the commonly used Siddon ray-tracer, we propose another, more simplified geometrical projector based on the Bresenham ray-tracer, which further reduces the computational cost. We discuss in general how to obtain an image blurring matrix associated with a geometrical projector, and provide a theoretical analysis that can be used to inspect the efficiency of the model factorization. In simulation studies, we investigate the performance of the proposed sparse factorization model in terms of spatial resolution, noise properties and computational cost. The quantitative results reveal that the factorization model can be as efficient as a non-factored model, while its computational cost can be much lower. In addition, we conduct Monte Carlo simulations to identify the conditions under which the image resolution model can become more efficient in terms of image contrast recovery. We verify our observations using the provided theoretical analysis. The result offers a general guide to achieving optimal reconstruction performance based on a sparse factorization model with an image domain resolution model.

  20. Efficient fully 3D list-mode TOF PET image reconstruction using a factorized system matrix with an image domain resolution model

    Science.gov (United States)

    Zhou, Jian; Qi, Jinyi

    2014-01-01

    A factorized system matrix utilizing an image domain resolution model is attractive in fully 3D TOF PET image reconstruction using list-mode data. In this paper, we study a factored model based on sparse matrix factorization that is comprised primarily of a simplified geometrical projection matrix and an image blurring matrix. Besides the commonly used Siddon ray-tracer, we propose another, more simplified geometrical projector based on the Bresenham ray-tracer, which further reduces the computational cost. We discuss in general how to obtain an image blurring matrix associated with a geometrical projector, and provide a theoretical analysis that can be used to inspect the efficiency of the model factorization. In simulation studies, we investigate the performance of the proposed sparse factorization model in terms of spatial resolution, noise properties and computational cost. The quantitative results reveal that the factorization model can be as efficient as a non-factored model such as the analytical model, while its computational cost can be much lower. In addition, we conduct Monte Carlo simulations to identify the conditions under which the image resolution model can become more efficient in terms of image contrast recovery. We verify our observations using the provided theoretical analysis. The result offers a general guide to achieving optimal reconstruction performance based on a sparse factorization model with only an image domain resolution model. PMID:24434568
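
    The factorization discussed in the two records above applies the forward model as a geometric projection of a blurred image. A toy 2D sketch of that structure, using sparse matrices and a SciPy LinearOperator (a single-angle parallel-beam projector and a separable Gaussian blur, both invented for illustration), is given below; the authors' projectors and blur kernels are of course far more detailed.

      import numpy as np
      from scipy import sparse
      from scipy.sparse.linalg import LinearOperator

      def gaussian_blur_matrix(n, sigma=1.0, radius=3):
          """Sparse 1D Gaussian blur; a separable 2D blur is its Kronecker product."""
          offsets = np.arange(-radius, radius + 1)
          w = np.exp(-offsets**2 / (2 * sigma**2)); w /= w.sum()
          return sparse.diags(w, offsets, shape=(n, n), format='csr')

      n = 64                                        # toy 2D image of n x n pixels
      B1 = gaussian_blur_matrix(n)
      B = sparse.kron(B1, B1, format='csr')         # image-domain resolution model
      # simplified geometric projector: parallel-beam row sums (a single angle)
      G = sparse.kron(sparse.eye(n, format='csr'), np.ones((1, n)))

      A = LinearOperator((n, n * n),
                         matvec=lambda x: G @ (B @ x),        # forward projection
                         rmatvec=lambda y: B.T @ (G.T @ y))   # back projection

      x = np.zeros((n, n)); x[20:40, 28:36] = 1.0
      y = A.matvec(x.ravel())
      print(y.shape, A.rmatvec(y).reshape(n, n).shape)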

  1. The 3D Pelvic Inclination Correction System (PICS): A universally applicable coordinate system for isovolumetric imaging measurements, tested in women with pelvic organ prolapse (POP).

    Science.gov (United States)

    Reiner, Caecilia S; Williamson, Tom; Winklehner, Thomas; Lisse, Sean; Fink, Daniel; DeLancey, John O L; Betschart, Cornelia

    2017-07-01

    In pelvic organ prolapse (POP), the organs are pushed downward along the lines of gravity, so measurements along this longitudinal body axis are desirable. We propose a universally applicable 3D coordinate system that corrects for changes in pelvic inclination and that allows the localization of any point in the pelvis, at rest or under dynamic conditions, on magnetic resonance images (MRI) of pelvic floor disorders in a scanner- and software-independent manner. The proposed 3D coordinate system, called the 3D Pelvic Inclination Correction System (PICS), is constructed from four bony landmark points, with the origin set at the inferior pubic point and three additional points at the sacrum (sacrococcygeal joint) and both ischial spines, all of which are clearly visible on MR images. The feasibility and applicability of the moving frame were evaluated using MRI datasets from five women with pelvic organ prolapse, three undergoing static MRI and two undergoing dynamic MRI of the pelvic floor in a supine position. The construction of the coordinate system was performed using the selected landmarks, with an initial implementation completed in MATLAB. In all cases the selected landmarks were clearly visible, and the construction of the 3D PICS and the measurement of pelvic organ positions were performed without difficulty. The resulting distance from the organ position to the horizontal PICS plane was compared to a traditional measure based on standard measurements in 2D slices. The two approaches demonstrated good agreement in each of the cases. The developed approach makes quantitative assessment of pelvic organ position in a physiologically relevant 3D coordinate system possible, independent of pelvic movement relative to the scanner. It allows the accurate study of the physiologic range of organ location along the body axis ("up or down") as well as of defects of the pelvic sidewall or birth-related pelvic floor injuries outside the midsagittal plane, not possible before in a 2D
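
    The coordinate-frame construction described above uses four bony landmarks. A hypothetical NumPy sketch of such a construction is shown below (the paper's implementation was in MATLAB, and its exact axis conventions are not given in this record, so the axis choices here are assumptions): an orthonormal frame is built from the landmarks and scanner-space points are re-expressed in it.

      import numpy as np

      def pics_frame(pubic, sacrococcygeal, spine_left, spine_right):
          """Build an orthonormal pelvic coordinate frame from four landmarks (mm).
          Axis convention here is an assumption: x from left to right ischial spine,
          y toward the sacrococcygeal joint, z completing a right-handed system."""
          origin = np.asarray(pubic, float)
          x = np.asarray(spine_right, float) - np.asarray(spine_left, float)
          x /= np.linalg.norm(x)
          y = np.asarray(sacrococcygeal, float) - origin
          y -= y.dot(x) * x                    # make y orthogonal to x
          y /= np.linalg.norm(y)
          z = np.cross(x, y)
          R = np.column_stack([x, y, z])       # columns are the frame axes
          return origin, R

      def to_pics(point, origin, R):
          """Express a scanner-space point in the landmark-based frame."""
          return R.T @ (np.asarray(point, float) - origin)

      # toy landmarks in scanner coordinates (mm)
      origin, R = pics_frame(pubic=[0, 0, 0], sacrococcygeal=[0, 90, 40],
                             spine_left=[-45, 40, 10], spine_right=[45, 40, 10])
      print(to_pics([10, 50, 20], origin, R))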

  2. 3D Medical Image Segmentation Based on Rough Set Theory

    Institute of Scientific and Technical Information of China (English)

    CHEN Shi-hao; TIAN Yun; WANG Yi; HAO Chong-yang

    2007-01-01

    This paper presents a method that uses multiple types of expert knowledge together for 3D medical image segmentation based on rough set theory. The focus of this paper is how to approximate an ROI (region of interest) when there are multiple types of expert knowledge. Based on rough set theory, the image can be split into three regions: positive regions, negative regions, and boundary regions. With multiple types of knowledge, we refine the ROI as the intersection of all of the shapes expected with each single type of knowledge. Finally, we show the results of implementing a rough-set 3D image segmentation and visualization system.

  3. Building 3D scenes from 2D image sequences

    Science.gov (United States)

    Cristea, Paul D.

    2006-05-01

    Sequences of 2D images, taken by a single moving video receptor, can be fused to generate a 3D representation. This dynamic stereopsis exists in birds and reptiles, whereas the static binocular stereopsis is common in mammals, including humans. Most multimedia computer vision systems for stereo image capture, transmission, processing, storage and retrieval are based on the concept of binocularity. As a consequence, their main goal is to acquire, conserve and enhance pairs of 2D images able to generate a 3D visual perception in a human observer. Stereo vision in birds is based on the fusion of images captured by each eye, with previously acquired and memorized images from the same eye. The process goes on simultaneously and conjointly for both eyes and generates an almost complete all-around visual field. As a consequence, the baseline distance is no longer fixed, as in the case of binocular 3D view, but adjustable in accordance with the distance to the object of main interest, allowing a controllable depth effect. Moreover, the synthesized 3D scene can have a better resolution than each individual 2D image in the sequence. Compression of 3D scenes can be achieved, and stereo transmissions with lower bandwidth requirements can be developed.

  4. Photogrammetric 3D reconstruction using mobile imaging

    Science.gov (United States)

    Fritsch, Dieter; Syll, Miguel

    2015-03-01

    In our paper we demonstrate the development of an Android application (AndroidSfM) for photogrammetric 3D reconstruction that works on smartphones and tablets alike. The photos are taken with mobile devices and can thereafter be calibrated directly on the device using standard calibration algorithms of photogrammetry and computer vision. Because computing resources on mobile devices are still limited, a client-server handshake using Dropbox transfers the photos to the server, which runs AndroidSfM for pose estimation of all photos by Structure-from-Motion and thereafter uses the oriented set of photos for dense point cloud estimation by dense image matching algorithms. The result is transferred back to the mobile device for visualization and ad-hoc on-screen measurements.

  5. ERP system for 3D printing industry

    Directory of Open Access Journals (Sweden)

    Deaky Bogdan

    2017-01-01

    Full Text Available GOCREATE is an original cloud-based production management and optimization service which helps 3D printing service providers to use their resources better. The proposed Enterprise Resource Planning system can significantly increase income through improved productivity. With GOCREATE, the 3D printing service providers get a much higher production efficiency at a much lower licensing cost, to increase their competitiveness in the fast growing 3D printing market.

  6. Integration of real-time 3D image acquisition and multiview 3D display

    Science.gov (United States)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Li, Wei; Wang, Jingyi; Liu, Yongchun

    2014-03-01

    Seamless integration of 3D acquisition and 3D display systems offers enhanced experience in 3D visualization of the real world objects or scenes. The vivid representation of captured 3D objects displayed on a glasses-free 3D display screen could bring the realistic viewing experience to viewers as if they are viewing real-world scene. Although the technologies in 3D acquisition and 3D display have advanced rapidly in recent years, effort is lacking in studying the seamless integration of these two different aspects of 3D technologies. In this paper, we describe our recent progress on integrating a light-field 3D acquisition system and an autostereoscopic multiview 3D display for real-time light field capture and display. This paper focuses on both the architecture design and the implementation of the hardware and the software of this integrated 3D system. A prototype of the integrated 3D system is built to demonstrate the real-time 3D acquisition and 3D display capability of our proposed system.

  7. Imaging a Sustainable Future in 3D

    Science.gov (United States)

    Schuhr, W.; Lee, J. D.; Kanngieser, E.

    2012-07-01

    It is the intention of this paper to contribute to a sustainable future by providing objective object information based on 3D photography, as well as by promoting 3D photography not only for scientists but also for amateurs. As this article is presented by CIPA Task Group 3 on "3D Photographs in Cultural Heritage", the presented samples are masterpieces of historic as well as current 3D photography concentrating on cultural heritage. In addition to a report on exemplary access to international archives of 3D photographs, samples of new 3D photographs taken with modern 3D cameras, as well as with a ground-based high-resolution XLITE staff camera, and 3D photographs taken from a captive balloon and from civil drone platforms, are dealt with. To advise on the best-suited 3D methodology, as well as to catch new trends in 3D, an updated synoptic overview of 3D visualization technology, even claiming completeness, has been carried out as the result of a systematic survey. In this respect, e.g., today's lasered crystals might be "early bird" products in 3D, which, due to their lack of resolution, contrast and color, recall the early stage of the invention of photography.

  8. Developing Customized Dental Miniscrew Surgical Template from Thermoplastic Polymer Material Using Image Superimposition, CAD System, and 3D Printing

    Directory of Open Access Journals (Sweden)

    Yu-Tzu Wang

    2017-01-01

    Full Text Available This study integrates cone-beam computed tomography (CBCT)/laser scan image superposition, computer-aided design (CAD), and 3D printing (3DP) to develop a technology for producing customized dental (orthodontic) miniscrew surgical templates using polymer material. Maxillary bone solid models, with the bone and teeth reconstructed from CBCT images and the teeth and mucosa outer profile acquired by laser scanning, were superimposed to allow visual miniscrew insertion planning and surgical template fabrication. The customized surgical template CAD model was created by offsetting the teeth/mucosa/bracket contour profiles in the superimposition model and exported to duplicate the plastic template using the 3DP technique and polymer material. An anterior retraction and intrusion clinical test for the maxillary canines/incisors showed that two miniscrews were placed safely and did not produce inflammation or other discomfort symptoms one week after surgery. The fit between the mucosa and the template showed average gap sizes smaller than 0.5 mm, confirming that the surgical template provided good holding power and well-fitting adaptation. This study showed that integrating CBCT/laser scan image superposition, CAD, and 3DP techniques can be applied to fabricate an accurate customized surgical template for dental orthodontic miniscrews.

  9. Developing Customized Dental Miniscrew Surgical Template from Thermoplastic Polymer Material Using Image Superimposition, CAD System, and 3D Printing

    Science.gov (United States)

    Yu, Jian-Hong; Lo, Lun-Jou; Hsu, Pin-Hsin

    2017-01-01

    This study integrates cone-beam computed tomography (CBCT)/laser scan image superposition, computer-aided design (CAD), and 3D printing (3DP) to develop a technology for producing customized dental (orthodontic) miniscrew surgical templates using polymer material. Maxillary bone solid models, with the bone and teeth reconstructed from CBCT images and the teeth and mucosa outer profile acquired by laser scanning, were superimposed to allow visual miniscrew insertion planning and surgical template fabrication. The customized surgical template CAD model was created by offsetting the teeth/mucosa/bracket contour profiles in the superimposition model and exported to duplicate the plastic template using the 3DP technique and polymer material. An anterior retraction and intrusion clinical test for the maxillary canines/incisors showed that two miniscrews were placed safely and did not produce inflammation or other discomfort symptoms one week after surgery. The fit between the mucosa and the template showed average gap sizes smaller than 0.5 mm, confirming that the surgical template provided good holding power and well-fitting adaptation. This study showed that integrating CBCT/laser scan image superposition, CAD, and 3DP techniques can be applied to fabricate an accurate customized surgical template for dental orthodontic miniscrews. PMID:28280726

  10. DATA PROCESSING TECHNOLOGY OF AIRBORNE 3D IMAGE

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Airborne 3D image, which integrates GPS, an attitude measurement unit (AMU), a scanning laser rangefinder (SLR) and a spectral scanner, has been developed successfully. The spectral scanner and SLR use the same optical system, which ensures that laser points match pixels seamlessly. The distinctive advantage of 3D image is that it can produce geo-referenced images and DSM (digital surface model) images without any ground control points (GCPs). It is no longer necessary to survey GCPs, and with some software the data can be processed to produce digital surface models (DSM) and geo-referenced images in quasi-real-time; therefore, the efficiency of 3D image is 10~100 times higher than that of traditional approaches. The processing procedure involves decomposing and checking the raw data, processing GPS data, calculating the positions of laser sample points, producing geo-referenced images, producing DSM and mosaicking strips. The principle of 3D image is first introduced in this paper, and then we focus on the fast processing technique and algorithm. The flight tests and processed results show that the processing technique is feasible and can meet the requirement of quasi-real-time applications.

  11. Handbook of 3D machine vision optical metrology and imaging

    CERN Document Server

    Zhang, Song

    2013-01-01

    With the ongoing release of 3D movies and the emergence of 3D TVs, 3D imaging technologies have penetrated our daily lives. Yet choosing from the numerous 3D vision methods available can be frustrating for scientists and engineers, especially without a comprehensive resource to consult. Filling this gap, Handbook of 3D Machine Vision: Optical Metrology and Imaging gives an extensive, in-depth look at the most popular 3D imaging techniques. It focuses on noninvasive, noncontact optical methods (optical metrology and imaging). The handbook begins with the well-studied method of stereo vision and

  12. Interactive visualization of multiresolution image stacks in 3D.

    Science.gov (United States)

    Trotts, Issac; Mikula, Shawn; Jones, Edward G

    2007-04-15

    Conventional microscopy, electron microscopy, and imaging techniques such as MRI and PET commonly generate large stacks of images of the sectioned brain. In other domains, such as neurophysiology, variables such as space or time are also varied along a stack axis. Digital image sizes have been progressively increasing and in virtual microscopy, it is now common to work with individual image sizes that are several hundred megapixels and several gigabytes in size. The interactive visualization of these high-resolution, multiresolution images in 2D has been addressed previously [Sullivan, G., and Baker, R., 1994. Efficient quad-tree coding of images and video. IEEE Trans. Image Process. 3 (3), 327-331]. Here, we describe a method for interactive visualization of multiresolution image stacks in 3D. The method, characterized as quad-tree based multiresolution image stack interactive visualization using a texel projection based criterion, relies on accessing and projecting image tiles from multiresolution image stacks in such a way that, from the observer's perspective, image tiles all appear approximately the same size even though they are accessed from different tiers within the images comprising the stack. This method enables efficient navigation of high-resolution image stacks. We implement this method in a program called StackVis, which is a Windows-based, interactive 3D multiresolution image stack visualization system written in C++ and using OpenGL. It is freely available at http://brainmaps.org.
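
    The central rule in StackVis is to fetch, for each tile, the quad-tree tier whose texels project to roughly one screen pixel, so that tiles pulled from different tiers look about the same size to the observer. A minimal sketch of such a level-selection criterion under a simple pinhole-camera assumption (function and parameter names are illustrative, not StackVis internals):

        def choose_pyramid_level(texel_size_level0, distance, focal_length_px, num_levels):
            """Pick the multiresolution level whose projected texel size is closest to one screen pixel.
            Level k has texels 2**k times larger than level 0 (quad-tree downsampling)."""
            best_level, best_err = 0, float("inf")
            for k in range(num_levels):
                texel = texel_size_level0 * (2 ** k)
                projected_px = focal_length_px * texel / distance   # pinhole projection
                err = abs(projected_px - 1.0)
                if err < best_err:
                    best_level, best_err = k, err
            return best_level

        # A tile twice as far from the viewer should be drawn from a coarser tier.
        print(choose_pyramid_level(0.001, 1.0, 1000.0, 8))   # near tile -> level 0
        print(choose_pyramid_level(0.001, 2.0, 1000.0, 8))   # far tile  -> level 1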

  13. Accuracy of x-ray image-based 3D localization from two C-arm views: a comparison between an ideal system and a real device

    Science.gov (United States)

    Brost, Alexander; Strobel, Norbert; Yatziv, Liron; Gilson, Wesley; Meyer, Bernhard; Hornegger, Joachim; Lewin, Jonathan; Wacker, Frank

    2009-02-01

    C-arm X-ray imaging devices are commonly used for minimally invasive cardiovascular or other interventional procedures. Calibrated state-of-the-art systems can, however, not only be used for 2D imaging but also for three-dimensional reconstruction, either using tomographic techniques or even stereotactic approaches. To evaluate the accuracy of X-ray object localization from two views, a simulation study assuming an ideal imaging geometry was carried out first. This was backed up with a phantom experiment involving a real C-arm angiography system. Both studies were based on a phantom comprising five point objects. These point objects were projected onto a flat-panel detector under different C-arm view positions. The resulting 2D positions were perturbed by adding Gaussian noise to simulate 2D point localization errors. In the next step, 3D point positions were triangulated from two views. A 3D error was computed by taking differences between the 3D positions reconstructed from the perturbed 2D positions and the initial 3D positions of the five points. This experiment was repeated for various C-arm angulations involving angular differences ranging from 15° to 165°. The smallest 3D reconstruction error was achieved, as expected, by views that were 90° apart. In this case, the simulation study yielded a 3D error of 0.82 mm +/- 0.24 mm (mean +/- standard deviation) for 2D noise with a standard deviation of 1.232 mm (4 detector pixels). The corresponding experimental result for this view configuration obtained on an AXIOM Artis C-arm (Siemens AG, Healthcare Sector, Forchheim, Germany) system was 0.98 mm +/- 0.29 mm. These results show that state-of-the-art C-arm systems can localize instruments with millimeter accuracy, and that they can accomplish this almost as well as an idealized theoretical counterpart. High stereotactic localization accuracy, good patient access, and CT-like 3D imaging capabilities render state-of-the-art C-arm systems ideal devices for X
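
    The 3D localization step evaluated above amounts to triangulating a point from its 2D detector coordinates in two calibrated views. A minimal sketch of linear (DLT-style) least-squares triangulation from two 3x4 projection matrices, which is one standard way to implement such a reconstruction (not necessarily the solver used on the clinical system; the geometry below is synthetic and illustrative):

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """Linear least-squares triangulation of one point from two views.
            P1, P2: 3x4 projection matrices; x1, x2: measured 2D image points."""
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]                      # dehomogenize

        # Two synthetic views 90 degrees apart, focal length 1000 px.
        K = np.array([[1000, 0, 0], [0, 1000, 0], [0, 0, 1]], dtype=float)
        P1 = K @ np.hstack([np.eye(3), [[0], [0], [1500]]])            # first view
        R = np.array([[0, 0, -1], [0, 1, 0], [1, 0, 0]], dtype=float)  # 90 deg rotation about Y
        P2 = K @ np.hstack([R, [[0], [0], [1500]]])
        project = lambda P, X: (P @ np.append(X, 1.0))[:2] / (P @ np.append(X, 1.0))[2]
        X_true = np.array([10.0, -5.0, 20.0])
        print(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))   # ~ [10, -5, 20]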

  14. Procedure for manufacturing custom-made phantoms for PET image quality control using 3D printing systems; Procedimiento para la fabricacion de maniquies a medida, para control de calidad de imagen PET, mediante sistemas de impresion 3D

    Energy Technology Data Exchange (ETDEWEB)

    Collado Chamorro, P. M.; Saez Beltran, F.; Diaz Pascual, V.; Benito Bejarado, M. A.; Sanz Freire, C. J.; Lopo Casqueiro, N.; Gonzalez Fernandez, M. P.; Lopez de Gamarra, M. S.

    2015-07-01

    Free software is available both for modelling 3D objects from medical images and for converting those models into files ready to be read and executed by 3D printers (slicers). This makes it possible to produce image quality control phantoms with a minimal investment. In this work, a refillable custom-made brain phantom was built for use in PET studies. (Author)

  15. Autonomous Planetary 3-D Reconstruction From Satellite Images

    DEFF Research Database (Denmark)

    Denver, Troelz

    1999-01-01

    is discussed. Based on such features, 3-D representations may be compiled from two or more 2-D satellite images. The main purposes of such a mapping system are extraction of landing sites, objects of scientific interest and general planetary surveying. All data processing is performed autonomously onboard...

  16. DISOCCLUSION OF 3D LIDAR POINT CLOUDS USING RANGE IMAGES

    Directory of Open Access Journals (Sweden)

    P. Biasutti

    2017-05-01

    Full Text Available This paper proposes a novel framework for the disocclusion of mobile objects in 3D LiDAR scenes acquired via street-based Mobile Mapping Systems (MMS). Most of the existing lines of research tackle this problem directly in the 3D space. This work promotes an alternative approach by using a 2D range image representation of the 3D point cloud, taking advantage of the fact that the problem of disocclusion has been intensively studied in the 2D image processing community over the past decade. First, the point cloud is turned into a 2D range image by exploiting the sensor’s topology. Using the range image, a semi-automatic segmentation procedure based on depth histograms is performed in order to select the occluding object to be removed. A variational image inpainting technique is then used to reconstruct the area occluded by that object. Finally, the range image is unprojected as a 3D point cloud. Experiments on real data prove the effectiveness of this procedure both in terms of accuracy and speed.

  17. Disocclusion of 3d LIDAR Point Clouds Using Range Images

    Science.gov (United States)

    Biasutti, P.; Aujol, J.-F.; Brédif, M.; Bugeau, A.

    2017-05-01

    This paper proposes a novel framework for the disocclusion of mobile objects in 3D LiDAR scenes acquired via street-based Mobile Mapping Systems (MMS). Most of the existing lines of research tackle this problem directly in the 3D space. This work promotes an alternative approach by using a 2D range image representation of the 3D point cloud, taking advantage of the fact that the problem of disocclusion has been intensively studied in the 2D image processing community over the past decade. First, the point cloud is turned into a 2D range image by exploiting the sensor's topology. Using the range image, a semi-automatic segmentation procedure based on depth histograms is performed in order to select the occluding object to be removed. A variational image inpainting technique is then used to reconstruct the area occluded by that object. Finally, the range image is unprojected as a 3D point cloud. Experiments on real data prove the effectiveness of this procedure both in terms of accuracy and speed.
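
    Both records above describe the same pipeline; its first step, turning the LiDAR point cloud into a 2D range image by exploiting the sensor's topology, can be sketched as follows. The spinning-scanner model, the field of view and the binning scheme are assumptions for illustration, not the authors' exact implementation:

        import numpy as np

        def pointcloud_to_range_image(points, n_rings=64, h_res_deg=0.2, v_fov_deg=(-24.8, 2.0)):
            """Project an (N, 3) point cloud into a 2D range image of shape (rings, azimuth bins)."""
            x, y, z = points[:, 0], points[:, 1], points[:, 2]
            r = np.sqrt(x**2 + y**2 + z**2)
            azimuth = np.degrees(np.arctan2(y, x))                       # [-180, 180)
            elevation = np.degrees(np.arcsin(z / np.maximum(r, 1e-9)))

            n_cols = int(round(360.0 / h_res_deg))
            col = ((azimuth + 180.0) / h_res_deg).astype(int) % n_cols
            v_min, v_max = v_fov_deg
            row = ((elevation - v_min) / (v_max - v_min) * (n_rings - 1)).round().astype(int)
            row = np.clip(row, 0, n_rings - 1)

            image = np.zeros((n_rings, n_cols), dtype=np.float32)        # 0 = no return
            image[row, col] = r                                          # keep one range per cell
            return image

        # Synthetic example: 100k random points around the sensor.
        pts = np.random.uniform([-50, -50, -3], [50, 50, 2], size=(100000, 3))
        print(pointcloud_to_range_image(pts).shape)                      # (64, 1800)

    The inverse operation (unprojecting the inpainted range image back to 3D) simply re-applies each cell's azimuth/elevation angles to the reconstructed ranges.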

  18. Automated curved planar reformation of 3D spine images

    Energy Technology Data Exchange (ETDEWEB)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo [University of Ljubljana, Faculty of Electrical Engineering, Trzaska 25, SI-1000 Ljubljana (Slovenia)

    2005-10-07

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or qualitative enough diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.
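
    The core resampling operation of a CPR, once the spine curve and the in-plane sampling directions are known, is straightforward to prototype. A minimal sketch using trilinear interpolation from SciPy; the polynomial curve and vertebral-rotation models that the paper obtains by optimization are taken here as given inputs, and all shapes and conventions are illustrative:

        import numpy as np
        from scipy.ndimage import map_coordinates

        def curved_planar_reformation(volume, curve_zyx, normals_yx, half_width=40):
            """Sample a 2D reformatted image along a 3D curve.
            volume: 3D array (z, y, x); curve_zyx: (N, 3) points on the spine curve;
            normals_yx: (N, 2) unit in-plane directions (y, x) along which each row is sampled."""
            offsets = np.arange(-half_width, half_width + 1)
            rows = []
            for (z, y, x), (ny, nx) in zip(curve_zyx, normals_yx):
                coords = np.vstack([np.full_like(offsets, z, dtype=float),
                                    y + ny * offsets,
                                    x + nx * offsets])
                rows.append(map_coordinates(volume, coords, order=1, mode="nearest"))
            return np.array(rows)                      # one image row per curve point

        # Toy example: a straight "spine" through a random volume.
        vol = np.random.rand(60, 128, 128)
        curve = np.stack([np.arange(60.0), np.full(60, 64.0), np.full(60, 64.0)], axis=1)
        normals = np.tile([0.0, 1.0], (60, 1))         # sample along x at each level
        print(curved_planar_reformation(vol, curve, normals).shape)     # (60, 81)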

  19. 3D ultrasound imaging in image-guided intervention.

    Science.gov (United States)

    Fenster, Aaron; Bax, Jeff; Neshat, Hamid; Cool, Derek; Kakani, Nirmal; Romagnoli, Cesare

    2014-01-01

    Ultrasound imaging is used extensively in diagnosis and image-guidance for interventions of human diseases. However, conventional 2D ultrasound suffers from limitations since it can only provide 2D images of 3-dimensional structures in the body. Thus, measurement of organ size is variable, and guidance of interventions is limited, as the physician is required to mentally reconstruct the 3-dimensional anatomy using 2D views. Over the past 20 years, a number of 3-dimensional ultrasound imaging approaches have been developed. We have developed an approach that is based on a mechanical mechanism to move any conventional ultrasound transducer while 2D images are collected rapidly and reconstructed into a 3D image. In this presentation, 3D ultrasound imaging approaches will be described for use in image-guided interventions.

  20. Imaging fault zones using 3D seismic image processing techniques

    Science.gov (United States)

    Iacopini, David; Butler, Rob; Purves, Steve

    2013-04-01

    Significant advances in the structural analysis of deep-water structures, salt tectonics and extensional rift basins come from descriptions of fault system geometries imaged in 3D seismic data. However, even where seismic data are excellent, in most cases the trajectory of thrust faults is highly conjectural and significant uncertainty still exists as to the patterns of deformation that develop between the main fault segments, and even as to the fault architectures themselves. Moreover, structural interpretations that conventionally define faults by breaks and apparent offsets of seismic reflectors are commonly conditioned by a narrow range of theoretical models of fault behavior. For example, almost all interpretations of thrust geometries on seismic data rely on theoretical "end-member" behaviors where concepts such as strain localization or multilayer mechanics are simply avoided. Yet analogue outcrop studies confirm that such descriptions are commonly unsatisfactory and incomplete. In order to fill these gaps and improve the 3D visualization of deformation in the subsurface, seismic attribute methods are developed here in conjunction with conventional mapping of reflector amplitudes (Marfurt & Chopra, 2007). These signal processing techniques, recently developed and applied especially by the oil industry, use variations in the amplitude and phase of the seismic wavelet. These seismic attributes improve the signal interpretation and are calculated and applied to the entire 3D seismic dataset. In this contribution we will show 3D seismic examples of fault structures from gravity-driven deep-water thrust structures and extensional basin systems to indicate how 3D seismic image processing methods can not only better constrain the geometrical interpretation of the faults but also begin to map both strain and damage through the amplitude/phase properties of the seismic signal. This is done by quantifying and delineating the short-range anomalies on the intensity of reflector amplitudes
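
    The amplitude/phase attributes referred to above are commonly derived from the analytic signal of each seismic trace. A minimal sketch of instantaneous amplitude (envelope) and instantaneous phase for a single trace via the Hilbert transform, which is one standard way to compute such attributes (not necessarily the exact attribute set used by the authors; the Ricker-like test wavelet is synthetic):

        import numpy as np
        from scipy.signal import hilbert

        def instantaneous_attributes(trace):
            """Instantaneous amplitude (envelope) and phase of a 1D seismic trace."""
            analytic = hilbert(trace)                   # trace + i * (Hilbert transform of trace)
            envelope = np.abs(analytic)                 # reflection strength
            phase = np.unwrap(np.angle(analytic))       # instantaneous phase, radians
            return envelope, phase

        # Synthetic trace: a 30 Hz Ricker-like wavelet buried in noise.
        t = np.linspace(-0.1, 0.1, 401)
        f = 30.0
        wavelet = (1 - 2 * (np.pi * f * t) ** 2) * np.exp(-(np.pi * f * t) ** 2)
        trace = wavelet + 0.05 * np.random.randn(t.size)
        env, _ = instantaneous_attributes(trace)
        print(env.argmax())                             # envelope peaks at the wavelet position (~200)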

  1. 3D/2D Registration of medical images

    OpenAIRE

    Tomaževič, D.

    2008-01-01

    The topic of this doctoral dissertation is registration of 3D medical images to corresponding projective 2D images, referred to as 3D/2D registration. There are numerous possible applications of 3D/2D registration in image-aided diagnosis and treatment. In most of the applications, 3D/2D registration provides the location and orientation of the structures in a preoperative 3D CT or MR image with respect to intraoperative 2D X-ray images. The proposed doctoral dissertation tries to find origin...

  2. A 3D digital medical photography system in paediatric medicine.

    Science.gov (United States)

    Williams, Susanne K; Ellis, Lloyd A; Williams, Gigi

    2008-01-01

    In 2004, traditional clinical photography services at the Educational Resource Centre were extended using new technology. This paper describes the establishment of a 3D digital imaging system in a paediatric setting at the Royal Children's Hospital, Melbourne.

  3. Networked 3D Virtual Museum System

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    Virtual heritage has become increasingly important in the conservation, preservation, and interpretation of our cultural and natural history. Moreover, rapid advances in digital technologies in recent years offer virtual heritage new directions. This paper introduces our approach toward a networked 3D virtual museum system: in particular, how to model, manage and present virtual heritage, and how to use computer networks to share it in a networked virtual environment. The paper first addresses a 3D acquisition and processing technique for virtual heritage modeling and shows some illustrative examples. It then describes the management of virtual heritage assets that are composed of various rich media. We introduce our schemes for presenting virtual heritage, which include a 3D virtual heritage browser system, a CAVE system, and an immersive VR theater. Finally, the paper presents a new direction for the networked 3D virtual museum, whose main idea is remote guiding of virtual heritage using mixed reality techniques.

  4. 3D packaging for integrated circuit systems

    Energy Technology Data Exchange (ETDEWEB)

    Chu, D.; Palmer, D.W. [eds.]

    1996-11-01

    A goal was set for high density, high performance microelectronics pursued through a dense 3D packing of integrated circuits. A "tool set" of assembly processes has been developed that enables 3D system designs: 3D thermal analysis, silicon electrical through-vias, IC thinning, mounting wells in silicon, adhesives for silicon stacking, pretesting of IC chips before commitment to stacks, and bond pad bumping. Validation of these process developments occurred through both Sandia prototypes and subsequent commercial examples.

  5. Super deep 3D images from a 3D omnifocus video camera.

    Science.gov (United States)

    Iizuka, Keigo

    2012-02-20

    When using stereographic image pairs to create three-dimensional (3D) images, a deep depth of field in the original scene enhances the depth perception in the 3D image. The omnifocus video camera has no depth of field limitations and produces images that are in focus throughout. By installing an attachment on the omnifocus video camera, real-time super deep stereoscopic pairs of video images were obtained. The deeper depth of field creates a larger perspective image shift, which makes greater demands on the binocular fusion of human vision. A means of reducing the perspective shift without harming the depth of field was found.

  6. Large distance 3D imaging of hidden objects

    Science.gov (United States)

    Rozban, Daniel; Aharon Akram, Avihai; Kopeika, N. S.; Abramovich, A.; Levanon, Assaf

    2014-06-01

    Imaging systems in millimeter waves are required for applications in medicine, communications, homeland security, and space technology. This is because there is no known ionization hazard for biological tissue, and atmospheric attenuation in this range of the spectrum is low compared to that of infrared and optical rays. The lack of an inexpensive room temperature detector makes it difficult to provide a suitable real-time implementation for the above applications. A 3D MMW imaging system based on chirp radar was studied previously using a scanning imaging system with a single detector. The system presented here proposes to employ the chirp radar method with a Glow Discharge Detector (GDD) Focal Plane Array (an FPA of plasma-based detectors) using heterodyne detection. The intensity at each pixel in the GDD FPA yields the usual 2D image. The value of the IF frequency yields the range information at each pixel. This will enable 3D MMW imaging. In this work we experimentally demonstrate the feasibility of implementing an imaging system based on radar principles and an FPA of inexpensive detectors. This imaging system is shown to be capable of imaging objects from distances of at least 10 meters.
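
    In a chirp (FMCW) radar of the kind described, the range at each GDD pixel follows directly from the intermediate (beat) frequency: for a linear chirp of bandwidth B and duration T, R = c * f_IF * T / (2 * B). A short worked example with illustrative chirp parameters (not the parameters of the system in the abstract):

        C = 3.0e8          # speed of light, m/s
        B = 2.0e9          # chirp bandwidth, Hz
        T = 1.0e-3         # chirp duration, s

        def range_from_if(f_if_hz):
            """Target range from the measured intermediate (beat) frequency of a linear chirp."""
            return C * f_if_hz * T / (2.0 * B)

        # A beat frequency of 133.3 kHz corresponds to a target at roughly 10 m.
        print(range_from_if(133.3e3))   # ~10.0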

  7. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Ferin, Guillaume

    2012-01-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32x32 element prototype transducer. The transducer mimicked is a dense matrix...

  8. Optimizing and Evaluating an Integrated SPECT-CmT System Dedicated to Improved 3-D Breast Cancer Imaging

    Science.gov (United States)

    2009-05-01

    the imaging system’s required clinical performance. This evidence ranged from the ability of the CmT system to image close to the chest wall (see ... year old woman undergoing dual-view screening mammography of her remaining intact breast seven years after a mastectomy) to completing a medical ... Telluride (CZT) gamma camera (model LumaGEM 3200S, Gamma Medica, Inc., Northridge, CA) with discretized crystals, each 2.3x2.3x5mm3 on a 2.5mm

  9. Optimizing and Evaluating an Integrated SPECT-CmT System Dedicated to Improved 3-D Breast Cancer Imaging

    Science.gov (United States)

    2010-05-01

    M. P. Tornai, "Pilot Patient Studies Using a Dedicated Dual-Modality SPECT-CT System for Breast Imaging," 2008 AAPM (2008). 3. M. J. Butson, P. K. N. ... for Breast Imaging," in 2008 AAPM (Houston TX, 2008). 16. M. P. Tornai, R. L. McKinley, C. N. Brzymialkiewicz, P. Madhav, S. J. Cutler, D. J. ... S. Meigooni, R. Nath, J. E. Rodgers and C. G. Soares, "Radiochromic film dosimetry: recommendations of AAPM Radiation Therapy Committee Task Group

  10. Deformable Surface 3D Reconstruction from Monocular Images

    CERN Document Server

    Salzmann, Matthieu

    2010-01-01

    Being able to recover the shape of 3D deformable surfaces from a single video stream would make it possible to field reconstruction systems that run on widely available hardware without requiring specialized devices. However, because many different 3D shapes can have virtually the same projection, such monocular shape recovery is inherently ambiguous. In this survey, we will review the two main classes of techniques that have proved most effective so far: The template-based methods that rely on establishing correspondences with a reference image in which the shape is already known, and non-rig

  11. 3D Image Synthesis for B-Reps Objects

    Institute of Scientific and Technical Information of China (English)

    黄正东; 彭群生; et al.

    1991-01-01

    This paper presents a new algorithm for generating 3D images of B-reps objects with trimmed surface boundaries. The 3D image is a discrete voxel-map representation within a Cubic Frame Buffer (CFB). The definitions of 3D images for curve, surface and solid objects are introduced, which imply the connectivity and fidelity requirements. An Adaptive Forward Differencing matrix (AFD-matrix) for 1D-3D manifolds in 3D space is developed. By setting rules to update the AFD-matrix, the forward difference direction and step size can be adjusted. Finally, an efficient algorithm is presented based on the AFD-matrix concept for converting an object in 3D space to a 3D image in 3D discrete space.

  12. IMAGE SELECTION FOR 3D MEASUREMENT BASED ON NETWORK DESIGN

    Directory of Open Access Journals (Sweden)

    T. Fuse

    2015-05-01

    Full Text Available 3D models have become widely used with the spread of freely available software. At the same time, enormous numbers of images can be acquired easily and are increasingly used to create such 3D models. However, creating 3D models from huge numbers of images takes a lot of time and effort, so efficient 3D measurement is required, and the accuracy of the measurement must be retained as well. This paper develops an image selection method based on network design, in the sense of surveying network construction. The proposed method uses an image connectivity graph, by which the image selection problem is regarded as a combinatorial optimization problem to which the graph-cuts technique can be applied. Additionally, in the process of 3D reconstruction, low-quality and near-duplicate images are extracted and removed. Experiments confirm the significance of the proposed method and indicate its potential for efficient and accurate 3D measurement.
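
    The abstract frames image selection as a combinatorial optimization over an image connectivity graph. A minimal sketch of building such a connectivity graph from pairwise feature-match counts and pruning near-duplicate images; note that the greedy filter below is a simplified stand-in for illustration, not the paper's graph-cuts formulation, and the thresholds are arbitrary:

        import itertools

        def build_connectivity_graph(match_counts, min_matches=50):
            """match_counts: dict {(i, j): number of feature matches between images i and j}.
            Returns an adjacency dict for the image connectivity graph."""
            graph = {}
            for (i, j), n in match_counts.items():
                if n >= min_matches:
                    graph.setdefault(i, set()).add(j)
                    graph.setdefault(j, set()).add(i)
            return graph

        def drop_redundant_images(graph, similarity, max_similarity=0.95):
            """Greedy filter: remove one image of every connected pair that is almost identical."""
            removed = set()
            for i, j in itertools.combinations(sorted(graph), 2):
                if j in graph.get(i, set()) and similarity.get((i, j), 0.0) > max_similarity:
                    if i not in removed and j not in removed:
                        removed.add(j)                  # keep the lower-indexed image
            return [i for i in sorted(graph) if i not in removed]

        matches = {(0, 1): 120, (1, 2): 80, (0, 2): 30, (2, 3): 200}
        similar = {(2, 3): 0.98}                        # images 2 and 3 nearly identical
        print(drop_redundant_images(build_connectivity_graph(matches), similar))   # [0, 1, 2]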

  13. SU-E-J-13: Six Degree of Freedom Image Fusion Accuracy for Cranial Target Localization On the Varian Edge Stereotactic Radiosurgery System: Comparison Between 2D/3D and KV CBCT Image Registration

    Energy Technology Data Exchange (ETDEWEB)

    Xu, H [Wayne State University, Detroit, MI (United States); Song, K; Chetty, I; Kim, J [Henry Ford Health System, Detroit, MI (United States); Wen, N [Henry Ford Health System, West Bloomfield, MI (United States)

    2015-06-15

    Purpose: To determine the 6 degree of freedom systematic deviations between 2D/3D and CBCT image registration with various imaging setups and fusion algorithms on the Varian Edge Linac. Methods: An anthropomorphic head phantom with radio opaque targets embedded was scanned with CT slice thicknesses of 0.8, 1, 2, and 3mm. The 6 DOF systematic errors were assessed by comparing 2D/3D (kV/MV with CT) with 3D/3D (CBCT with CT) image registrations with different offset positions, similarity measures, image filters, and CBCT slice thicknesses (1 and 2 mm). The 2D/3D registration accuracy of 51 fractions for 26 cranial SRS patients was also evaluated by analyzing 2D/3D pre-treatment verification taken after 3D/3D image registrations. Results: The systematic deviations of 2D/3D image registration using kV- kV, MV-kV and MV-MV image pairs were within ±0.3mm and ±0.3° for translations and rotations with 95% confidence interval (CI) for a reference CT with 0.8 mm slice thickness. No significant difference (P>0.05) on target localization was observed between 0.8mm, 1mm, and 2mm CT slice thicknesses with CBCT slice thicknesses of 1mm and 2mm. With 3mm CT slice thickness, both 2D/3D and 3D/3D registrations performed less accurately in longitudinal direction than thinner CT slice thickness (0.60±0.12mm and 0.63±0.07mm off, respectively). Using content filter and using similarity measure of pattern intensity instead of mutual information, improved the 2D/3D registration accuracy significantly (P=0.02 and P=0.01, respectively). For the patient study, means and standard deviations of residual errors were 0.09±0.32mm, −0.22±0.51mm and −0.07±0.32mm in VRT, LNG and LAT directions, respectively, and 0.12°±0.46°, −0.12°±0.39° and 0.06°±0.28° in RTN, PITCH, and ROLL directions, respectively. 95% CI of translational and rotational deviations were comparable to those in phantom study. Conclusion: 2D/3D image registration provided on the Varian Edge radiosurgery, 6 DOF

  14. Automatic 2D-to-3D image conversion using 3D examples from the internet

    Science.gov (United States)

    Konrad, J.; Brown, G.; Wang, M.; Ishwar, P.; Wu, C.; Mukherjee, D.

    2012-03-01

    The availability of 3D hardware has so far outpaced the production of 3D content. Although to date many methods have been proposed to convert 2D images to 3D stereopairs, the most successful ones involve human operators and, therefore, are time-consuming and costly, while the fully-automatic ones have not yet achieved the same level of quality. This subpar performance is due to the fact that automatic methods usually rely on assumptions about the captured 3D scene that are often violated in practice. In this paper, we explore a radically different approach inspired by our work on saliency detection in images. Instead of relying on a deterministic scene model for the input 2D image, we propose to "learn" the model from a large dictionary of stereopairs, such as YouTube 3D. Our new approach is built upon a key observation and an assumption. The key observation is that among millions of stereopairs available on-line, there likely exist many stereopairs whose 3D content matches that of the 2D input (query). We assume that two stereopairs whose left images are photometrically similar are likely to have similar disparity fields. Our approach first finds a number of on-line stereopairs whose left image is a close photometric match to the 2D query and then extracts depth information from these stereopairs. Since disparities for the selected stereopairs differ due to differences in underlying image content, level of noise, distortions, etc., we combine them by using the median. We apply the resulting median disparity field to the 2D query to obtain the corresponding right image, while handling occlusions and newly-exposed areas in the usual way. We have applied our method in two scenarios. First, we used YouTube 3D videos in search of the most similar frames. Then, we repeated the experiments on a small, but carefully-selected, dictionary of stereopairs closely matching the query. This, to a degree, emulates the results one would expect from the use of an extremely large 3D
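
    The two computational steps at the heart of the method, fusing the retrieved disparity fields with a per-pixel median and warping the 2D query by the fused disparity to synthesize the right view, can be sketched as follows. Retrieval, occlusion handling and hole filling are omitted, and the simple forward-warping rule is an illustrative assumption rather than the authors' exact renderer:

        import numpy as np

        def fuse_disparities(disparity_stack):
            """Per-pixel median of the disparity fields of the retrieved stereopairs."""
            return np.median(disparity_stack, axis=0)

        def render_right_view(left, disparity):
            """Forward-warp the query image by the fused disparity to synthesize the right view.
            Holes (newly exposed areas) are left at zero here; the paper fills them separately."""
            h, w = left.shape[:2]
            right = np.zeros_like(left)
            xs = np.arange(w)
            for y in range(h):
                x_new = np.clip((xs - disparity[y]).round().astype(int), 0, w - 1)
                right[y, x_new] = left[y, xs]
            return right

        # Toy example: three retrieved disparity fields for a 4x6 grayscale query.
        query = np.arange(24, dtype=float).reshape(4, 6)
        stack = np.stack([np.full((4, 6), d) for d in (1.0, 2.0, 2.0)])
        print(render_right_view(query, fuse_disparities(stack)))   # shifted copy with zero-filled holes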

  15. Toroidal mode number estimation of the edge-localized modes using the KSTAR 3-D electron cyclotron emission imaging system

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J.; Yun, G. S., E-mail: gunsu@postech.ac.kr; Lee, J. E.; Kim, M.; Choi, M. J.; Lee, W. [Pohang University of Science and Technology, Pohang 790-784 (Korea, Republic of); Park, H. K. [Ulsan National Institute of Science and Technology, Ulsan 689-798 (Korea, Republic of); Domier, C. W.; Luhmann, N. C. [University of California at Davis, Davis, California 95616 (United States); Sabbagh, S. A.; Park, Y. S. [Columbia University, New York, New York 10027 (United States); Lee, S. G.; Bak, J. G. [National Fusion Research Institute, Daejeon 305-333 (Korea, Republic of)

    2014-06-15

    A new and more accurate technique is presented for determining the toroidal mode number n of edge-localized modes (ELMs) using two independent electron cyclotron emission imaging (ECEI) systems in the Korea Superconducting Tokamak Advanced Research (KSTAR) device. The technique involves the measurement of the poloidal spacing between adjacent ELM filaments, and of the pitch angle α* of filaments at the plasma outboard midplane. Equilibrium reconstruction verifies that α* is nearly constant and thus well-defined at the midplane edge. Estimates of n obtained using two ECEI systems agree well with n measured by the conventional technique employing an array of Mirnov coils.
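
    The estimate rests on the geometry of field-aligned filaments: if n filaments are spaced uniformly in the toroidal direction and lie along field lines with pitch angle α* at the outboard midplane, the poloidal spacing measured by ECEI constrains n. A hedged sketch of that relation (my reconstruction of the geometry for orientation, not a formula quoted from the paper):

        % Adjacent filaments are separated toroidally by 2\pi R / n at major radius R; the field-line
        % pitch maps this into a poloidal spacing seen at a fixed toroidal location:
        \Delta_{\mathrm{pol}} \;\approx\; \frac{2\pi R}{n}\,\tan\alpha_{*}
        \qquad\Longrightarrow\qquad
        n \;\approx\; \frac{2\pi R\,\tan\alpha_{*}}{\Delta_{\mathrm{pol}}}

    Here R is the major radius at the measurement position and Δ_pol the measured poloidal spacing between adjacent filaments.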

  16. The occlusion-adjusted prefabricated 3D mirror image templates by computer simulation: the image-guided navigation system application in difficult cases of head and neck reconstruction.

    Science.gov (United States)

    Cheng, Hsu-Tang; Wu, Chao-I; Tseng, Ching-Shiow; Chen, Hung-Chi; Lee, Wu-Song; Chen, Philip Kuo-Ting; Chang, Sophia Chia-Ning

    2009-11-01

    Computer applications in head and neck reconstruction are rapidly emerging and create not only a virtual environment for presurgical planning, but also help in image-guided navigational surgery. This study evaluates the use of prefabricated 3-dimensional (3D) mirror image templates made by computer-simulated adjusted occlusions to assist in microvascular prefabricated flap insertion during reconstructive surgery. Five patients underwent tumor ablation surgery in 1999 and survived for 8 years. Four of the patients with malignancy received radiation therapy. All patients in this study suffered from severe malocclusion causing trismus, headache, temporomandibular joint pain, an asymmetrical face, and the inability to place further osseointegrated teeth. They underwent a 3D computed tomography examination and the unprocessed raw data were sent for computer simulation to adjust the occlusion; thus, a mirror image template could be fabricated for microsurgical flap guidance. The computer-simulated occlusion was acceptable and facial symmetry was obtained. The use of the template resulted in a shorter operation time and recovery was as expected. The computer-simulated occlusion-adjusted 3D mirror image templates aid in the use of free vascularized bone flaps for restoring continuity to the mandible. The coordinated arch will help with further osseointegrated tooth insertion.

  17. Tipping solutions: emerging 3D nano-fabrication/ -imaging technologies

    Science.gov (United States)

    Seniutinas, Gediminas; Balčytis, Armandas; Reklaitis, Ignas; Chen, Feng; Davis, Jeffrey; David, Christian; Juodkazis, Saulius

    2017-06-01

    The evolution of optical microscopy from an imaging technique into a tool for materials modification and fabrication is now being repeated with other characterization techniques, including scanning electron microscopy (SEM), focused ion beam (FIB) milling/imaging, and atomic force microscopy (AFM). Fabrication and in situ imaging of materials undergoing a three-dimensional (3D) nano-structuring within a 1-100 nm resolution window is required for future manufacturing of devices. This level of precision is critical in enabling the cross-over between different device platforms (e.g. from electronics to micro-/nano-fluidics and/or photonics) within future devices that will be interfacing with biological and molecular systems in a 3D fashion. Prospective trends in electron, ion, and nano-tip based fabrication techniques are presented.

  18. Tipping solutions: emerging 3D nano-fabrication/ -imaging technologies

    Directory of Open Access Journals (Sweden)

    Seniutinas Gediminas

    2017-06-01

    Full Text Available The evolution of optical microscopy from an imaging technique into a tool for materials modification and fabrication is now being repeated with other characterization techniques, including scanning electron microscopy (SEM), focused ion beam (FIB) milling/imaging, and atomic force microscopy (AFM). Fabrication and in situ imaging of materials undergoing a three-dimensional (3D) nano-structuring within a 1−100 nm resolution window is required for future manufacturing of devices. This level of precision is critical in enabling the cross-over between different device platforms (e.g. from electronics to micro-/nano-fluidics and/or photonics) within future devices that will be interfacing with biological and molecular systems in a 3D fashion. Prospective trends in electron, ion, and nano-tip based fabrication techniques are presented.

  19. 3-D template simulation system in Total Hip Arthroplasty

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, Nobuhiko [Nagoya City Univ. (Japan). Medical School]

    2000-09-01

    In total hip arthroplasty, 2D templating on plain X-rays is usually used for preoperative planning. However, deformity and contracture can cause malposition and measurement errors. To reduce these problems, a 3D preoperative simulation system was developed. Three methods were compared in this study. The first is to create very accurate AP and ML images which can be used for standard 2D templating. The second is a fully 3D preoperative templating system using computer graphics. The last is a physical simulation using a stereolithography model. 3D geometry data of the bone were generated from helical 3D CT data. AP and ML surface-cut 3D images of the femur were created using a workstation (Advantage Workstation; GE Medical Systems). The extracted 3D geometry was displayed on a personal computer using Magics (STL data visualization software), and the 3D geometry of the stem was then superimposed on it. The full 3D simulation system made it possible to observe the bone and stem geometry from any direction and in any section view. The stereolithography model was useful for detailed observation of the femoral anatomy. (author)

  20. [A new 2D and 3D imaging approach to musculoskeletal physiology and pathology with low-dose radiation and the standing position: the EOS system].

    Science.gov (United States)

    Dubousset, Jean; Charpak, Georges; Dorion, Irène; Skalli, Wafa; Lavaste, François; Deguise, Jacques; Kalifa, Gabriel; Ferey, Solène

    2005-02-01

    Close collaboration between multidisciplinary specialists (physicists, biomechanical engineers, medical radiologists and pediatric orthopedic surgeons) has led to the development of a new low-dose radiation device named EOS. EOS has three main advantages. First, through the use of a gaseous X-ray detector invented by Georges Charpak (Nobel Prizewinner 1992), the dose necessary to obtain a 2D image of the skeletal system has been reduced by 8 to 10 times, while that required to obtain a 3D reconstruction has fallen by a factor of 800 to 1000 compared with reconstruction from CT slices; the accuracy of the 3D reconstruction obtained with EOS is as good as that obtained with CT. Second, the patient is examined in the standing (or seated) position and is scanned simultaneously from head to feet, both frontally and laterally, which is a major advantage over conventional CT, which requires the patient to be placed horizontally. Third, the 3D reconstructions of each element of the osteo-articular system are as precise as those obtained by conventional CT. EOS is also rapid, taking only 15 to 30 minutes to image the entire spine.

  1. Volumetric 3D Display System with Static Screen

    Science.gov (United States)

    Geng, Jason

    2011-01-01

    Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume and physically places each 3D voxel of the displayed 3D image at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° view without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots are created by using a recently available machining technique called laser subsurface engraving (LSE). The LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light around and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous

  2. Dynamic 3D computed tomography scanner for vascular imaging

    Science.gov (United States)

    Lee, Mark K.; Holdsworth, David W.; Fenster, Aaron

    2000-04-01

    A 3D dynamic computed-tomography (CT) scanner was developed for imaging objects undergoing periodic motion. The scanner system has high spatial and sufficient temporal resolution to produce quantitative tomographic/volume images of objects such as excised arterial samples perfused under physiological pressure conditions and enables the measurements of the local dynamic elastic modulus (Edyn) of the arteries in the axial and longitudinal directions. The system was comprised of a high resolution modified x-ray image intensifier (XRII) based computed tomographic system and a computer-controlled cardiac flow simulator. A standard NTSC CCD camera with a macro lens was coupled to the electro-optically zoomed XRII to acquire dynamic volumetric images. Through prospective cardiac gating and computer synchronized control, a time-resolved sequence of 20 mm thick high resolution volume images of porcine aortic specimens during one simulated cardiac cycle were obtained. Performance evaluation of the scanners illustrated that tomographic images can be obtained with resolution as high as 3.2 mm-1 with only a 9% decrease in the resolution for objects moving at velocities of 1 cm/s in 2D mode and static spatial resolution of 3.55 mm-1 with only a 14% decrease in the resolution in 3D mode for objects moving at a velocity of 10 cm/s. Application of the system for imaging of intact excised arterial specimens under simulated physiological flow/pressure conditions enabled measurements of the Edyn of the arteries with a precision of +/- kPa for the 3D scanner. Evaluation of the Edyn in the axial and longitudinal direction produced values of 428 +/- 35 kPa and 728 +/- 71 kPa, demonstrating the isotropic and homogeneous viscoelastic nature of the vascular specimens. These values obtained from the Dynamic CT systems were not statistically different (p less than 0.05) from the values obtained by standard uniaxial tensile testing and volumetric measurements.

  3. Dynamic dimension: system for simultaneous 3D and monoscopic viewing

    Science.gov (United States)

    Redert, Andre

    2004-05-01

    We propose the 'Dynamic Dimension' system that enables simultaneous viewing of 3D and monoscopic content on glasses-based stereo displays (e.g. CRT, Plasma, LCD). A viewer can choose to wear glasses and see content in 3D, or he may decide not to wear glasses, and see high-quality monoscopic content. The Dynamic Dimension system is based on simple image processing such as addition and subtraction. The input images can be captured by a triple camera setup or be rendered from so-called RGBD video, an ad-hoc standard for 3D video. From several subjective tests, we conclude that Dynamic Dimension produces a very much present and appealing 3D effect, while the monoscopic image quality remains high and totally unaffected.

  4. Detection of tibial condylar fractures using 3D imaging with a mobile image amplifier (Siemens ISO-C-3D): Comparison with plain films and spiral CT; Frakturdiagnostik am Kniegelenk mit einem neuen mobilen CT-System (ISO-C-3D): Vergleich mit konventionellem Roentgen und Spiral-CT

    Energy Technology Data Exchange (ETDEWEB)

    Kotsianos, D.; Rock, C.; Wirth, S.; Linsenmaier, U.; Brandl, R.; Fischer, T.; Pfeifer, K.J.; Reiser, M. [Klinikum der Universitaet Muenchen-Innenstadt, Muenchen (Germany). Inst. fuer Klinische Radiologie; Euler, E.; Mutschler, W. [Klinikum der Universitaet Muenchen-Innenstadt, Muenchen (Germany). Chirurgische Klinik und Poliklinik

    2002-01-01

    Purpose: To analyze a prototype mobile C-arm 3D image amplifier in the detection and classification of experimental tibial condylar fractures with multiplanar reconstructions (MPR). Method: Human knee specimens (n=22) with tibial condylar fractures were examined with a prototype C-arm (ISO-C-3D, Siemens AG), plain films (CR) and spiral CT (CT). The motorized C-arm provides fluoroscopic images during a 190° orbital rotation, computing a 119 mm data cube. From these 3D data sets MP reconstructions were obtained. All images were evaluated by four independent readers for the detection and assessment of fracture lines. All fractures were classified according to the Mueller AO classification. To confirm the results, the specimens were finally surgically dissected. Results: 97% of the tibial condylar fractures were easily seen and correctly classified according to the Mueller AO classification on MP reconstructions of the ISO-C-3D. There is no significant difference between ISO-C-3D and CT in the detection and correct classification of fractures, but ISO-C-3D is significantly better than CR. (orig.) [German original, translated: Purpose: The aim of the present study was to assess the diagnostic possibilities and limits of fracture detection and classification in knee joints using multiplanar reconstructions (MPR) from 3D data sets of a mobile C-arm device. Methods: Knee joints of deceased subjects (n=22) with tibial condylar fractures were examined with a prototype mobile C-arm cross-sectional imaging/fluoroscopy device (ISO-C-3D, Siemens AG, Erlangen). During a single 190° rotation the device acquires 100 projection images, from which a 3D volume data set is computed. From this data set, high-contrast cross-sectional images are calculated and visualized as MP reconstructions in all three spatial planes. The knee joints were assessed by 4 independent readers with respect to fracture detectability, fracture type and extent using the MP]

  5. 3D image registration using a fast noniterative algorithm.

    Science.gov (United States)

    Zhilkin, P; Alexander, M E

    2000-11-01

    This note describes the implementation of a three-dimensional (3D) registration algorithm, generalizing a previous 2D version [Alexander, Int J Imaging Systems and Technology 1999;10:242-57]. The algorithm solves an integrated form of the linearized image matching equation over a set of 3D rectangular sub-volumes ('patches') in the image domain. This integrated form avoids numerical instabilities due to differentiation of a noisy image over a lattice and, in addition, renders the algorithm robust to noise. Registration is implemented by first convolving the unregistered images with a set of computationally fast [O(N)] filters, providing four bandpass images for each input image, and integrating the image matching equation over the given patch. Each filter and each patch together provide an independent set of constraints on the displacement field, derived by solving a set of linear regression equations. Furthermore, the filters are implemented at a variety of spatial scales, enabling registration parameters at one scale to be used as an input approximation for deriving refined values of those parameters at a finer scale of resolution. This hierarchical procedure is necessary to avoid false matches. Both downsampled and oversampled (undecimated) filtering is implemented. Although the former is computationally fast, it lacks the translation invariance of the latter. Oversampling is required for the accurate interpolation used in intermediate stages of the algorithm to reconstruct the partially registered image from the unregistered image. However, downsampling is useful, and computationally efficient, for preliminary stages of registration when large mismatches are present. The 3D registration algorithm was implemented using a 12-parameter affine model for the displacement: u(x) = Ax + b. Linear interpolation was used throughout. Accuracy and timing results for registering various multislice images, obtained by scanning a melon and human volunteers in various
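
    The registration model is the 12-parameter affine displacement u(x) = Ax + b, estimated by linear regression on an image-matching constraint. A much-simplified sketch of the same idea, using the pointwise linearized constraint grad(I1)·u = I1 - I2 and ordinary least squares over all voxels, without the bandpass filter bank, patch integration or multiscale hierarchy described above (so this illustrates only the parameter estimation, not the published algorithm):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def estimate_affine_3d(img1, img2):
            """Least-squares fit of u(x) = A x + b (12 parameters) from the linearized
            matching constraint grad(I1) . u(x) = I1 - I2 evaluated at every voxel."""
            gz, gy, gx = np.gradient(img1)
            z, y, x = np.meshgrid(*[np.arange(s) for s in img1.shape], indexing="ij")
            g = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3)         # image gradients, (N, 3)
            coords = np.stack([x, y, z], axis=-1).reshape(-1, 3)       # voxel coordinates, (N, 3)
            # Each design-matrix row multiplies the 12 unknowns (A in row-major order, then b).
            design = np.hstack([g[:, [0]] * coords, g[:, [1]] * coords,
                                g[:, [2]] * coords, g])                # (N, 12)
            rhs = (img1 - img2).ravel()
            params, *_ = np.linalg.lstsq(design, rhs, rcond=None)
            return params[:9].reshape(3, 3), params[9:]                # A, b

        # Toy test: a known 1-voxel translation along x of a smoothed random volume.
        vol = gaussian_filter(np.random.rand(32, 32, 32), sigma=2.0)
        shifted = np.roll(vol, shift=1, axis=2)
        A, b = estimate_affine_3d(vol, shifted)
        print(np.round(b, 2))      # the x component of b should come out close to 1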

  6. Utilization of multiple frequencies in 3D nonlinear microwave imaging

    DEFF Research Database (Denmark)

    Jensen, Peter Damsgaard; Rubæk, Tonny; Mohr, Johan Jacob

    2012-01-01

    The use of multiple frequencies in a nonlinear microwave algorithm is considered. Using multiple frequencies allows for obtaining the improved resolution available at the higher frequencies while retaining the regularizing effects of the lower frequencies. However, a number of different challenges...... at lower frequencies are used as starting guesses for reconstructions at higher frequencies. The performance is illustrated using simulated 2-D data and data obtained with the 3-D DTU microwave imaging system....

  7. A Modular and Affordable Time-Lapse Imaging and Incubation System Based on 3D-Printed Parts, a Smartphone, and Off-The-Shelf Electronics.

    Science.gov (United States)

    Hernández Vera, Rodrigo; Schwan, Emil; Fatsis-Kavalopoulos, Nikos; Kreuger, Johan

    2016-01-01

    Time-lapse imaging is a powerful tool for studying cellular dynamics and cell behavior over long periods of time to acquire detailed functional information. However, commercially available time-lapse imaging systems are expensive and this has limited a broader implementation of this technique in low-resource environments. Further, the availability of time-lapse imaging systems often present workflow bottlenecks in well-funded institutions. To address these limitations we have designed a modular and affordable time-lapse imaging and incubation system (ATLIS). The ATLIS enables the transformation of simple inverted microscopes into live cell imaging systems using custom-designed 3D-printed parts, a smartphone, and off-the-shelf electronic components. We demonstrate that the ATLIS provides stable environmental conditions to support normal cell behavior during live imaging experiments in both traditional and evaporation-sensitive microfluidic cell culture systems. Thus, the system presented here has the potential to increase the accessibility of time-lapse microscopy of living cells for the wider research community.

  8. A Modular and Affordable Time-Lapse Imaging and Incubation System Based on 3D-Printed Parts, a Smartphone, and Off-The-Shelf Electronics

    Science.gov (United States)

    Schwan, Emil; Fatsis-Kavalopoulos, Nikos; Kreuger, Johan

    2016-01-01

    Time-lapse imaging is a powerful tool for studying cellular dynamics and cell behavior over long periods of time to acquire detailed functional information. However, commercially available time-lapse imaging systems are expensive and this has limited a broader implementation of this technique in low-resource environments. Further, the availability of time-lapse imaging systems often present workflow bottlenecks in well-funded institutions. To address these limitations we have designed a modular and affordable time-lapse imaging and incubation system (ATLIS). The ATLIS enables the transformation of simple inverted microscopes into live cell imaging systems using custom-designed 3D-printed parts, a smartphone, and off-the-shelf electronic components. We demonstrate that the ATLIS provides stable environmental conditions to support normal cell behavior during live imaging experiments in both traditional and evaporation-sensitive microfluidic cell culture systems. Thus, the system presented here has the potential to increase the accessibility of time-lapse microscopy of living cells for the wider research community. PMID:28002463

  9. 3D Imaging Millimeter Wave Circular Synthetic Aperture Radar

    Science.gov (United States)

    Zhang, Renyuan; Cao, Siyang

    2017-01-01

    In this paper, a new millimeter wave 3D imaging radar is proposed. The user just needs to move the radar along a circular track, and a high-resolution 3D image can be generated. The proposed radar uses its own movement to synthesize a large aperture in both the azimuth and elevation directions. It can utilize the inverse Radon transform to resolve the 3D image. To improve the sensing result, a compressed sensing approach is further investigated. Simulation and experimental results further illustrate the design. Because only a single transceiver circuit is needed, a light, affordable and high-resolution 3D mmWave imaging radar is demonstrated in the paper. PMID:28629140
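
    The abstract mentions resolving the image with an inverse Radon transform of the circularly collected data. A minimal sketch of the 2D building block, projection and filtered back-projection with scikit-image's radon/iradon on a synthetic reflectivity map; this illustrates the transform itself, not the radar signal model or the compressed-sensing refinement:

        import numpy as np
        from skimage.transform import radon, iradon

        # Synthetic reflectivity map: two point-like scatterers on a 128x128 grid.
        scene = np.zeros((128, 128))
        scene[40, 60] = 1.0
        scene[90, 80] = 0.5

        angles = np.linspace(0.0, 180.0, 180, endpoint=False)
        sinogram = radon(scene, theta=angles)            # projections over a circular aperture
        reconstruction = iradon(sinogram, theta=angles)  # filtered back-projection

        peak = np.unravel_index(np.argmax(reconstruction), reconstruction.shape)
        print(peak)                                      # close to (40, 60), the strongest scatterer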

  10. Development and evaluation of a LOR-based image reconstruction with 3D system response modeling for a PET insert with dual-layer offset crystal design

    Science.gov (United States)

    Zhang, Xuezhu; Stortz, Greg; Sossi, Vesna; Thompson, Christopher J.; Retière, Fabrice; Kozlowski, Piotr; Thiessen, Jonathan D.; Goertzen, Andrew L.

    2013-12-01

    In this study we present a method of 3D system response calculation for analytical computer simulation and statistical image reconstruction for a magnetic resonance imaging (MRI) compatible positron emission tomography (PET) insert system that uses a dual-layer offset (DLO) crystal design. The general analytical system response functions (SRFs) for detector geometric and inter-crystal penetration of coincident crystal pairs are derived first. We implemented a 3D ray-tracing algorithm with 4π sampling for calculating the SRFs of coincident pairs of individual DLO crystals. The determination of which detector blocks are intersected by a gamma ray is made by calculating the intersection of the ray with virtual cylinders with radii just inside the inner surface and just outside the outer-edge of each crystal layer of the detector ring. For efficient ray-tracing computation, the detector block and ray to be traced are then rotated so that the crystals are aligned along the X-axis, facilitating calculation of ray/crystal boundary intersection points. This algorithm can be applied to any system geometry using either single-layer (SL) or multi-layer array design with or without offset crystals. For effective data organization, a direct lines of response (LOR)-based indexed histogram-mode method is also presented in this work. SRF calculation is performed on-the-fly in both forward and back projection procedures during each iteration of image reconstruction, with acceleration through use of eight-fold geometric symmetry and multi-threaded parallel computation. To validate the proposed methods, we performed a series of analytical and Monte Carlo computer simulations for different system geometry and detector designs. The full-width-at-half-maximum of the numerical SRFs in both radial and tangential directions are calculated and compared for various system designs. By inspecting the sinograms obtained for different detector geometries, it can be seen that the DLO crystal
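
    The geometric part of the ray tracing reduces, after block and ray are rotated so the crystals are axis-aligned, to computing where a gamma ray enters and exits each crystal. A minimal sketch of that core operation, the classic slab method for ray/axis-aligned-box intersection, whose output (the intersection length) is what weights the geometric and penetration response; the paper's full SRF bookkeeping and symmetry handling are not reproduced:

        import numpy as np

        def ray_box_intersection_length(origin, direction, box_min, box_max):
            """Length of the segment of a ray inside an axis-aligned box (slab method).
            Returns 0.0 if the ray misses the box."""
            origin = np.asarray(origin, float)
            direction = np.asarray(direction, float)
            direction = direction / np.linalg.norm(direction)
            inv = np.divide(1.0, direction, out=np.full_like(direction, np.inf),
                            where=direction != 0.0)
            t1 = (np.asarray(box_min, float) - origin) * inv
            t2 = (np.asarray(box_max, float) - origin) * inv
            t_near = np.max(np.minimum(t1, t2))
            t_far = np.min(np.maximum(t1, t2))
            return max(t_far - max(t_near, 0.0), 0.0)

        # A 20 x 2 x 2 mm crystal aligned with the X-axis, hit obliquely by a ray.
        length = ray_box_intersection_length(origin=[-10.0, 0.5, 0.5],
                                             direction=[1.0, 0.05, 0.0],
                                             box_min=[0.0, 0.0, 0.0],
                                             box_max=[20.0, 2.0, 2.0])
        print(round(length, 2))    # about 20 mm of path through the crystal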

  11. From medical imaging data to 3D printed anatomical models.

    Science.gov (United States)

    Bücking, Thore M; Hill, Emma R; Robertson, James L; Maneas, Efthymios; Plumb, Andrew A; Nikitichev, Daniil I

    2017-01-01

    Anatomical models are important training and teaching tools in the clinical environment and are routinely used in medical imaging research. Advances in segmentation algorithms and increased availability of three-dimensional (3D) printers have made it possible to create cost-efficient patient-specific models without expert knowledge. We introduce a general workflow that can be used to convert volumetric medical imaging data (as generated by Computer Tomography (CT)) to 3D printed physical models. This process is broken up into three steps: image segmentation, mesh refinement and 3D printing. To lower the barrier to entry and provide the best options when aiming to 3D print an anatomical model from medical images, we provide an overview of relevant free and open-source image segmentation tools as well as 3D printing technologies. We demonstrate the utility of this streamlined workflow by creating models of ribs, liver, and lung using a Fused Deposition Modelling 3D printer.
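
    The three steps of the workflow (segmentation, mesh generation/refinement, export for printing) can be prototyped in a few lines with open-source Python tools. A minimal sketch using a plain intensity threshold for segmentation and marching cubes for meshing; real anatomy needs the more careful segmentation and mesh clean-up discussed above, the numpy-stl package is assumed to be installed, and the 300 HU threshold is an illustrative bone-like value:

        import numpy as np
        from skimage import measure
        from stl import mesh                     # provided by the numpy-stl package

        def volume_to_stl(volume, threshold, voxel_spacing, out_path):
            """Segment by thresholding, mesh with marching cubes, and write an STL file."""
            verts, faces, _, _ = measure.marching_cubes(volume, level=threshold,
                                                        spacing=voxel_spacing)
            surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
            surface.vectors[:] = verts[faces]    # (n_faces, 3 vertices, 3 coordinates)
            surface.save(out_path)

        # Toy example: a sphere of "bone" density inside a water-equivalent background.
        z, y, x = np.mgrid[-32:32, -32:32, -32:32]
        ct = np.where(z**2 + y**2 + x**2 < 20**2, 1000.0, 0.0)    # pseudo-HU values
        volume_to_stl(ct, threshold=300.0, voxel_spacing=(1.0, 1.0, 1.0), out_path="sphere.stl")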

  12. Evaluation of Kinect 3D Sensor for Healthcare Imaging.

    Science.gov (United States)

    Pöhlmann, Stefanie T L; Harkness, Elaine F; Taylor, Christopher J; Astley, Susan M

    2016-01-01

    Microsoft Kinect is a three-dimensional (3D) sensor originally designed for gaming that has received growing interest as a cost-effective and safe device for healthcare imaging. Recent applications of Kinect in health monitoring, screening, rehabilitation, assistance systems, and intervention support are reviewed here. The suitability of available technologies for healthcare imaging applications is assessed. The performance of Kinect I, based on structured light technology, is compared with that of the more recent Kinect II, which uses time-of-flight measurement, under conditions relevant to healthcare applications. The accuracy, precision, and resolution of 3D images generated with Kinect I and Kinect II are evaluated using flat cardboard models representing different skin colors (pale, medium, and dark) at distances ranging from 0.5 to 1.2 m and measurement angles of up to 75°. Both sensors demonstrated high accuracy (majority of measurements ...). Kinect I is capable of imaging at shorter measurement distances, but Kinect II enables structures angled at over 60° to be evaluated. Kinect II showed significantly higher precision and Kinect I showed significantly higher resolution (both p ...). Although Kinect is not a medical imaging device, both sensor generations show performance adequate for a range of healthcare imaging applications. Kinect I is more appropriate for short-range imaging and Kinect II is more appropriate for imaging highly curved surfaces such as the face or breast.

  13. MO-DE-210-06: Development of a Supercompounded 3D Volumetric Ultrasound Image Guidance System for Prone Accelerated Partial Breast Irradiation (APBI)

    Energy Technology Data Exchange (ETDEWEB)

    Chiu, T; Hrycushko, B; Zhao, B; Jiang, S; Gu, X [UT Southwestern Medical Center, Dallas, TX (United States)

    2015-06-15

    Purpose: For early-stage breast cancer, accelerated partial breast irradiation (APBI) is a cost-effective breast-conserving treatment. Irradiation in a prone position can mitigate respiration-induced breast movement and achieve maximal sparing of heart and lung tissues. However, accurate dose delivery is challenging due to breast deformation and lumpectomy cavity shrinkage. We propose a 3D volumetric ultrasound (US) image guidance system for accurate prone APBI. Methods: The designed system, set beneath the prone breast board, consists of a water container, an US scanner, and a two-layer breast immobilization cup. The outer layer of the breast cup forms the inner wall of the water container while the inner layer is attached directly to the patient's breast for immobilization. The US transducer is attached to the outer layer of the breast cup at the dent of the water container. Rotational US scans in a transverse plane are achieved by simultaneously rotating the water container and transducer, and multiple transverse scans form a 3D scan. A supercompounding-technique-based volumetric US reconstruction algorithm is developed for 3D image reconstruction. The performance of the designed system is evaluated with two custom-made gelatin phantoms containing several cylindrical inserts filled with water (11% reflection coefficient between materials). One phantom is designed for positioning evaluation while the other is for scaling assessment. Results: In the positioning evaluation phantom, the central distances between the inserts are 15, 20, 30 and 40 mm. The distances on reconstructed images differ by −0.19, −0.65, −0.11 and −1.67 mm, respectively. In the scaling evaluation phantom, the inserts are 12.7, 19.05, 25.40 and 31.75 mm in diameter. Measured insert sizes on the images differed by 0.23, 0.19, −0.1 and 0.22 mm, respectively. Conclusion: The phantom evaluation results show that the developed 3D volumetric US system can accurately localize target position and determine

  14. 3-D Reconstruction From 2-D Radiographic Images and Its Application to Clinical Veterinary Medicine

    Science.gov (United States)

    Hamamoto, Kazuhiko; Sato, Motoyoshi

    3D imaging techniques are very important and indispensable in diagnosis. The mainstream approach reconstructs a 3D image from a set of slice images, as in X-ray CT and MRI. However, these systems require large space and high costs. On the other hand, a low-cost, small-size 3D imaging system is needed in clinical veterinary medicine, for example for diagnosis in an X-ray car or in pasture areas. We propose a novel 3D imaging technique using 2D X-ray radiographic images. The system can be realized much more cheaply than X-ray CT and makes it possible to obtain 3D images in an X-ray car or with portable X-ray equipment. In this paper, a 3D visualization technique from 2D radiographic images is proposed and several reconstructions are shown. These reconstructions are evaluated by veterinarians.

  15. Optimal Point Spread Function Design for 3D Imaging

    Science.gov (United States)

    Shechtman, Yoav; Sahl, Steffen J.; Backer, Adam S.; Moerner, W. E.

    2015-01-01

    To extract the maximum physical information about the position of a single nanoscale object from its image, we propose and demonstrate a framework for pupil-plane modulation for 3D imaging applications requiring precise localization, including single-particle tracking and super-resolution microscopy. The method is based on maximizing the information content of the system by formulating and solving the appropriate optimization problem: finding the pupil-plane phase pattern that yields a PSF with optimal Fisher information properties. We use our method to generate and experimentally demonstrate two example PSFs: one optimized for 3D localization precision over a 3 μm depth of field, and another with an unprecedented 5 μm depth of field, both designed to perform under the physically common condition of high background signal. PMID:25302889
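
    The core quantity in this record is the Fisher information of a pixelated PSF with respect to emitter position. The sketch below evaluates it for a candidate PSF under Poisson statistics (shot noise plus uniform background) and reports the corresponding Cramér-Rao bound. It is a simplified illustration under assumed photon counts and pixel size, not the authors' pupil-plane optimization.

```python
import numpy as np

def localization_fisher_information(psf, pixel_size, signal_photons, bg_photons):
    """Fisher information for the emitter x-position given a normalized,
    pixelated PSF, assuming Poisson statistics: I_x = sum_i (d mu_i/dx)^2 / mu_i,
    with mu_i = N*psf_i + b. The CRLB on x-precision is 1/sqrt(I_x)."""
    mu = signal_photons * psf + bg_photons
    dmu_dx = signal_photons * np.gradient(psf, pixel_size, axis=1)
    return float(np.sum(dmu_dx ** 2 / mu))

# Example: a Gaussian spot standing in for a pupil-engineered PSF (100 nm pixels).
x = np.arange(-8, 9) * 100.0
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * 120.0 ** 2))
psf /= psf.sum()
I_x = localization_fisher_information(psf, 100.0, signal_photons=3000, bg_photons=10)
print("CRLB on x (nm):", 1.0 / np.sqrt(I_x))
```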

  16. 3D Objects Reconstruction from Image Data

    OpenAIRE

    Cír, Filip

    2008-01-01

    This work deals with 3D reconstruction from image data. Possible approaches to optical scanning are described. The handheld optical 3D scanner consists of a camera and a line-laser source mounted at a fixed angle with respect to the camera. A suitable base plate with markers is designed, and an algorithm for their real-time detection is described. Once the markers are detected, the position and orientation of the camera can be computed. Finally, laser detection and the computation of points on the object surface by triangulation are described.

  17. Monocular 3D display unit using soft actuator for parallax image shift

    Science.gov (United States)

    Sakamoto, Kunio; Kodama, Yuuki

    2010-11-01

    The human vision system has visual functions for viewing 3D images with correct depth. These functions are called accommodation, vergence and binocular stereopsis. Most 3D display systems utilize binocular stereopsis. The authors have developed a monocular 3D vision system with an accommodation mechanism, which is a useful function for perceiving depth. This vision unit needs image-shift optics for generating monocular parallax images, but the conventional image-shift mechanism is heavy because of its linear actuator system. To solve this problem, we developed a light-weight 3D vision unit for presenting monocular stereoscopic images using a soft linear actuator made of a polypyrrole film.

  18. A 3D imaging system for the non-intrusive in-flight measurement of the deformation of an aircraft propeller and a helicopter rotor

    Science.gov (United States)

    Stasicki, Bolesław; Boden, Fritz; Ludwikowski, Krzysztof

    2017-02-01

    Non-intrusive in-flight measurement of the deformation, and the resulting local pitch, of an aircraft propeller or helicopter rotor blade is a demanding task. The idea of an imaging system integrated and rotating with the aircraft propeller was first presented at the 30th International Congress on High-Speed Imaging and Photonics (ICHSIP30) in 2012. Since then this system has been designed, constructed and tested in the laboratory as well as in flight on the Cobra VUT100 of Evektor Aerotechnik, Kunovice (CZ). A major aim of the EU FP7 project AIM2 ("Advanced In-flight Measurement techniques 2" - contract No. 266107) was to ascertain the feasibility of this technique under the extreme conditions of real flight testing - vibration and large centrifugal forces. Based on the experience gained, a new rotating system for application on helicopter rotors has recently been constructed and tested on the whirl tower of Airbus Helicopters, Donauwoerth (D). In this paper the principle of the applied Image Pattern Correlation Technique (IPCT), a specialized type of Digital Image Correlation (DIC), is outlined, and the construction of both rotating 3D image acquisition systems dedicated to in-flight deformation measurement of the aircraft propeller and the helicopter rotor is described. Furthermore, the results of the ground and in-flight tests of these systems are shown and discussed. The obtained results will be helpful for manufacturers in the design of future aircraft.

  19. 3D Beam Reconstruction by Fluorescence Imaging

    CERN Document Server

    Radwell, Neal; Franke-Arnold, Sonja

    2013-01-01

    We present a technique for mapping the complete 3D spatial intensity profile of a laser beam from its fluorescence in an atomic vapour. We propagate shaped light through a rubidium vapour cell and record the resonant scattering from the side. From a single measurement we obtain a camera-limited resolution of 200 x 200 transverse points and 659 longitudinal points. In contrast to invasive methods in which the camera is placed in the beam path, our method is capable of measuring patterns formed by counterpropagating laser beams. It has high resolution in all 3 dimensions, is fast and can be completely automated. The technique has applications in areas which require complex beam shapes, such as optical tweezers, atom trapping and pattern formation.

  20. De la manipulation des images 3D

    Directory of Open Access Journals (Sweden)

    Geneviève Pinçon

    2012-04-01

    While 3D technologies deliver a precise and relevant record of parietal rock art, they also offer particularly interesting applications for its analysis. Through point-cloud processing and simulations, they allow a wide range of manipulations concerning both the observation and the study of parietal works. In particular, they permit a refined perception of their volumetry and become very useful shape-comparison tools for reconstructing parietal chronologies and for grasping analogies between different sites. These analytical tools are illustrated here by the original work carried out on the parietal sculptures of the Roc-aux-Sorciers (Angles-sur-l’Anglin, Vienne) and Chaire-à-Calvin (Mouthiers-sur-Boëme, Charente) rock shelters.

  1. 3D augmented reality with integral imaging display

    Science.gov (United States)

    Shen, Xin; Hua, Hong; Javidi, Bahram

    2016-06-01

    In this paper, a three-dimensional (3D) integral imaging display for augmented reality is presented. By implementing the pseudoscopic-to-orthoscopic conversion method, elemental image arrays with different capturing parameters can be transferred into the identical format for 3D display. With the proposed merging algorithm, a new set of elemental images for augmented reality display is generated. The newly generated elemental images contain both the virtual objects and real world scene with desired depth information and transparency parameters. The experimental results indicate the feasibility of the proposed 3D augmented reality with integral imaging.

  2. 3D Lunar Terrain Reconstruction from Apollo Images

    Science.gov (United States)

    Broxton, Michael J.; Nefian, Ara V.; Moratto, Zachary; Kim, Taemin; Lundy, Michael; Segal, Alkeksandr V.

    2009-01-01

    Generating accurate three dimensional planetary models is becoming increasingly important as NASA plans manned missions to return to the Moon in the next decade. This paper describes a 3D surface reconstruction system called the Ames Stereo Pipeline that is designed to produce such models automatically by processing orbital stereo imagery. We discuss two important core aspects of this system: (1) refinement of satellite station positions and pose estimates through least squares bundle adjustment; and (2) a stochastic plane fitting algorithm that generalizes the Lucas-Kanade method for optimal matching between stereo pair images. These techniques allow us to automatically produce seamless, highly accurate digital elevation models from multiple stereo image pairs while significantly reducing the influence of image noise. Our technique is demonstrated on a set of 71 high resolution scanned images from the Apollo 15 mission.

  3. Construction and tests of demonstrator modules for a 3-D axial PET system for brain or small animal imaging

    CERN Document Server

    Chesi, E; Clinthorne, N; Pauss, P; Meddi, F; Beltrame, P; Kagan, H; Braem, A; Casella, C; Djambazov, G; Smith, S; Johnson, I; Lustermann, W; Weilhammer, P; Nessi-Tedaldi, F; Dissertori, G; Renker, D; Schneider, T; Schinzel, D; Honscheid, K; De Leo, R; Bolle, E; Fanti, V; Rafecas, M; Cochran, E; Rudge, A; Stapnes, S; Huh, S; Seguinot, J; Solevi, P; Joram, C; Oliver, J F

    2011-01-01

    The design and construction of a PET camera module with high sensitivity, full 3-D spatial reconstruction and very good energy resolution is presented. The basic principle consists of an axial arrangement of long scintillation crystals around the Field Of View (FOV), providing a measurement of the transverse coordinates of the interacting 511 keV gamma ray. On top of each layer of crystals, an array of Wave-Length Shifter (WLS) strips, which collect the light leaving the crystals sideways, is positioned orthogonal to the crystal direction. The signals in the WLS strips allow a precise measurement of the z (axial) co-ordinate of the 511 keV gamma-ray impact. The construction of two modules used for demonstration of the concept is described. First preliminary results on spatial and energy resolution from one full module are shown.

  4. Use of INSAT-3D sounder and imager radiances in the 4D-VAR data assimilation system and its implications in the analyses and forecasts

    Science.gov (United States)

    Indira Rani, S.; Taylor, Ruth; George, John P.; Rajagopal, E. N.

    2016-05-01

    INSAT-3D, the first Indian geostationary satellite with sounding capability, provides valuable information over India and the surrounding oceanic regions which is pivotal to numerical weather prediction. In collaboration with the UK Met Office, NCMRWF developed the capability to assimilate INSAT-3D Clear Sky Brightness Temperature (CSBT), from both the sounder and the imager, in the 4D-Var assimilation system used at NCMRWF. Out of the 18 sounder channels, radiances from 9 channels are selected for assimilation, depending on the relevance of the information in each channel. The first three high-peaking channels, the CO2 absorption channels, and the three water vapor channels (channels 10, 11, and 12) are assimilated over both land and ocean, whereas the window channels (channels 6, 7, and 8) are assimilated only over the ocean. Measured satellite radiances are compared with those from short-range forecasts to monitor the data quality. This is based on the assumption that the observed satellite radiances are free from calibration errors and that the short-range forecast provided by the NWP model is free from systematic errors. Innovations (observation minus forecast) before and after the bias correction indicate how well the bias correction works. Since the biases vary with air mass, time and scan angle, and also due to instrument degradation, an accurate bias correction algorithm is important for the assimilation of INSAT-3D sounder radiances. This paper discusses the bias correction methods and other quality controls used for the selected INSAT-3D sounder channels and the impact of the bias-corrected radiances in the data assimilation system, particularly over India and the surrounding oceanic regions.

  5. Method for the determination of the modulation transfer function (MTF) in 3D x-ray imaging systems with focus on correction for finite extent of test objects

    Science.gov (United States)

    Schäfer, Dirk; Wiegert, Jens; Bertram, Matthias

    2007-03-01

    It is well known that rotational C-arm systems are capable of providing 3D tomographic X-ray images with much higher spatial resolution than conventional CT systems. With flat X-ray detectors, the detector pixel size is typically in the range of the size of the test objects. Therefore, the finite extent of the "point-like" test object cannot be neglected in the determination of the MTF. A practical algorithm has been developed that includes bias estimation and subtraction, averaging in the spatial domain, and correction for the frequency content of the imaged bead or wire. Using this algorithm, the wire and the bead methods are analyzed for flat-detector-based 3D X-ray systems using standard CT performance phantoms. Results on both experimental and simulated data are presented. It is found that the approximation of applying the analysis of the wire method to a bead measurement is justified within 3% accuracy up to the first zero of the MTF.
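
    As a rough illustration of the kind of correction described above, the sketch below estimates an MTF from an oversampled line-spread function and divides out the frequency content of a finite-width test object, here approximated by a rectangular profile (the bead and wire projection profiles are not exactly rectangular). The function name, the background-estimation window and the cutoff near the zeros of the object spectrum are all assumptions.

```python
import numpy as np

def mtf_from_lsf(lsf, dx, object_diameter):
    """Presampled MTF estimate from an oversampled LSF (sample spacing dx),
    with a first-order correction for the finite extent of the test object:
    the measured spectrum is divided by |sinc(f*d)|, the Fourier-transform
    modulus of a rectangle of width d."""
    lsf = lsf - np.median(lsf[:10])              # crude bias/background subtraction
    lsf = lsf / lsf.sum()
    freqs = np.fft.rfftfreq(lsf.size, d=dx)
    mtf_measured = np.abs(np.fft.rfft(lsf))
    object_ft = np.abs(np.sinc(freqs * object_diameter))
    usable = object_ft > 0.1                     # avoid amplifying noise near spectral zeros
    mtf_corrected = np.where(usable, mtf_measured / np.clip(object_ft, 1e-6, None), np.nan)
    return freqs, mtf_corrected

# Example with a synthetic Gaussian LSF sampled at 0.05 mm and a 0.2 mm bead:
x = np.arange(-5, 5, 0.05)
freqs, mtf = mtf_from_lsf(np.exp(-x ** 2 / (2 * 0.3 ** 2)), dx=0.05, object_diameter=0.2)
```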

  6. Calibration of Images with 3D range scanner data

    OpenAIRE

    Adalid López, Víctor Javier

    2009-01-01

    Project carried out in collaboration with EPFL. 3D laser range scanners are used for the extraction of 3D data from a scene. The main application areas are architecture, archaeology and city planning. Although the raw scanner data has grey-scale values, the 3D data can be merged with colour camera image values to obtain a textured 3D model of the scene. These devices are also able to make a reliable 3D copy of objects with a high level of accuracy. Therefore, the scanned scenes can be use...

  7. 3D Ground Penetrating Imaging Radar

    OpenAIRE

    ECT Team, Purdue

    2007-01-01

    GPiR (ground-penetrating imaging radar) is a new technology for mapping the shallow subsurface, including society’s underground infrastructure. Applications for this technology include efficient and precise mapping of buried utilities on a large scale.

  8. 3D Reconstruction of NMR Images by LabVIEW

    Directory of Open Access Journals (Sweden)

    Peter IZAK

    2007-01-01

    This paper introduces an experiment in 3D reconstruction of NMR images via virtual instrumentation (LabVIEW). The main idea is based on the marching cubes algorithm and image processing implemented with the Vision Assistant module. The two-dimensional images acquired by the magnetic resonance device provide information about the surface properties of the human body. An algorithm is implemented which can be used for 3D reconstruction of magnetic resonance images in biomedical applications.

  9. 3D Multifunctional Ablative Thermal Protection System

    Science.gov (United States)

    Feldman, Jay; Venkatapathy, Ethiraj; Wilkinson, Curt; Mercer, Ken

    2015-01-01

    NASA is developing the Orion spacecraft to carry astronauts farther into the solar system than ever before, with human exploration of Mars as its ultimate goal. One of the technologies required to enable this advanced, Apollo-shaped capsule is a 3-dimensional quartz fiber composite for the vehicle's compression pad. During its mission, the compression pad serves first as a structural component and later as an ablative heat shield, partially consumed on Earth re-entry. This presentation will summarize the development of a new 3D quartz cyanate ester composite material, 3-Dimensional Multifunctional Ablative Thermal Protection System (3D-MAT), designed to meet the mission requirements for the Orion compression pad. Manufacturing development, aerothermal (arc-jet) testing, structural performance, and the overall status of material development for the 2018 EM-1 flight test will be discussed.

  10. 3D Interpolation Method for CT Images of the Lung

    Directory of Open Access Journals (Sweden)

    Noriaki Asada

    2003-06-01

    A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross-section images, which are collected during pulsation of the heart. Thus the motion of the heart is a special factor that must be taken into consideration during reconstruction. The lung, as an elastic body, exhibits a repeating deformation synchronized to the beating of the heart. If no special techniques are used when taking the CT images, there are discontinuities among neighboring CT images due to the heartbeat. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung appear. Although the outline of the reconstructed 3-D heart is quite unnatural, its envelope is fitted to the shape of a standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best-fitting standard heart are located at the corresponding positions of the CT images. The CT images are thus geometrically transformed into the optimal CT images that best fit the standard heart. Since correct transformation of the images is required, an area-oriented interpolation method proposed by us is used for interpolation of the transformed images. An attempt to reconstruct a 3-D lung image by a series of such operations without discontinuity is shown. Additionally, applying the same geometrical transformation to the original projection images is proposed as a more advanced method.

  11. 3D measurement system based on computer-generated gratings

    Science.gov (United States)

    Zhu, Yongjian; Pan, Weiqing; Luo, Yanliang

    2010-08-01

    A new kind of 3D measurement system has been developed to acquire the 3D profile of complex objects. The measurement principle is based on triangulation with digital fringe projection, and the fringes are entirely generated by computer. The four computer-generated fringes thus form the data source for phase-shifting 3D profilometry. The hardware of the system includes a computer, a video camera, a projector, an image grabber, and a VGA board with two ports (one port links to the screen, the other to the projector). The software of the system consists of a grating projection module, an image grabbing module, a phase reconstruction module and a 3D display module. A software-based method for synchronizing grating projection and image capture is proposed. To handle the nonlinear error of the captured fringes, a compensation method based on pixel-to-pixel gray correction is introduced. At the same time, a least-squares phase unwrapping is used to solve the phase reconstruction problem, using the combination of Log Modulation Amplitude and Phase Derivative Variance (LMAPDV) as a weight. The system adopts an algorithm from the Matlab Toolbox for camera calibration. The 3D measurement system has an accuracy of 0.05 mm. The execution time is 3-5 s for a single measurement.
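
    For reference, the standard four-step phase-shifting formula that such a system relies on is sketched below. The paper's LMAPDV-weighted least-squares unwrapping is replaced by a naive numpy.unwrap as a stand-in, and the synthetic fringes are only for illustration.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Wrapped and (naively) unwrapped phase from four fringe images captured
    with phase shifts of 0, pi/2, pi and 3*pi/2."""
    wrapped = np.arctan2(i4 - i2, i1 - i3)
    unwrapped = np.unwrap(np.unwrap(wrapped, axis=1), axis=0)  # simple row/column unwrap
    return wrapped, unwrapped

# Synthetic example: a fringe carrier deformed by a smooth "object" phase.
h, w = 240, 320
carrier = np.linspace(0, 16 * np.pi, w)[None, :]
obj = 2 * np.pi * np.exp(-((np.arange(h)[:, None] - h / 2) ** 2 +
                           (np.arange(w)[None, :] - w / 2) ** 2) / 5000.0)
frames = [0.5 + 0.5 * np.cos(carrier + obj + k * np.pi / 2) for k in range(4)]
wrapped, unwrapped = four_step_phase(*frames)
```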

  12. Visualizing Vertebrate Embryos with Episcopic 3D Imaging Techniques

    Directory of Open Access Journals (Sweden)

    Stefan H. Geyer

    2009-01-01

    Full Text Available The creation of highly detailed, three-dimensional (3D computer models is essential in order to understand the evolution and development of vertebrate embryos, and the pathogenesis of hereditary diseases. A still-increasing number of methods allow for generating digital volume data sets as the basis of virtual 3D computer models. This work aims to provide a brief overview about modern volume data–generation techniques, focusing on episcopic 3D imaging methods. The technical principles, advantages, and problems of episcopic 3D imaging are described. The strengths and weaknesses in its ability to visualize embryo anatomy and labeled gene product patterns, specifically, are discussed.

  13. Compression of 3D integral images using wavelet decomposition

    Science.gov (United States)

    Mazri, Meriem; Aggoun, Amar

    2003-06-01

    This paper presents a wavelet-based lossy compression technique for unidirectional 3D integral images (UII). The method requires the extraction of different viewpoint images from the integral image. A single viewpoint image is constructed by extracting one pixel from each microlens, and each viewpoint image is then decomposed using a Two-Dimensional Discrete Wavelet Transform (2D-DWT). The resulting array of coefficients contains several frequency bands. The lower frequency bands of the viewpoint images are assembled and compressed using a Three-Dimensional Discrete Cosine Transform (3D-DCT) followed by Huffman coding. This achieves decorrelation within and between the 2D low-frequency bands from the different viewpoint images. The remaining higher frequency bands are arithmetic coded. After decoding and decompression of the viewpoint images using an inverse 3D-DCT and an inverse 2D-DWT, each pixel from every reconstructed viewpoint image is put back into its original position within the microlens to reconstruct the whole 3D integral image. Simulations were performed on a set of four different grey-level 3D UII using a uniform scalar quantizer with a dead zone. The results for the average of the four UII intensity distributions are presented and compared with a previously reported 3D-DCT scheme. It was found that the algorithm achieves better rate-distortion performance in terms of compression ratio and image quality at very low bit rates.
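
    A minimal sketch of the first two steps described above (extracting viewpoint images from a unidirectional integral image and decomposing each with a 2D DWT) is given below, using PyWavelets. The microlens pitch, wavelet and decomposition depth are assumptions; the subsequent 3D-DCT/Huffman and arithmetic coding stages are not shown.

```python
import numpy as np
import pywt  # PyWavelets

def extract_viewpoints(integral_image, pitch):
    """Build viewpoint images from a unidirectional integral image by taking
    the k-th pixel column under every microlens (pitch = lens width in pixels)."""
    h, w = integral_image.shape
    n_lenses = w // pitch
    return [integral_image[:, k::pitch][:, :n_lenses] for k in range(pitch)]

def decompose_viewpoint(view, wavelet="bior4.4", level=2):
    """2D DWT of one viewpoint image; in the scheme above, the resulting
    low-frequency bands would then be grouped across viewpoints for 3D-DCT coding."""
    return pywt.wavedec2(view, wavelet=wavelet, level=level)

# Example on a synthetic integral image with an 8-pixel microlens pitch:
ii = np.random.rand(256, 512)
views = extract_viewpoints(ii, pitch=8)
coeffs = [decompose_viewpoint(v) for v in views]
```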

  14. Highway 3D model from image and lidar data

    Science.gov (United States)

    Chen, Jinfeng; Chu, Henry; Sun, Xiaoduan

    2014-05-01

    We present a new method of highway 3-D model construction based on feature extraction in highway images and LIDAR data. We describe the processing of road coordinate data that connects the image frames to the coordinates of the elevation data. Image processing methods are used to extract sky, road, and ground regions as well as significant roadside objects (such as signs and building fronts) for the 3D model. LIDAR data are interpolated and processed to extract the road lanes as well as other features such as trees, ditches, and elevated objects to form the 3D model. 3D geometry reasoning is used to match the image features to the 3D model. Results from successive frames are integrated to improve the final model.

  15. 3D "spectracoustic" system: a modular, tomographic, spectroscopic mapping imaging, non-invasive, diagnostic system for detection of small starting developing tumors like melanoma

    Science.gov (United States)

    Karagiannis, Georgios

    2017-03-01

    This work led to a new method named 3D spectracoustic tomographic mapping imaging. Current and future work concerns the fabrication of a combined acoustic-microscopy transducer and infrared illumination probe permitting the simultaneous acquisition of the spectroscopic and tomographic information. This probe provides the capability of acquiring high-fidelity, precisely registered information from the combined modalities, termed spectracoustic information.

  16. Touchless control module for diagnostic images at the surgery room using the Leap Motion system and 3D Slicer Software

    Directory of Open Access Journals (Sweden)

    Andrés Felipe Botero-Ospina

    2017-01-01

    During surgical procedures it is important that staff (surgeons, residents or assistants) interact with the patient while avoiding any physical contact with equipment and materials that may not have been properly sterilized, in order to prevent infections and post-surgical complications for the patient. With the increasing availability of diagnostic images, this tool has become ever more indispensable in operating rooms, but it is not always possible to maintain aseptic control of the computers on which the visualization programs run, which makes it difficult for clinical staff to access the information contained in the images. This work presents the development of a system that allows a diagnostic image visualization program to be manipulated through gestures, so that the surgeon avoids direct contact with the computer. The system, which requires a computer running the 3D Slicer software and the Leap Motion device, allows basic operations to be performed with hand gestures, such as moving between slices of a volume, changing the image size, and changing the anatomical viewing plane, operations that are essential for the surgeon's spatial orientation and decision-making.

  17. Acoustic 3D imaging of dental structures

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, D.K. [Lawrence Livermore National Lab., CA (United States); Hume, W.R. [California Univ., Los Angeles, CA (United States); Douglass, G.D. [California Univ., San Francisco, CA (United States)

    1997-02-01

    Our goals for the first year of this three-dimensional elastodynamic imaging project were to determine how to combine flexible, individually addressable arrays; preprocessing of array source signals; spectral extrapolation of received signals; acoustic tomography codes; and acoustic propagation modeling codes. We investigated flexible, individually addressable acoustic array materials to find the best match in power, sensitivity and cost, and settled on PVDF sheet arrays and 3-1 composite material.

  18. Feature detection on 3D images of dental imprints

    Science.gov (United States)

    Mokhtari, Marielle; Laurendeau, Denis

    1994-09-01

    A computer vision approach for the extraction of feature points on 3D images of dental imprints is presented. The positions of the feature points are needed for the measurement of a set of parameters for automatic diagnosis of malocclusion problems in orthodontics. The system for the acquisition of the 3D profile of the imprint, the procedure for the detection of the interstices between teeth, and the approach for the identification of the type of tooth are described, as well as the algorithm for the reconstruction of the surface of each type of tooth. A new approach for the detection of feature points, called the watershed algorithm, is described in detail. The algorithm is a two-stage procedure which tracks the position of local minima at four different scales and produces a final map of the position of the minima. Experimental results of the application of the watershed algorithm on actual 3D images of dental imprints are presented for molars, premolars and canines. The segmentation approach for the analysis of the shape of incisors is also described in detail.

  19. Increasing the effective aperture of a detector and enlarging the receiving field of view in a 3D imaging lidar system through hexagonal prism beam splitting.

    Science.gov (United States)

    Lee, Xiaobao; Wang, Xiaoyi; Cui, Tianxiang; Wang, Chunhui; Li, Yunxi; Li, Hailong; Wang, Qi

    2016-07-11

    The detector in a highly accurate, high-definition scanning 3D imaging lidar system requires high frequency bandwidth and sufficient photosensitive area. To solve the problem of the small photosensitive area of an existing indium gallium arsenide detector with a certain frequency bandwidth, this study proposes a method for increasing the receiving field of view (FOV) and enlarging the effective photosensitive aperture of such a detector through hexagonal prism beam splitting. The principle and construction of hexagonal prism beam splitting are also discussed in this research. Accordingly, a receiving optical system with two hexagonal prisms is provided and the beam-splitting effect of the simulation experiment is analyzed. Using this novel method, the receiving optical system's FOV can be effectively improved up to ±5°, and the effective photosensitive aperture of the detector is increased from 0.5 mm to 1.5 mm.

  20. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Energy Technology Data Exchange (ETDEWEB)

    Paglieroni, D

    2005-02-04

    Locating specific 3D objects in overhead images is an important problem in many remote sensing applications. 3D objects may contain either one connected component or multiple disconnected components. Solutions must accommodate images acquired with diverse sensors at various times of the day, in various seasons of the year, or under various weather conditions. Moreover, the physical manifestation of a 3D object with fixed physical dimensions in an overhead image is highly dependent on object physical dimensions, object position/orientation, image spatial resolution, and imaging geometry (e.g., obliqueness). This paper describes a two-stage computer-assisted approach for locating 3D objects in overhead images. In the matching stage, the computer matches models of 3D objects to overhead images. The strongest degree of match over all object orientations is computed at each pixel. Unambiguous local maxima in the degree of match as a function of pixel location are then found. In the cueing stage, the computer sorts image thumbnails in descending order of figure-of-merit and presents them to human analysts for visual inspection and interpretation. The figure-of-merit associated with an image thumbnail is computed from the degrees of match to a 3D object model associated with unambiguous local maxima that lie within the thumbnail. This form of computer assistance is invaluable when most of the relevant thumbnails are highly ranked, and the amount of inspection time needed is much less for the highly ranked thumbnails than for images as a whole.
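
    The matching stage described above can be approximated with generic rotation-swept template matching: the sketch below keeps, at each pixel, the strongest normalized cross-correlation over a sweep of model orientations. It uses scikit-image and SciPy as stand-ins for the paper's degree-of-match computation, and the angle sweep and template are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate
from skimage.feature import match_template

def best_match_over_orientations(image, template, angles_deg):
    """At each pixel, the maximum normalized cross-correlation score over a
    sweep of template orientations (a generic stand-in for the degree-of-match
    used in the cueing approach above)."""
    best = np.full(image.shape, -np.inf)
    for ang in angles_deg:
        t = rotate(template, ang, reshape=True, order=1)
        score = match_template(image, t, pad_input=True)  # same-size score map
        best = np.maximum(best, score)
    return best

# Example: sweep a bright-bar template over 0-180 degrees in 15-degree steps.
img = np.random.rand(200, 200)
tmpl = np.zeros((21, 21))
tmpl[8:13, :] = 1.0
scores = best_match_over_orientations(img, tmpl, np.arange(0, 180, 15))
```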

  1. 3D laser imaging for concealed object identification

    Science.gov (United States)

    Berechet, Ion; Berginc, Gérard; Berechet, Stefan

    2014-09-01

    This paper deals with new optical non-conventional 3D laser imaging. Optical non-conventional imaging exploits the advantages of laser imaging to form a three-dimensional image of the scene. 3D laser imaging can be used for three-dimensional medical imaging, topography, surveillance and robotic vision because of its ability to detect and recognize objects. In this paper, we present a 3D laser imaging approach for concealed object identification. The objective of this new 3D laser imaging is to provide the user with a complete 3D reconstruction of the concealed object from available 2D data that are limited in number and of low representativeness. The 2D laser data used in this paper come from simulations based on the calculation of the laser interactions with the different interfaces of the scene of interest, as well as from experimental results. We show the global 3D reconstruction procedures capable of separating objects from foliage and reconstructing a three-dimensional image of the considered object. We present examples of reconstruction and completion of three-dimensional images and analyse the different parameters of the identification process, such as resolution, camouflage scenario, noise impact and lacunarity degree.

  2. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance.

    Science.gov (United States)

    Dibildox, Gerardo; Baka, Nora; Punt, Mark; Aben, Jean-Paul; Schultz, Carl; Niessen, Wiro; van Walsum, Theo

    2014-09-01

    The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P>0.1) but did improve robustness with regards to the initialization of the 3D models. The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.

  3. 3D/3D registration of coronary CTA and biplane XA reconstructions for improved image guidance

    Energy Technology Data Exchange (ETDEWEB)

    Dibildox, Gerardo, E-mail: g.dibildox@erasmusmc.nl; Baka, Nora; Walsum, Theo van [Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Punt, Mark; Aben, Jean-Paul [Pie Medical Imaging, 6227 AJ Maastricht (Netherlands); Schultz, Carl [Department of Cardiology, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands); Niessen, Wiro [Quantitative Imaging Group, Faculty of Applied Sciences, Delft University of Technology, 2628 CJ Delft, The Netherlands and Biomedical Imaging Group Rotterdam, Departments of Radiology and Medical Informatics, Erasmus Medical Center, 3015 GE Rotterdam (Netherlands)

    2014-09-15

    Purpose: The authors aim to improve image guidance during percutaneous coronary interventions of chronic total occlusions (CTO) by providing information obtained from computed tomography angiography (CTA) to the cardiac interventionist. To this end, the authors investigate a method to register a 3D CTA model to biplane reconstructions. Methods: The authors developed a method for registering preoperative coronary CTA with intraoperative biplane x-ray angiography (XA) images via 3D models of the coronary arteries. The models are extracted from the CTA and biplane XA images, and are temporally aligned based on CTA reconstruction phase and XA ECG signals. Rigid spatial alignment is achieved with a robust probabilistic point set registration approach using Gaussian mixture models (GMMs). This approach is extended by including orientation in the Gaussian mixtures and by weighting bifurcation points. The method is evaluated on retrospectively acquired coronary CTA datasets of 23 CTO patients for which biplane XA images are available. Results: The Gaussian mixture model approach achieved a median registration accuracy of 1.7 mm. The extended GMM approach including orientation was not significantly different (P > 0.1) but did improve robustness with regards to the initialization of the 3D models. Conclusions: The authors demonstrated that the GMM approach can effectively be applied to register CTA to biplane XA images for the purpose of improving image guidance in percutaneous coronary interventions.
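
    The rigid GMM alignment described above can be illustrated with a much simplified sketch: one point set defines isotropic Gaussian components and the pose of the other is optimized to maximize the likelihood. The orientation term and bifurcation weighting of the extended approach are omitted, and the fixed kernel width and choice of the Powell optimizer are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def gmm_nll(params, moving, fixed, sigma):
    """Negative log-likelihood of the rigidly transformed 'moving' points under
    an isotropic GMM whose components are centred on the 'fixed' points."""
    rot = Rotation.from_rotvec(params[:3])
    moved = rot.apply(moving) + params[3:]
    d2 = ((moved[:, None, :] - fixed[None, :, :]) ** 2).sum(axis=-1)
    return -np.logaddexp.reduce(-d2 / (2.0 * sigma ** 2), axis=1).sum()

def register_rigid_gmm(moving, fixed, sigma=5.0):
    """Estimate a rigid transform (rotation vector + translation) by minimizing
    the GMM negative log-likelihood."""
    res = minimize(gmm_nll, np.zeros(6), args=(moving, fixed, sigma), method="Powell")
    return Rotation.from_rotvec(res.x[:3]), res.x[3:]

# Example: recover a small known rotation/translation on synthetic centreline points.
rng = np.random.default_rng(1)
fixed = rng.normal(size=(80, 3)) * 20.0
true_rot = Rotation.from_rotvec([0.05, -0.02, 0.1])
moving = true_rot.inv().apply(fixed - np.array([2.0, -1.0, 3.0]))
rot_est, t_est = register_rigid_gmm(moving, fixed)
```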

  4. Landmine detection by 3D GPR system

    Science.gov (United States)

    Sato, Motoyuki; Yokota, Yuya; Takahashi, Kazunori; Grasmueck, Mark

    2012-06-01

    In order to demonstrate the possibility of Ground Penetrating Radar (GPR) for detection of small buried objects such as landmine and UXO, conducted demonstration tests by using the 3DGPR system, which is a GPR system combined with high accuracy positing system using a commercial laser positioning system (iGPS). iGPS can provide absolute and better than centimetre precise x,y,z coordinates to multiple mine sensors at the same time. The developed " 3DGPR" system is efficient and capable of high-resolution 3D shallow subsurface scanning of larger areas (25 m2 to thousands of square meters) with irregular topography . Field test by using a 500MHz GPR system equipped with 3DGPR system was conducted. PMN-2 and Type-72 mine models have been buried at the depth of 5-20cm in sand. We could demonstrate that the 3DGPR can visualize each of these buried land mines very clearly.

  5. Reconstruction of High Resolution 3D Objects from Incomplete Images and 3D Information

    Directory of Open Access Journals (Sweden)

    Alexander Pacheco

    2014-05-01

    To this day, digital object reconstruction is a quite complex area that requires many techniques and novel approaches, in which high-resolution 3D objects present one of the biggest challenges. There are mainly two different kinds of methods that can be used to reconstruct high-resolution objects and images: passive methods and active methods. These methods depend on the type of information available as input for modeling the 3D objects. Passive methods use information contained in the images, while active methods make use of controlled light sources such as lasers. The reconstruction of 3D objects is quite complex and there is no unique solution; the use of specific methodologies for the reconstruction of certain objects, such as human faces or molecular structures, is also very common. This paper proposes a novel hybrid methodology, composed of 10 phases that combine active and passive methods, using images and a laser in order to supplement the missing information and obtain better results in the 3D object reconstruction. Finally, the proposed methodology proved its efficiency on two topologically complex objects.

  6. Morphometrics, 3D Imaging, and Craniofacial Development

    Science.gov (United States)

    Hallgrimsson, Benedikt; Percival, Christopher J.; Green, Rebecca; Young, Nathan M.; Mio, Washington; Marcucio, Ralph

    2017-01-01

    Recent studies have shown how volumetric imaging and morphometrics can add significantly to our understanding of morphogenesis, the developmental basis for variation and the etiology of structural birth defects. On the other hand, the complex questions and diverse imaging data in developmental biology present morphometrics with more complex challenges than applications in virtually any other field. Meeting these challenges is necessary in order to understand the mechanistic basis for variation in complex morphologies. This chapter reviews the methods and theory that enable the application of modern landmark-based morphometrics to developmental biology and craniofacial development, in particular. We discuss the theoretical foundations of morphometrics as applied to development and review the basic approaches to the quantification of morphology. Focusing on geometric morphometrics, we discuss the principal statistical methods for quantifying and comparing morphological variation and covariation structure within and among groups. Finally, we discuss the future directions for morphometrics in developmental biology that will be required for approaches that enable quantitative integration across the genotype-phenotype map. PMID:26589938

  7. New Method for 2D Image Edge-detection in Layer-layer 3D Testing System

    Institute of Scientific and Technical Information of China (English)

    YANG Yu-xiao; XIONG Kai-li; ZHOU Jian; ZHAO Ming-tao; TAN Yu-shan

    2003-01-01

    A new method based on material removal and cross-section optical scanning is investigated. The advantage of this method is that the internal and external information of the specimen can be obtained at the same precision. In order to eliminate pulse and Gaussian noise, multi-scale dyadic wavelet methods are presented and discussed. The experimental results show that the multi-scale dyadic wavelet methods can successfully extract the features from a noisy image. The accuracy of 2D edge detection is 5.4 μm with a resolution of 2.7 μm.

  8. FELIX 3D display: an interactive tool for volumetric imaging

    Science.gov (United States)

    Langhans, Knut; Bahr, Detlef; Bezecny, Daniel; Homann, Dennis; Oltmann, Klaas; Oltmann, Krischan; Guill, Christian; Rieper, Elisabeth; Ardey, Goetz

    2002-05-01

    The FELIX 3D display belongs to the class of volumetric displays using the swept-volume technique. It is designed to display images created by standard CAD applications, which can be easily imported and interactively transformed in real time by the FELIX control software. The images are drawn on a spinning screen by acousto-optic, galvanometric or polygon-mirror deflection units with integrated lasers and a color mixer. The modular design of the display enables the user to operate several equal or different projection units in parallel and to use appropriate screens for the specific purpose. The FELIX 3D display is a compact, light, extensible and easy-to-transport system. It mainly consists of inexpensive, standard, off-the-shelf components for easy implementation. This setup makes it a powerful and flexible tool that keeps pace with today's rapid technological progress. Potential applications include imaging in the fields of entertainment, air traffic control, medical imaging, computer-aided design, and scientific data visualization.

  9. 3D flash lidar imager onboard UAV

    Science.gov (United States)

    Zhou, Guoqing; Liu, Yilong; Yang, Jiazhi; Zhang, Rongting; Su, Chengjie; Shi, Yujun; Zhou, Xiang

    2014-11-01

    A new generation of flash LiDAR sensor called GLidar-I is presented in this paper. GLidar-I is being developed by Guilin University of Technology in cooperation with the Guilin Institute of Optical Communications. GLidar-I consists of a control and processing system, a transmitting system and a receiving system. Each of the components has been designed and implemented, and tests, experiments and validation have been conducted for each component. The experimental results demonstrate that the developed GLidar-I can effectively measure distances of about 13 m with an accuracy of about 11 cm in the laboratory.

  10. 3D Shape Indexing and Retrieval Using Characteristics level images

    Directory of Open Access Journals (Sweden)

    Abdelghni Lakehal

    2012-05-01

    In this paper, we propose an improved version of the descriptor that we proposed previously. The descriptor is based on a set of binary images extracted from the 3D model, called level images and denoted LI. The set LI is often bulky, which is why we introduce the X-means technique to reduce its size, instead of the K-means used in the old version. A 2D binary image descriptor is introduced to extract the descriptor vectors of the 3D model. For a comparative study of the two versions of the descriptor, we used the National Taiwan University (NTU) 3D object database.

  11. Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization

    Science.gov (United States)

    2014-05-01

    Report: Potential Cost Savings with 3D Printing Combined With 3D Imaging and CPLM for Fleet Maintenance and Revitalization, David N. Ford, 2014. Research context: learning-curve savings forecasted in the SHIPMAIN maintenance initiative have not materialized; the report considers additive manufacturing (3D printing) in this context.

  12. Fully 3D refraction correction dosimetry system

    Science.gov (United States)

    Manjappa, Rakesh; Sharath Makki, S.; Kumar, Rajesh; Mohan Vasu, Ram; Kanhirodan, Rajan

    2016-02-01

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken for various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using the conventional algebraic reconstruction technique (ART) and the refractive-index-corrected ART (ART-rc) algorithm. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and the RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched

  13. Fully 3D refraction correction dosimetry system.

    Science.gov (United States)

    Manjappa, Rakesh; Makki, S Sharath; Kumar, Rajesh; Vasu, Ram Mohan; Kanhirodan, Rajan

    2016-02-21

    The irradiation of selective regions in a polymer gel dosimeter results in an increase in optical density and refractive index (RI) at those regions. An optical tomography-based dosimeter depends on the rayline path through the dosimeter to estimate and reconstruct the dose distribution. The refraction of light passing through a dose region results in artefacts in the reconstructed images. These refraction errors are dependent on the scanning geometry and collection optics. We developed a fully 3D image reconstruction algorithm, algebraic reconstruction technique-refraction correction (ART-rc), that corrects for the refractive index mismatches present in a gel dosimeter scanner not only at the boundary, but also for any rayline refraction due to multiple dose regions inside the dosimeter. In this study, simulation and experimental studies have been carried out to reconstruct a 3D dose volume using 2D CCD measurements taken for various views. The study also focuses on the effectiveness of using different refractive-index matching media surrounding the gel dosimeter. Since the optical density is assumed to be low for a dosimeter, filtered backprojection is routinely used for reconstruction. We carry out the reconstructions using the conventional algebraic reconstruction technique (ART) and the refractive-index-corrected ART (ART-rc) algorithm. Reconstructions based on the FDK algorithm for cone-beam tomography have also been carried out for comparison. Line scanners and point detectors are used to obtain reconstructions plane by plane. Rays passing through a dose region with an RI mismatch do not reach the detector in the same plane, depending on the angle of incidence and the RI. In the fully 3D scanning setup using 2D array detectors, light rays that undergo refraction are still collected and hence can still be accounted for in the reconstruction algorithm. It is found that, for the central region of the dosimeter, the usable radius using the ART-rc algorithm with water as RI matched
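
    For context, a plain additive ART (Kaczmarz) iteration for a linear projection model A x = b is sketched below. The refraction-corrected ray paths that ART-rc introduces would enter through how the system matrix A is built, which is not modelled here; the relaxation factor and iteration count are arbitrary.

```python
import numpy as np

def art_reconstruct(A, b, n_iters=10, relax=0.5):
    """Additive ART: cycle over rays (rows of A, each holding the path-length
    weights of one ray through the voxel grid) and update the volume estimate x
    so that the corresponding projection approaches the measurement b[i]."""
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_iters):
        for i in range(A.shape[0]):
            if row_norms[i] == 0.0:
                continue
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# Tiny example: a 4-voxel "volume" probed by 6 random rays.
rng = np.random.default_rng(0)
A = rng.random((6, 4))
x_true = np.array([0.0, 1.0, 0.5, 0.2])
print(art_reconstruct(A, A @ x_true, n_iters=50))
```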

  14. Automation and Preclinical Evaluation of a Dedicated Emission Mammotomography System for Fully 3-D Molecular Breast Imaging

    Science.gov (United States)

    2009-10-01

    [Report excerpt garbled in extraction. Recoverable figure caption: gamma camera (right) and digital flat-panel detector (left), with colored arrows illustrating system motions (azimuthal for SPECT and CT, and polar and ROR for the SPECT subsystem). The text also mentions scintillator-based compact, quantized-detector-element gamma cameras and early contrast-detail observer studies.]

  15. Weed detection in 3D images

    NARCIS (Netherlands)

    Piron, A.; Heijden, van der F.; Destain, M.F.

    2011-01-01

    Machine vision has been successfully used for mechanical destruction of weeds between rows of crops. Knowledge of the position of the rows where crops should be growing, and the assumption that plants growing outside such positions are weeds, may be used in such systems. However, for many horticultural crops, the automatic removal of weeds from inside a row, or from bands of crops in which the weeds are mixed with the plants in a random manner, is not yet solved.

  16. Development of 3D microwave imaging reflectometry in LHD (invited).

    Science.gov (United States)

    Nagayama, Y; Kuwahara, D; Yoshinaga, T; Hamada, Y; Kogi, Y; Mase, A; Tsuchiya, H; Tsuji-Iio, S; Yamaguchi, S

    2012-10-01

    Three-dimensional (3D) microwave imaging reflectometry has been developed in the Large Helical Device to visualize the fluctuating reflection surface caused by density fluctuations. The plasma is illuminated by a probe wave with four frequencies, which correspond to four radial positions. The imaging optics forms the image of the cut-off surface onto 2D (7 × 7 channel) horn-antenna mixer arrays. Multi-channel receivers have also been developed using micro-strip-line technology to handle many channels at reasonable cost. This system is first applied to observe the edge harmonic oscillation (EHO), an MHD mode with many harmonics that appears in the edge plasma. A narrow structure along the field lines is observed during EHO.

  17. 3D IMAGING OF INDIVIDUAL PARTICLES: A REVIEW

    Directory of Open Access Journals (Sweden)

    Eric Pirard

    2012-06-01

    In recent years, impressive progress has been made in digital imaging and in particular in the three-dimensional visualisation and analysis of objects. This paper reviews the most recent literature on three-dimensional imaging, with special attention to particulate systems analysis. After an introduction recalling some important concepts in spatial sampling and digital imaging, the paper reviews a series of techniques with a clear distinction between surfometric and volumetric principles. The literature review is as broad as possible, covering materials science as well as biology, while keeping an eye on emerging technologies in optics and physics. The paper should be of interest to any scientist trying to picture particles in 3D with the best possible resolution for accurate size and shape estimation. Although the techniques covered are adequate for nanoscopic and microscopic particles, no particular size limit was imposed in compiling the review.

  18. Using an Unmanned Aerial Vehicle-Based Digital Imaging System to Derive a 3D Point Cloud for Landslide Scarp Recognition

    Directory of Open Access Journals (Sweden)

    Abdulla Al-Rawabdeh

    2016-01-01

    Landslides often cause economic losses, property damage, and loss of lives. Monitoring landslides using high spatial and temporal resolution imagery, and the ability to quickly identify landslide regions, are the basis for emergency disaster management. This study presents a comprehensive system that uses unmanned aerial vehicles (UAVs) and Semi-Global dense Matching (SGM) techniques to identify and extract landslide scarp data. The selected study area is located along a major highway in a mountainous region in Jordan, and contains creeping landslides induced by heavy rainfall. Field observations across the slope body and a deformation analysis along the highway and existing gabions indicate that the slope is active and that scarp features across the slope will continue to open and develop new tension-crack features, leading to the downward movement of rocks. The identification of landslide scarps in this study was performed on a dense 3D point cloud of topographic information generated from high-resolution images captured using a low-cost UAV, together with a target-based camera calibration procedure for a low-cost large-field-of-view camera. An automated approach was used to accurately detect and extract the landslide head scarps based on geomorphological factors: the ratio of normalized eigenvalues (λ1/λ2, with λ1 ≥ λ2 ≥ λ3) derived using principal component analysis, topographic surface roughness index values, and local-neighborhood slope measurements from the 3D image-based point cloud. Validation of the results was performed using root mean square error analysis and a confusion (error) matrix between manually digitized landslide scarps and the automated approaches. The experimental results using the fully automated 3D point-based analysis algorithms show that these approaches can effectively distinguish landslide scarps. The proposed algorithms can accurately identify and extract landslide scarps with centimeter-scale accuracy. In addition, the combination
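
    The geomorphological point features mentioned above (eigenvalue ratios and a roughness-like measure from a local PCA of the point cloud) can be sketched as follows. The neighbourhood size and the use of λ3/(λ1+λ2+λ3) as a surface-variation proxy are assumptions rather than the authors' exact definitions.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_pca_features(points, k=20):
    """Per-point features from a k-nearest-neighbour PCA of a 3D point cloud:
    the lambda1/lambda2 eigenvalue ratio and the surface variation
    lambda3/(lambda1+lambda2+lambda3), with lambda1 >= lambda2 >= lambda3."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    ratio = np.empty(len(points))
    variation = np.empty(len(points))
    for i, nbrs in enumerate(idx):
        lam = np.sort(np.linalg.eigvalsh(np.cov(points[nbrs].T)))[::-1]
        ratio[i] = lam[0] / max(lam[1], 1e-12)
        variation[i] = lam[2] / max(lam.sum(), 1e-12)
    return ratio, variation

# Example on a noisy plane with a step (a scarp-like break) along y = 0:
rng = np.random.default_rng(2)
xy = rng.uniform(-10, 10, size=(4000, 2))
z = 0.5 * (xy[:, 1] > 0) + rng.normal(0, 0.01, size=4000)
ratio, variation = local_pca_features(np.column_stack([xy, z]))
```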

  19. Image-based 3D canopy reconstruction to determine potential productivity in complex multi-species crop systems.

    Science.gov (United States)

    Burgess, Alexandra J; Retkute, Renata; Pound, Michael P; Mayes, Sean; Murchie, Erik H

    2017-03-01

    Intercropping systems contain two or more species simultaneously in close proximity. Due to contrasting features of the component crops, quantification of the light environment and photosynthetic productivity is extremely difficult. However it is an essential component of productivity. Here, a low-tech but high-resolution method is presented that can be applied to single- and multi-species cropping systems to facilitate characterization of the light environment. Different row layouts of an intercrop consisting of Bambara groundnut (Vigna subterranea) and proso millet (Panicum miliaceum) have been used as an example and the new opportunities presented by this approach have been analysed. Three-dimensional plant reconstruction, based on stereo cameras, combined with ray tracing was implemented to explore the light environment within the Bambara groundnut-proso millet intercropping system and associated monocrops. Gas exchange data were used to predict the total carbon gain of each component crop. The shading influence of the tall proso millet on the shorter Bambara groundnut results in a reduction in total canopy light interception and carbon gain. However, the increased leaf area index (LAI) of proso millet, higher photosynthetic potential due to the C4 pathway and sub-optimal photosynthetic acclimation of Bambara groundnut to shade means that increasing the number of rows of millet will lead to greater light interception and carbon gain per unit ground area, despite Bambara groundnut intercepting more light per unit leaf area. Three-dimensional reconstruction combined with ray tracing provides a novel, accurate method of exploring the light environment within an intercrop that does not require difficult measurements of light interception and data-intensive manual reconstruction, especially for such systems with inherently high spatial possibilities. It provides new opportunities for calculating potential productivity within multi-species cropping systems.

  20. Weed detection in 3D images

    OpenAIRE

    Piron, Alexis; Van der heijden, F.; Destain, Marie-France

    2011-01-01

    Machine vision has been successfully used for mechanical destruction of weeds between rows of crops. Knowledge of the position of the rows where crops should be growing and the assumption that plants growing outside such positions are weeds may be used in such systems. However for many horticultural crops, the automatic removal of weeds from inside a row or bands of crops in which the weeds are mixed with plants in a random manner is not solved. The aim of this study was to verify that plant ...

  1. Stereoscopic contents authoring system for 3D DMB data service

    Science.gov (United States)

    Lee, BongHo; Yun, Kugjin; Hur, Namho; Kim, Jinwoong; Lee, SooIn

    2009-02-01

    This paper presents a stereoscopic contents authoring system that covers the creation and editing of stereoscopic multimedia contents for the 3D DMB (Digital Multimedia Broadcasting) data services. The main concept of 3D DMB data service is that, instead of full 3D video, partial stereoscopic objects (stereoscopic JPEG, PNG and MNG) are stereoscopically displayed on the 2D background video plane. In order to provide stereoscopic objects, we design and implement a 3D DMB content authoring system which provides the convenient and straightforward contents creation and editing functionalities. For the creation of stereoscopic contents, we mainly focused on two methods: CG (Computer Graphics) based creation and real image based creation. In the CG based creation scenario where the generated CG data from the conventional MAYA or 3DS MAX tool is rendered to generate the stereoscopic images by applying the suitable disparity and camera parameters, we use X-file for the direct conversion to stereoscopic objects, so called 3D DMB objects. In the case of real image based creation, the chroma-key method is applied to real video sequences to acquire the alpha-mapped images which are in turn directly converted to stereoscopic objects. The stereoscopic content editing module includes the timeline editor for both the stereoscopic video and stereoscopic objects. For the verification of created stereoscopic contents, we implemented the content verification module to verify and modify the contents by adjusting the disparity. The proposed system will leverage the power of stereoscopic contents creation for mobile 3D data service especially targeted for T-DMB with the capabilities of CG and real image based contents creation, timeline editing and content verification.

  2. Novel System for Real-Time Integration of 3-D Echocardiography and Fluoroscopy for Image-Guided Cardiac Interventions: Preclinical Validation and Clinical Feasibility Evaluation

    Science.gov (United States)

    Housden, R. James; Ma, Yingliang; Rajani, Ronak; Gao, Gang; Nijhof, Niels; Cathier, Pascal; Bullens, Roland; Gijsbers, Geert; Parish, Victoria; Kapetanakis, Stamatis; Hancock, Jane; Rinaldi, C. Aldo; Cooklin, Michael; Gill, Jaswinder; Thomas, Martyn; O'neill, Mark D.; Razavi, Reza; Rhode, Kawal S.

    2014-01-01

    Real-time imaging is required to guide minimally invasive catheter-based cardiac interventions. While transesophageal echocardiography allows for high-quality visualization of cardiac anatomy, X-ray fluoroscopy provides excellent visualization of devices. We have developed a novel image fusion system that allows real-time integration of 3-D echocardiography and the X-ray fluoroscopy. The system was validated in the following two stages: 1) preclinical to determine function and validate accuracy; and 2) in the clinical setting to assess clinical workflow feasibility and determine overall system accuracy. In the preclinical phase, the system was assessed using both phantom and porcine experimental studies. Median 2-D projection errors of 4.5 and 3.3 mm were found for the phantom and porcine studies, respectively. The clinical phase focused on extending the use of the system to interventions in patients undergoing either atrial fibrillation catheter ablation (CA) or transcatheter aortic valve implantation (TAVI). Eleven patients were studied with nine in the CA group and two in the TAVI group. Successful real-time view synchronization was achieved in all cases with a calculated median distance error of 2.2 mm in the CA group and 3.4 mm in the TAVI group. A standard clinical workflow was established using the image fusion system. These pilot data confirm the technical feasibility of accurate real-time echo-fluoroscopic image overlay in clinical practice, which may be a useful adjunct for real-time guidance during interventional cardiac procedures. PMID:27170872

  3. Image based 3D city modeling : Comparative study

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-06-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other man-made features belonging to an urban area. The demand for 3D city modeling is increasing rapidly for various engineering and non-engineering applications. Generally, four main image based approaches are used to generate virtual 3D city models: sketch based modeling, procedural grammar based modeling, close range photogrammetry based modeling, and modeling based mainly on Computer Vision techniques. SketchUp, CityEngine, Photomodeler and Agisoft Photoscan are the main software packages representing these approaches, respectively, each with different methods suitable for image based 3D city modeling. A literature study shows that, to date, no complete comparative study of creating a full 3D city model from images is available. This paper gives a comparative assessment of these four image based 3D modeling approaches, based mainly on data acquisition methods, data processing techniques and output 3D model products. For this research work, the study area is the campus of the civil engineering department, Indian Institute of Technology, Roorkee (India); this 3D campus acts as a prototype for a city. The study also explains the various governing parameters, factors and work experiences, and gives a brief introduction to the strengths and weaknesses of the four image based techniques. Some personal comments are also given regarding what can and cannot be done with each software package. Finally, the study concludes that every software package has some advantages and limitations, and that the choice of software depends on the user requirements of the 3D project. For a normal visualization project, SketchUp is a good option. For 3D documentation records, Photomodeler gives good results. For large city

  4. A colour image reproduction framework for 3D colour printing

    Science.gov (United States)

    Xiao, Kaida; Sohiab, Ali; Sun, Pei-li; Yates, Julian M.; Li, Changjun; Wuerger, Sophie

    2016-10-01

    In this paper, current technologies in full colour 3D printing are introduced, and a framework for the colour image reproduction process in 3D colour printing is proposed, with a special focus on colour management for 3D printed objects. Two approaches, colorimetric colour reproduction and spectral based colour reproduction, are proposed in order to faithfully reproduce colours in 3D objects. Two key studies, colour reproduction for soft tissue prostheses and colour uniformity correction across different orientations, are described subsequently. The results clearly show that applying the proposed colour image reproduction framework significantly enhances colour reproduction performance, and that post colour corrections achieve a further improvement for 3D printed objects.

  5. Parallel computing helps 3D depth imaging, processing

    Energy Technology Data Exchange (ETDEWEB)

    Nestvold, E. O. [IBM, Houston, TX (United States); Su, C. B. [IBM, Dallas, TX (United States); Black, J. L. [Landmark Graphics, Denver, CO (United States); Jack, I. G. [BP Exploration, London (United Kingdom)

    1996-10-28

    The significance of 3D seismic data in the petroleum industry during the past decade cannot be overstated. Having started as a technology too expensive to be utilized except by major oil companies, 3D technology is now routinely used by independent operators in the US and Canada. As with all emerging technologies, documentation of successes has been limited. There are some successes, however, that have been summarized in the literature in the recent past. Key technological developments contributing to this success have been major advances in RISC workstation technology, 3D depth imaging, and parallel computing. This article presents the basic concepts of parallel seismic computing, showing how it impacts both 3D depth imaging and more-conventional 3D seismic processing.

  6. A dynamic 3D foot reconstruction system.

    Science.gov (United States)

    Thabet, Ali K; Trucco, Emanuele; Salvi, Joaquim; Wang, Weijie; Abboud, Rami J

    2011-01-01

    Foot problems are varied and range from simple disorders through to complex diseases and joint deformities. Wherever possible, the use of insoles, or orthoses, is preferred over surgery. Current insole design techniques are based on static measurements of the foot, despite the fact that orthoses are prevalently used in dynamic conditions while walking or running. This paper presents the design and implementation of a structured-light prototype system providing dense three dimensional (3D) measurements of the foot in motion, and its use to show that foot measurements in dynamic conditions differ significantly from their static counterparts. The input to the system is a video sequence of a foot during a single step; the output is a 3D reconstruction of the plantar surface of the foot for each frame of the input. Engineering and clinical tests were carried out for the validation of the system. The accuracy of the system was found to be 0.34 mm with planar test objects. In tests with real feet, the system proved repeatable, with reconstruction differences between trials one week apart averaging 2.44 mm (static case) and 2.81 mm (dynamic case). Furthermore, a study was performed to compare the effective length of the foot between static and dynamic reconstructions using the 4D system. Results showed an average increase of 9 mm for the dynamic case. This increase is substantial for orthotics design, cannot be captured by a static system, and its subject-specific measurement is crucial for the design of effective foot orthoses.

  7. EISCAT Aperture Synthesis Imaging (EASI _3D) for the EISCAT_3D Project

    Science.gov (United States)

    La Hoz, Cesar; Belyey, Vasyl

    2012-07-01

    Aperture Synthesis Imaging Radar (ASIR) is one of the technologies adopted by the EISCAT_3D project to endow it with imaging capabilities in 3-dimensions that includes sub-beam resolution. Complemented by pulse compression, it will provide 3-dimensional images of certain types of incoherent scatter radar targets resolved to about 100 metres at 100 km range, depending on the signal-to-noise ratio. This ability will open new research opportunities to map small structures associated with non-homogeneous, unstable processes such as aurora, summer and winter polar radar echoes (PMSE and PMWE), Natural Enhanced Ion Acoustic Lines (NEIALs), structures excited by HF ionospheric heating, meteors, space debris, and others. The underlying physico-mathematical principles of the technique are the same as the technique employed in radioastronomy to image stellar objects; both require sophisticated inversion techniques to obtain reliable images.

  8. Fully Automatic 3D Reconstruction of Histological Images

    CERN Document Server

    Bagci, Ulas

    2009-01-01

    In this paper, we propose a computational framework for 3D volume reconstruction from 2D histological slices using registration algorithms in feature space. To improve the quality of reconstructed 3D volume, first, intensity variations in images are corrected by an intensity standardization process which maps image intensity scale to a standard scale where similar intensities correspond to similar tissues. Second, a subvolume approach is proposed for 3D reconstruction by dividing standardized slices into groups. Third, in order to improve the quality of the reconstruction process, an automatic best reference slice selection algorithm is developed based on an iterative assessment of image entropy and mean square error of the registration process. Finally, we demonstrate that the choice of the reference slice has a significant impact on registration quality and subsequent 3D reconstruction.
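
The intensity standardization step described above maps slices onto a common intensity scale before registration. A minimal sketch of the same idea, using histogram matching from scikit-image rather than the authors' standard-scale mapping, might look like this; the stack of random slices is only a stand-in for real histological sections.

```python
# Hedged sketch: intensity standardisation of a slice stack by histogram
# matching against a chosen reference slice (scikit-image), as a stand-in for
# the standard-scale mapping described in the abstract.
import numpy as np
from skimage.exposure import match_histograms

def standardize_stack(slices, reference_index=0):
    """slices: list of 2D arrays; returns intensity-standardised copies."""
    reference = slices[reference_index]
    return [match_histograms(s, reference) for s in slices]

# placeholder data: five slices with drifting intensity scales
stack = [np.random.rand(128, 128) * (50 + 20 * k) for k in range(5)]
standardized = standardize_stack(stack)
print([round(float(s.mean()), 1) for s in standardized])
```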

  9. Lossless Compression of Medical Images Using 3D Predictors.

    Science.gov (United States)

    Lucas, Luis; Rodrigues, Nuno; Cruz, Luis; Faria, Sergio

    2017-06-09

    This paper describes a highly efficient method for lossless compression of volumetric sets of medical images, such as CTs or MRIs. The proposed method, referred to as 3D-MRP, is based on the principle of minimum rate predictors (MRP), which is one of the state-of-the-art lossless compression technologies, presented in the data compression literature. The main features of the proposed method include the use of 3D predictors, 3D-block octree partitioning and classification, volume-based optimisation and support for 16 bit-depth images. Experimental results demonstrate the efficiency of the 3D-MRP algorithm for the compression of volumetric sets of medical images, achieving gains above 15% and 12% for 8 bit and 16 bit-depth contents, respectively, when compared to JPEG-LS, JPEG2000, CALIC, HEVC, as well as other proposals based on MRP algorithm.

  10. DCT and DST Based Image Compression for 3D Reconstruction

    Science.gov (United States)

    Siddeq, Mohammed M.; Rodrigues, Marcos A.

    2017-03-01

    This paper introduces a new method for 2D image compression whose quality is demonstrated through accurate 3D reconstruction using structured light techniques and 3D reconstruction from multiple viewpoints. The method is based on two discrete transforms: (1) A one-dimensional Discrete Cosine Transform (DCT) is applied to each row of the image. (2) The output from the previous step is transformed again by a one-dimensional Discrete Sine Transform (DST), which is applied to each column of data generating new sets of high-frequency components followed by quantization of the higher frequencies. The output is then divided into two parts where the low-frequency components are compressed by arithmetic coding and the high frequency ones by an efficient minimization encoding algorithm. At decompression stage, a binary search algorithm is used to recover the original high frequency components. The technique is demonstrated by compressing 2D images up to 99% compression ratio. The decompressed images, which include images with structured light patterns for 3D reconstruction and from multiple viewpoints, are of high perceptual quality yielding accurate 3D reconstruction. Perceptual assessment and objective quality of compression are compared with JPEG and JPEG2000 through 2D and 3D RMSE. Results show that the proposed compression method is superior to both JPEG and JPEG2000 concerning 3D reconstruction, and with equivalent perceptual quality to JPEG2000.
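
A much-simplified sketch of the two-transform idea (row-wise DCT followed by column-wise DST, with coarser quantization of higher frequencies) is shown below; the quantization ramp and the omission of the entropy-coding and minimization-encoding stages are simplifying assumptions, not the authors' codec.

```python
# Simplified sketch: 1D DCT along rows, 1D DST along columns, then
# frequency-dependent quantization (higher frequencies quantized more coarsely).
import numpy as np
from scipy.fftpack import dct, idct, dst, idst

def forward(img):
    c = dct(img.astype(float), type=2, norm='ortho', axis=1)  # DCT on each row
    c = dst(c, type=2, norm='ortho', axis=0)                  # DST on each column
    rows, cols = np.indices(img.shape)
    q = 1.0 + 0.5 * (rows + cols)          # assumed quantization ramp
    return np.round(c / q), q

def inverse(coeffs, q):
    c = idst(coeffs * q, type=2, norm='ortho', axis=0)
    return idct(c, type=2, norm='ortho', axis=1)

img = (np.random.rand(64, 64) * 255).round()
coeffs, q = forward(img)
print("RMSE:", np.sqrt(np.mean((img - inverse(coeffs, q)) ** 2)))
```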

  11. New microangiography system development providing improved small vessel imaging, increased contrast to noise ratios, and multi-view 3D reconstructions.

    Science.gov (United States)

    Kuhls, Andrew T; Patel, Vikas; Ionita, Ciprian; Noël, Peter B; Walczak, Alan M; Rangwala, Hussain S; Hoffmann, Kenneth R; Rudin, Stephen

    2006-01-01

    A new microangiographic system (MA) integrated into a c-arm gantry has been developed allowing precise placement of a MA at the exact same angle as the standard x-ray image intensifier (II) with unchanged source and object position. The MA can also be arbitrarily moved about the object and easily moved into the field of view (FOV) in front of the lower resolution II when higher resolution angiographic sequences are needed. The benefits of this new system are illustrated in a neurovascular study, where a rabbit is injected with contrast media for varying oblique angles. Digital subtraction angiographic (DSA) images were obtained and compared using both the MA and II detectors for the same projection view. Vessels imaged with the MA appear sharper with smaller vessels visualized. Visualization of ~100 μm vessels was possible with the MA whereas not with the II. Further, the MA could better resolve vessel overlap. Contrast to noise ratios (CNR) were calculated for vessels of varying sizes for the MA versus the II and were found to be similar for large vessels, approximately double for medium vessels, and infinitely better for the smallest vessels. In addition, a 3D reconstruction of selected vessel segments was performed, using multiple (three) projections at oblique angles, for each detector. This new MA/II integrated system should lead to improved diagnosis and image guidance of neurovascular interventions by enabling initial guidance with the low resolution large FOV II combined with use of the high resolution MA during critical parts of diagnostic and interventional procedures.

  12. Experiments on terahertz 3D scanning microscopic imaging

    Science.gov (United States)

    Zhou, Yi; Li, Qi

    2016-10-01

    Compared with visible light and infrared, terahertz (THz) radiation can penetrate nonpolar and nonmetallic materials. There are currently many studies on THz coaxial transmission confocal microscopy, but few on THz dual-axis reflective confocal microscopy have been reported. In this paper, we used a dual-axis reflective confocal scanning microscope working at 2.52 THz. In contrast with a THz coaxial transmission confocal microscope, the microscope adopted in this paper attains higher axial resolution at the expense of reduced lateral resolution, giving more satisfactory 3D imaging capability. Objects such as the Chinese characters "Zhong-Hua" written on paper with a pencil and a combined sheet metal sample with three layers were scanned. The experimental results indicate that the system can extract the two Chinese characters "Zhong" and "Hua", as well as the three layers of the combined sheet metal. Owing to its favorable 3D imaging capability, the microscope is expected to find applications in biology, medicine and other fields in the future.

  13. Research of Fast 3D Imaging Based on Multiple Mode

    Science.gov (United States)

    Chen, Shibing; Yan, Huimin; Ni, Xuxiang; Zhang, Xiuda; Wang, Yu

    2016-02-01

    Three-dimensional (3D) imaging has received increasingly extensive attention and is now widely used. Much effort has been put into the study of 3D imaging methods and systems in order to meet the requirements of speed and high accuracy. In this article, we realize a fast, high-quality stereo matching algorithm on a field programmable gate array (FPGA) by combining a time-of-flight (TOF) camera with a binocular camera. Images captured by the two cameras share the same spatial resolution, which lets us use the depth maps taken by the TOF camera to compute an initial disparity. With the depth map acting as a constraint on the stereo pairs during matching, the expected disparity of each pixel is limited to a narrow search range. Meanwhile, using concurrent computing on the FPGA (Altera Cyclone IV series), we can configure a multi-core image matching system and thus perform stereo matching on an embedded system. The simulation results demonstrate that this approach speeds up stereo matching, increases matching reliability and stability, realizes embedded computation, and expands the application range.
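
The core idea, restricting each pixel's disparity search to a narrow window around a TOF-derived prior, can be illustrated with a deliberately simple CPU sketch; this is not the paper's FPGA implementation, and the window and search sizes are assumptions.

```python
# Toy numpy sketch: block-matching stereo with the per-pixel search range
# restricted to a small window around a TOF-derived disparity prior.
import numpy as np

def constrained_block_match(left, right, prior_disp, window=5, search=3):
    """left, right: grayscale images; prior_disp: initial disparity from TOF.
    Returns a refined disparity map searched within +/- `search` of the prior."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
            d0 = int(round(prior_disp[y, x]))
            best_d, best_cost = d0, np.inf
            for d in range(d0 - search, d0 + search + 1):
                xr = x - d
                if xr - half < 0 or xr + half >= w:
                    continue
                cand = right[y - half:y + half + 1, xr - half:xr + half + 1]
                cost = np.sum(np.abs(ref - cand.astype(float)))   # SAD cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```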

  14. View Synthesis for Advanced 3D Video Systems

    Directory of Open Access Journals (Sweden)

    2009-02-01

    Full Text Available Interest in 3D video applications and systems is growing rapidly and the technology is maturing. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and motion parallax impression. The tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but this requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions, which are treated separately and fused to the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was generically implemented, that is, does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, high-quality intermediate views for an existing 9-view auto-stereoscopic display as well as other stereo- and multiscopic displays are presented, which prove the suitability of our approach for advanced 3DV systems.

  15. View Synthesis for Advanced 3D Video Systems

    Directory of Open Access Journals (Sweden)

    Müller Karsten

    2008-01-01

    Full Text Available Interest in 3D video applications and systems is growing rapidly and the technology is maturing. It is expected that multiview autostereoscopic displays will play an important role in home user environments, since they support multiuser 3D sensation and motion parallax impression. The tremendous data rate cannot be handled efficiently by representation and coding formats such as MVC or MPEG-C Part 3. Multiview video plus depth (MVD) is a new format that efficiently supports such advanced 3DV systems, but this requires high-quality intermediate view synthesis. For this, a new approach is presented that separates unreliable image regions along depth discontinuities from reliable image regions, which are treated separately and fused to the final interpolated view. In contrast to previous layered approaches, our algorithm uses two boundary layers and one reliable layer, performs image-based 3D warping only, and was generically implemented, that is, does not necessarily rely on 3D graphics support. Furthermore, different hole-filling and filtering methods are added to provide high-quality intermediate views. As a result, high-quality intermediate views for an existing 9-view auto-stereoscopic display as well as other stereo- and multiscopic displays are presented, which prove the suitability of our approach for advanced 3DV systems.

  16. Real-time 3D display system based on computer-generated integral imaging technique using enhanced ISPP for hexagonal lens array.

    Science.gov (United States)

    Kim, Do-Hyeong; Erdenebat, Munkh-Uchral; Kwon, Ki-Chul; Jeong, Ji-Seong; Lee, Jae-Won; Kim, Kyung-Ah; Kim, Nam; Yoo, Kwan-Hee

    2013-12-01

    This paper proposes an Open Computing Language (OpenCL) parallel processing method to generate the elemental image arrays (EIAs) for a hexagonal lens array from a three-dimensional (3D) object such as volume data. A hexagonal lens array has a higher fill factor than a rectangular lens array; however, each pixel of an elemental image must be assigned to a single hexagonal lens, so generating the entire EIA requires very large computations. The proposed method reduces the processing time of EIA generation for a given hexagonal lens array. The proposed image space parallel processing (ISPP) method enhances the processing speed enough to drive a real-time interactive integral-imaging 3D display with a hexagonal lens array. In our experiment, we generated the EIAs for a hexagonal lens array in real time and obtained good processing times for large volume data and multiple lens array configurations.

  17. 3D Images of Materials Structures Processing and Analysis

    CERN Document Server

    Ohser, Joachim

    2009-01-01

    Taking and analyzing images of materials' microstructures is essential for quality control and for the choice and design of all kinds of products. Today, the standard method is still to analyze 2D microscopy images. However, insight into the 3D geometry of a material's microstructure and measurement of its characteristics are increasingly prerequisites for choosing and designing advanced materials according to desired product properties. This first book on the processing and analysis of 3D images of materials structures describes how to develop and apply efficient and versatile tools for geometric analysis

  18. Visualization and Analysis of 3D Microscopic Images

    Science.gov (United States)

    Long, Fuhui; Zhou, Jianlong; Peng, Hanchuan

    2012-01-01

    In a wide range of biological studies, it is highly desirable to visualize and analyze three-dimensional (3D) microscopic images. In this primer, we first introduce several major methods for visualizing typical 3D images and related multi-scale, multi-time-point, multi-color data sets. Then, we discuss three key categories of image analysis tasks, namely segmentation, registration, and annotation. We demonstrate how to pipeline these visualization and analysis modules using examples of profiling the single-cell gene-expression of C. elegans and constructing a map of stereotyped neurite tracts in a fruit fly brain. PMID:22719236

  19. 3D Image Reconstruction: Determination of Pattern Orientation

    Energy Technology Data Exchange (ETDEWEB)

    Blankenbecler, Richard

    2003-03-13

    The problem of determining the Euler angles of a randomly oriented 3-D object from its 2-D Fraunhofer diffraction patterns is discussed. This problem arises in the reconstruction of a positive semi-definite 3-D object using oversampling techniques. In such a problem, the data consist of a measured set of magnitudes from 2-D tomographic images of the object at several unknown orientations. After the orientation angles are determined, the object itself can then be reconstructed by a variety of methods using oversampling, the magnitude data from the 2-D images, physical constraints on the image, and then iteration to determine the phases.

  20. A Texture Analysis of 3D Radar Images

    NARCIS (Netherlands)

    Deiana, D.; Yarovoy, A.

    2009-01-01

    In this paper a texture feature coding method to be applied to high-resolution 3D radar images in order to improve target detection is developed. An automatic method for image segmentation based on texture features is proposed. The method has been able to automatically detect weak targets which fail

  1. Surface Explorations: 3D Moving Images as Cartographies of Time.

    NARCIS (Netherlands)

    Verhoeff, N.

    2016-01-01

    Moving images of travel and exploration have a long history. In this essay I will examine how the trope of navigation in 3D moving images can work towards an intimate and haptic encounter with other times and other places – elsewhen and elsewhere. The particular navigational construction of space in

  2. 3D imaging and wavefront sensing with a plenoptic objective

    Science.gov (United States)

    Rodríguez-Ramos, J. M.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Montilla, I.; Trujillo-Sevilla, J.; Femenía, B.; Puga, M.; López, M.; Fernández-Valdivia, J. J.; Rosa, F.; Dominguez-Conde, C.; Sanluis, J. C.; Rodríguez-Ramos, L. F.

    2011-06-01

    Plenoptic cameras have been developed over recent years as a passive method for 3D scanning. Several super-resolution algorithms have been proposed to compensate for the loss of resolution associated with light-field acquisition through a microlens array, and a number of multiview stereo algorithms have been applied to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we present our own implementations related to the aforementioned aspects, as well as two new developments: a portable plenoptic objective that turns any conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images because of the refractive index changes associated with turbulence; these changes require high-speed processing, which justifies the use of GPUs and FPGAs. Sodium artificial laser guide stars (Na-LGS, at 90 km altitude) must be used to obtain the reference wavefront phase and the optical transfer function of the system, but they are affected by defocus because of their finite distance to the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new contribution relating the wave optics and computer vision fields, as many authors claim.

  3. 360 degree realistic 3D image display and image processing from real objects

    Science.gov (United States)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-12-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density, directionally continuous 3D motion images can be displayed easily with only one spatial light modulator. Using the holographic screen as the beam deflector, a full 360-degree horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS camera based image acquisition platform was built to feed the display engine; it can capture full 360-degree continuous images of the sample at the center. Customized image processing techniques such as scaling, rotation and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of our system.

  4. 360 degree realistic 3D image display and image processing from real objects

    Science.gov (United States)

    Luo, Xin; Chen, Yue; Huang, Yong; Tan, Xiaodi; Horimai, Hideyoshi

    2016-09-01

    A 360-degree realistic 3D image display system based on a direct light scanning method, the so-called Holo-Table, is introduced in this paper. High-density, directionally continuous 3D motion images can be displayed easily with only one spatial light modulator. Using the holographic screen as the beam deflector, a full 360-degree horizontal viewing angle was achieved. As an accompanying part of the system, a CMOS camera based image acquisition platform was built to feed the display engine; it can capture full 360-degree continuous images of the sample at the center. Customized image processing techniques such as scaling, rotation and format transformation were also developed and embedded into the system control software platform. Finally, several samples were imaged to demonstrate the capability of our system.

  5. Real-time computer-generated integral imaging and 3D image calibration for augmented reality surgical navigation.

    Science.gov (United States)

    Wang, Junchen; Suenaga, Hideyuki; Liao, Hongen; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro

    2015-03-01

    Autostereoscopic 3D image overlay for augmented reality (AR) based surgical navigation has been studied and reported many times. For the purpose of surgical overlay, the 3D image is expected to have the same geometric shape as the original organ, and can be transformed to a specified location for image overlay. However, how to generate a 3D image with high geometric fidelity and quantitative evaluation of 3D image's geometric accuracy have not been addressed. This paper proposes a graphics processing unit (GPU) based computer-generated integral imaging pipeline for real-time autostereoscopic 3D display, and an automatic closed-loop 3D image calibration paradigm for displaying undistorted 3D images. Based on the proposed methods, a novel AR device for 3D image surgical overlay is presented, which mainly consists of a 3D display, an AR window, a stereo camera for 3D measurement, and a workstation for information processing. The evaluation on the 3D image rendering performance with 2560×1600 elemental image resolution shows the rendering speeds of 50-60 frames per second (fps) for surface models, and 5-8 fps for large medical volumes. The evaluation of the undistorted 3D image after the calibration yields sub-millimeter geometric accuracy. A phantom experiment simulating oral and maxillofacial surgery was also performed to evaluate the proposed AR overlay device in terms of the image registration accuracy, 3D image overlay accuracy, and the visual effects of the overlay. The experimental results show satisfactory image registration and image overlay accuracy, and confirm the system usability.

  6. 2D/3D Image Registration using Regression Learning.

    Science.gov (United States)

    Chou, Chen-Rui; Frederick, Brandon; Mageras, Gig; Chang, Sha; Pizer, Stephen

    2013-09-01

    In computer vision and image analysis, image registration between 2D projections and a 3D image that achieves high accuracy and near real-time computation is challenging. In this paper, we propose a novel method that can rapidly detect an object's 3D rigid motion or deformation from a 2D projection image or a small set thereof. The method is called CLARET (Correction via Limited-Angle Residues in External Beam Therapy) and consists of two stages: registration preceded by shape space and regression learning. In the registration stage, linear operators are used to iteratively estimate the motion/deformation parameters based on the current intensity residue between the target projection(s) and the digitally reconstructed radiograph(s) (DRRs) of the estimated 3D image. The method determines the linear operators via a two-step learning process. First, it builds a low-order parametric model of the image region's motion/deformation shape space from its prior 3D images. Second, using learning-time samples produced from the 3D images, it formulates the relationships between the model parameters and the co-varying 2D projection intensity residues by multi-scale linear regressions. The calculated multi-scale regression matrices yield the coarse-to-fine linear operators used in estimating the model parameters from the 2D projection intensity residues in the registration. The method's application to Image-guided Radiation Therapy (IGRT) requires only a few seconds and yields good results in localizing a tumor under rigid motion in the head and neck and under respiratory deformation in the lung, using one treatment-time imaging 2D projection or a small set thereof.
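
The registration-by-regression idea can be illustrated with a toy sketch: learn a linear operator that maps projection intensity residues to motion parameters from synthetic training samples, then apply it iteratively. The image model, sample counts and single-scale (rather than multi-scale) regression below are all simplifying assumptions, not the CLARET implementation.

```python
# Toy sketch of regression-learned registration: a linear map from intensity
# residues to (assumed) motion parameters, applied iteratively at test time.
import numpy as np

rng = np.random.default_rng(0)

def render(params):
    """Stand-in renderer for a DRR: a Gaussian blob whose position and width
    depend on three hypothetical motion parameters."""
    tx, ty, s = params
    x = np.linspace(-1, 1, 64)
    xx, yy = np.meshgrid(x, x)
    return np.exp(-((xx - tx) ** 2 + (yy - ty) ** 2) / (0.1 + 0.05 * s))

# learning stage: sample parameters, record residues against the reference
I0 = render(np.zeros(3)).ravel()
P = rng.uniform(-0.2, 0.2, size=(200, 3))
R = np.stack([render(p).ravel() - I0 for p in P])
A, *_ = np.linalg.lstsq(R, P, rcond=None)     # residue -> parameters operator

# registration stage: iterate the linear update on an unseen target
p_true = np.array([0.12, -0.08, 0.05])
target = render(p_true).ravel()
p_est = np.zeros(3)
for _ in range(10):
    p_est = p_est + (target - render(p_est).ravel()) @ A
print("true:", p_true, "estimated:", np.round(p_est, 3))
```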

  7. High-Performance 3D Image Processing Architectures for Image-Guided Interventions

    Science.gov (United States)

    2008-01-01

    (No abstract is available for this record; the indexed text consists of citation fragments referring to related publications by O. Dandekar, C. Castro-Pareja, R. Shekhar and colleagues on FPGA-based real-time 3D image registration and on 3D image processors with median and convolution filters for real-time, image-guided interventions.)

  8. Medical image segmentation using 3D MRI data

    Science.gov (United States)

    Voronin, V.; Marchuk, V.; Semenishchev, E.; Cen, Yigang; Agaian, S.

    2017-05-01

    Precise segmentation of three-dimensional (3D) magnetic resonance imaging (MRI) images can be a very useful computer-aided diagnosis (CAD) tool in clinical routines. Accurate automatic extraction of a 3D component from MRI images is a challenging segmentation problem due to the small size of the objects of interest (e.g., blood vessels, bones) in each 2D MRA slice and the complex surrounding anatomical structures. Our objective is to develop a specific segmentation scheme for accurately extracting parts of bones from MRI images. In this paper, we use a segmentation algorithm based on a modified active contour method to extract the bone parts from MRI data sets. The proposed method demonstrates good accuracy in comparison with existing segmentation approaches on real MRI data.
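
As an illustration of the active-contour family the abstract refers to, the sketch below segments a single slice with scikit-image's morphological Chan-Vese implementation; this is a generic substitute, not the authors' modified active contour method, and the initialisation and iteration count are guesses.

```python
# Generic active-contour segmentation of one 2D slice (morphological Chan-Vese),
# used here as a stand-in for the modified method described in the abstract.
import numpy as np
from skimage.segmentation import morphological_chan_vese, checkerboard_level_set

def segment_slice(slice_2d, iterations=100):
    image = (slice_2d - slice_2d.min()) / (slice_2d.max() - slice_2d.min() + 1e-12)
    init = checkerboard_level_set(image.shape, 6)   # coarse initial level set
    return morphological_chan_vese(image, iterations, init_level_set=init, smoothing=3)

mask = segment_slice(np.random.rand(128, 128))      # placeholder for an MRI slice
print("foreground pixels in slice:", int(mask.sum()))
```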

  9. Imaging and 3D morphological analysis of collagen fibrils.

    Science.gov (United States)

    Altendorf, H; Decencière, E; Jeulin, D; De sa Peixoto, P; Deniset-Besseau, A; Angelini, E; Mosser, G; Schanne-Klein, M-C

    2012-08-01

    The recent booming of multiphoton imaging of collagen fibrils by means of second harmonic generation microscopy generates the need for the development and automation of quantitative methods for image analysis. Standard approaches sequentially analyse two-dimensional (2D) slices to gain knowledge on the spatial arrangement and dimension of the fibrils, whereas the reconstructed three-dimensional (3D) image yields better information about these characteristics. In this work, a 3D analysis method is proposed for second harmonic generation images of collagen fibrils, based on a recently developed 3D fibre quantification method. This analysis uses operators from mathematical morphology. The fibril structure is scanned with a directional distance transform. Inertia moments of the directional distances yield the main fibre orientation, corresponding to the main inertia axis. The collaboration of directional distances and fibre orientation delivers a geometrical estimate of the fibre radius. The results include local maps as well as global distribution of orientation and radius of the fibrils over the 3D image. They also bring a segmentation of the image into foreground and background, as well as a classification of the foreground pixels into the preferred orientations. This accurate determination of the spatial arrangement of the fibrils within a 3D data set will be most relevant in biomedical applications. It brings the possibility to monitor remodelling of collagen tissues upon a variety of injuries and to guide tissues engineering because biomimetic 3D organizations and density are requested for better integration of implants. © 2012 The Authors Journal of Microscopy © 2012 Royal Microscopical Society.

  10. Tipping solutions: emerging 3D nano-fabrication/ -imaging technologies

    OpenAIRE

    Seniutinas Gediminas; Balčytis Armandas; Reklaitis Ignas; Chen Feng; Davis Jeffrey; David Christian; Juodkazis Saulius

    2017-01-01

    The evolution of optical microscopy from an imaging technique into a tool for materials modification and fabrication is now being repeated with other characterization techniques, including scanning electron microscopy (SEM), focused ion beam (FIB) milling/imaging, and atomic force microscopy (AFM). Fabrication and in situ imaging of materials undergoing a three-dimensional (3D) nano-structuring within a 1−100 nm resolution window is required for future manufacturing of devices. This level of ...

  11. Military efforts in nanosensors, 3D printing, and imaging detection

    Science.gov (United States)

    Edwards, Eugene; Booth, Janice C.; Roberts, J. Keith; Brantley, Christina L.; Crutcher, Sihon H.; Whitley, Michael; Kranz, Michael; Seif, Mohamed; Ruffin, Paul

    2017-04-01

    A team of researchers and support organizations, affiliated with the Army Aviation and Missile Research, Development, and Engineering Center (AMRDEC), has initiated multidiscipline efforts to develop nano-based structures and components for advanced weaponry, aviation, and autonomous air/ground systems applications. The main objective of this research is to exploit unique phenomena for the development of novel technology to enhance warfighter capabilities and produce precision weaponry. The key technology areas that the authors are exploring include nano-based sensors, analysis of 3D printing constituents, and nano-based components for imaging detection. By integrating nano-based devices, structures, and materials into weaponry, the Army can revolutionize existing (and future) weaponry systems by significantly reducing the size, weight, and cost. The major research thrust areas include the development of carbon nanotube sensors to detect rocket motor off-gassing; the application of current methodologies to assess materials used for 3D printing; and the assessment of components to improve imaging seekers. The status of current activities, associated with these key areas and their implementation into AMRDEC's research, is outlined in this paper. Section #2 outlines output data, graphs, and overall evaluations of carbon nanotube sensors placed on a 16 element chip and exposed to various environmental conditions. Section #3 summarizes the experimental results of testing various materials and resulting components that are supplementary to additive manufacturing/fused deposition modeling (FDM). Section #4 recapitulates a preliminary assessment of the optical and electromechanical components of seekers in an effort to propose components and materials that can work more effectively.

  12. Needle placement for piriformis injection using 3-D imaging.

    Science.gov (United States)

    Clendenen, Steven R; Candler, Shawn A; Osborne, Michael D; Palmer, Scott C; Duench, Stephanie; Glynn, Laura; Ghazi, Salim M

    2013-01-01

    Piriformis syndrome is a pain syndrome originating in the buttock and is attributed to 6% - 8% of patients referred for the treatment of back and leg pain. The treatment of piriformis syndrome using fluoroscopy, computed tomography (CT), electromyography (EMG), and ultrasound (US) has become standard practice, and has evolved to include fluoroscopy and EMG with CT guidance. We present a case study of 5 successful piriformis injections using 3-D computer-assisted electromagnetic needle tracking coupled with ultrasound. A 6-degree-of-freedom electromagnetic position tracker was attached to the ultrasound probe, allowing the system to detect the position and orientation of the probe in the magnetic field. The tracked ultrasound probe was used to find the posterior superior iliac spine, and 3 points were then captured to register the ultrasound image with the CT or magnetic resonance image scan. After the registration was obtained, the navigation system visualized the tracked needle relative to the CT scan in real time using 2 orthogonal multi-planar reconstructions centered at the tracked needle tip. By comparison, a recent study revealed that fluoroscopically guided injections had 30% accuracy, whereas ultrasound-guided injections tripled that accuracy. This novel technique exhibited an accurate needle guidance injection precision of 98% while advancing to the piriformis muscle and avoiding the sciatic nerve. The mean (± SD) procedure time was 19.08 (± 4.9) minutes. The technique allows for electromagnetic instrument tip tracking with real-time 3-D guidance to the selected target. As with any new technique, a learning curve is expected; however, this technique could offer an alternative that minimizes radiation exposure.

  13. High Frame Rate Synthetic Aperture 3D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Villagómez Hoyos, Carlos Armando; Holbek, Simon; Stuart, Matthias Bo

    2016-01-01

    3-D blood flow quantification with high spatial and temporal resolution would strongly benefit clinical research on cardiovascular pathologies. Ultrasonic velocity techniques are known for their ability to measure blood flow with high precision at high spatial and temporal resolution. However, current volumetric ultrasonic flow methods are limited to one velocity component or restricted to a reduced field of view (FOV), e.g. fixed imaging planes, in exchange for higher temporal resolutions. To solve these problems, a previously proposed accurate 2-D high frame rate vector flow imaging (VFI) technique is extended to estimate the 3-D velocity components inside a volume at high temporal resolutions.

  14. Terahertz Quantum Cascade Laser Based 3D Imaging Project

    Data.gov (United States)

    National Aeronautics and Space Administration — LongWave Photonics proposes a terahertz quantum-cascade laser based swept-source optical coherence tomography (THz SS-OCT) system for single-sided, 3D,...

  15. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    Directory of Open Access Journals (Sweden)

    M. Rafiei

    2013-09-01

    Full Text Available Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close range photogrammetry is widely used in many fields such as structural measurement, topographic surveying, and architectural and archaeological surveying. Non-contact photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both the 3D geometry (structure) and the camera pose (motion); it is commonly known as structure from motion (SfM). In this research, a step-by-step approach to generating the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points are detected in each pair of views; here an efficient SIFT method is used for image matching across large baselines. The camera motion and the 3D positions of the matched feature points are then retrieved up to a projective transformation (projective reconstruction). Without additional information on the camera or the scene, parallel lines are not preserved as parallel. Since the results of the SfM computation are much more useful if a metric reconstruction is obtained, multi-view Euclidean reconstruction is applied and discussed. To refine and obtain precise 3D points, the more general and useful approach of bundle adjustment is used. Finally, two real cases have been reconstructed (an excavation and a tower).
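
A compact two-view version of the pipeline summarised above (SIFT matching, relative pose from the essential matrix, triangulation) can be sketched with OpenCV as follows; the intrinsic matrix and image file names are placeholders, and the projective-to-metric upgrade and bundle adjustment stages are omitted.

```python
# Two-view SfM sketch with OpenCV: SIFT matches -> essential matrix ->
# relative pose -> triangulated sparse point cloud. K and file names are assumed.
import cv2
import numpy as np

K = np.array([[1500.0, 0, 960], [0, 1500.0, 540], [0, 0, 1]])  # assumed intrinsics

img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder images
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# ratio-test matching
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# relative pose from the essential matrix (metric up to a global scale)
E, _ = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

# triangulate the correspondences into a sparse 3D point cloud
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
points_3d = (X[:3] / X[3]).T
print(points_3d.shape[0], "sparse points reconstructed")
```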

  16. PHOTOGRAMMETRIC 3D BUILDING RECONSTRUCTION FROM THERMAL IMAGES

    Directory of Open Access Journals (Sweden)

    E. Maset

    2017-08-01

    Full Text Available This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that a commercial Computer Vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about position and attitude of the images nor camera calibration parameters. Moreover, we propose a procedure based on the Iterative Closest Point (ICP) algorithm to create a model that combines high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.

  17. Photogrammetric 3d Building Reconstruction from Thermal Images

    Science.gov (United States)

    Maset, E.; Fusiello, A.; Crosilla, F.; Toldo, R.; Zorzetto, D.

    2017-08-01

    This paper addresses the problem of 3D building reconstruction from thermal infrared (TIR) images. We show that a commercial Computer Vision software can be used to automatically orient sequences of TIR images taken from an Unmanned Aerial Vehicle (UAV) and to generate 3D point clouds, without requiring any GNSS/INS data about position and attitude of the images nor camera calibration parameters. Moreover, we propose a procedure based on Iterative Closest Point (ICP) algorithm to create a model that combines high resolution and geometric accuracy of RGB images with the thermal information deriving from TIR images. The process can be carried out entirely by the aforesaid software in a simple and efficient way.
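
The ICP-based fusion step described above can be illustrated with a minimal point-to-point ICP in NumPy/SciPy; a real pipeline (and the commercial software used in the paper) would add correspondence rejection, scaling and a robust initial alignment, all of which are omitted here.

```python
# Minimal point-to-point ICP sketch: nearest-neighbour correspondences plus a
# Kabsch (SVD) rigid alignment, iterated a fixed number of times.
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=30):
    """Rigidly aligns source (N,3) to target (M,3); returns R, t, aligned points."""
    src = source.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)              # nearest-neighbour correspondences
        matched = target[idx]
        cs, cm = src.mean(axis=0), matched.mean(axis=0)
        H = (src - cs).T @ (matched - cm)     # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T                    # optimal rotation (Kabsch)
        t = cm - R @ cs
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, src
```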

  18. Design of 3D integrated circuits and systems

    CERN Document Server

    Sharma, Rohit

    2014-01-01

    Three-dimensional (3D) integration of microsystems and subsystems has become essential to the future of semiconductor technology development. 3D integration requires a greater understanding of several interconnected systems stacked over each other. While this vertical growth profoundly increases the system functionality, it also exponentially increases the design complexity. Design of 3D Integrated Circuits and Systems tackles all aspects of 3D integration, including 3D circuit and system design, new processes and simulation techniques, alternative communication schemes for 3D circuits and sys

  19. CASTLE3D - A Computer Aided System for Labelling Archaeological Excavations in 3D

    Science.gov (United States)

    Houshiar, H.; Borrmann, D.; Elseberg, J.; Nüchter, A.; Näth, F.; Winkler, S.

    2015-08-01

    Documentation of archaeological excavation sites with conventional methods and tools such as hand drawings, measuring tape and archaeological notes is time consuming. This process is prone to human errors and the quality of the documentation depends on the qualification of the archaeologist on site. Use of modern technology and methods in 3D surveying and 3D robotics facilitate and improve this process. Computer-aided systems and databases improve the documentation quality and increase the speed of data acquisition. 3D laser scanning is the state of the art in modelling archaeological excavation sites, historical sites and even entire cities or landscapes. Modern laser scanners are capable of data acquisition of up to 1 million points per second. This provides a very detailed 3D point cloud of the environment. 3D point clouds and 3D models of an excavation site provide a better representation of the environment for the archaeologist and for documentation. The point cloud can be used both for further studies on the excavation and for the presentation of results. This paper introduces a Computer aided system for labelling archaeological excavations in 3D (CASTLE3D). Consisting of a set of tools for recording and georeferencing the 3D data from an excavation site, CASTLE3D is a novel documentation approach in industrial archaeology. It provides a 2D and 3D visualisation of the data and an easy-to-use interface that enables the archaeologist to select regions of interest and to interact with the data in both representations. The 2D visualisation and a 3D orthogonal view of the data provide cuts of the environment that resemble the traditional hand drawings. The 3D perspective view gives a realistic view of the environment. CASTLE3D is designed as an easy-to-use on-site semantic mapping tool for archaeologists. Each project contains a predefined set of semantic information that can be used to label findings in the data. Multiple regions of interest can be joined under

  20. 3D-VISUALIZATION BY RAYTRACING IMAGE SYNTHESIS ON GPU

    Directory of Open Access Journals (Sweden)

    Al-Oraiqat Anas M.

    2016-06-01

    Full Text Available This paper presents a realization of an approach to spatial 3D stereo visualization of 3D images using a parallel graphics processing unit (GPU). Experiments on synthesizing images of a 3D scene by ray tracing on a GPU with the Compute Unified Device Architecture (CUDA) showed that approximately 60% of the time is spent on solving the computational problem itself, while the remaining share (40%) is spent on transferring data between the central processing unit and the GPU and on organizing the visualization process. A study of how increasing the size of the GPU grid affects computation speed showed the importance of correctly structuring the parallel compute grid and the overall parallelization mechanism.
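
For readers unfamiliar with the workload being parallelised, the sketch below performs the same kind of per-pixel ray casting on the CPU with NumPy (one primary ray per pixel intersected with a single sphere and Lambert-shaded); mapping the per-pixel computation to CUDA threads is precisely the part the paper moves to the GPU. The scene and camera parameters are arbitrary assumptions.

```python
# CPU sketch of per-pixel ray casting: one primary ray per pixel, ray-sphere
# intersection, Lambertian shading. Scene parameters are arbitrary.
import numpy as np

W, H = 320, 240
x = np.linspace(-1.0, 1.0, W)
y = np.linspace(-0.75, 0.75, H)
xx, yy = np.meshgrid(x, y)
dirs = np.stack([xx, yy, np.ones_like(xx)], axis=-1)      # rays through z = 1 plane
dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
origin = np.zeros(3)

center, radius = np.array([0.0, 0.0, 3.0]), 1.0
light = np.array([2.0, 2.0, 0.0])

# ray-sphere intersection: |origin + t*dir - center|^2 = radius^2
oc = origin - center
b = 2.0 * dirs @ oc
c = oc @ oc - radius ** 2
disc = b ** 2 - 4.0 * c
hit = disc > 0
t = np.where(hit, (-b - np.sqrt(np.maximum(disc, 0.0))) / 2.0, 1.0)

points = origin + t[..., None] * dirs
normals = (points - center) / radius
to_light = light - points
to_light /= np.linalg.norm(to_light, axis=-1, keepdims=True)
shade = np.clip(np.sum(normals * to_light, axis=-1), 0.0, 1.0)
image = np.where(hit, shade, 0.0)                         # black background
print("rendered", image.shape, "image; max intensity", round(float(image.max()), 3))
```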

  1. Structured Light-Based 3D Reconstruction System for Plants.

    Science.gov (United States)

    Nguyen, Thuy Tuong; Slaughter, David C; Max, Nelson; Maloof, Julin N; Sinha, Neelima

    2015-07-29

    Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.

  2. Structured Light-Based 3D Reconstruction System for Plants

    Directory of Open Access Journals (Sweden)

    Thuy Tuong Nguyen

    2015-07-01

    Full Text Available Camera-based 3D reconstruction of physical objects is one of the most popular computer vision trends in recent years. Many systems have been built to model different real-world subjects, but there is lack of a completely robust system for plants. This paper presents a full 3D reconstruction system that incorporates both hardware structures (including the proposed structured light system to enhance textures on object surfaces) and software algorithms (including the proposed 3D point cloud registration and plant feature measurement). This paper demonstrates the ability to produce 3D models of whole plants created from multiple pairs of stereo images taken at different viewing angles, without the need to destructively cut away any parts of a plant. The ability to accurately predict phenotyping features, such as the number of leaves, plant height, leaf size and internode distances, is also demonstrated. Experimental results show that, for plants having a range of leaf sizes and a distance between leaves appropriate for the hardware design, the algorithms successfully predict phenotyping features in the target crops, with a recall of 0.97 and a precision of 0.89 for leaf detection and less than a 13-mm error for plant size, leaf size and internode distance.

  3. Reducing the influence of direct reflection on return signal detection in a 3D imaging lidar system by rotating the polarizing beam splitter.

    Science.gov (United States)

    Wang, Chunhui; Lee, Xiaobao; Cui, Tianxiang; Qu, Yang; Li, Yunxi; Li, Hailong; Wang, Qi

    2016-03-01

    The direction rule of the laser beam traveling through a deflected polarizing beam splitter (PBS) cube is derived. It reveals that, due to the influence of end-face reflection of the PBS at the detector side, the emergent beam coming from the incident beam parallels the direction of the original case without rotation, with only a very small translation interval between them. The formula for the translation interval is also given. Meanwhile, the emergent beam from the return signal at the detector side deflects at an angle twice that of the PBS rotation angle. The correctness has been verified by an experiment. The intensity transmittance of the emergent beam when propagating in the PBS changes very little if the rotation angle is less than 35 deg. In a 3D imaging lidar system, rotating the PBS cube by an angle separates the direction of the return signal optical axis from the original one, which can decrease or eliminate the influence of direct reflection caused by the prism end face on target return signal detection. This has been checked by experiment.

  4. The Design and Implementation of a 3D Medical Image Reconstruction System Based on VTK and ITK

    Institute of Scientific and Technical Information of China (English)

    刘鹰; 韩利凯

    2011-01-01

    3D image reconstruction is currently an attractive topic in digital image processing, especially in its application to medical imaging. The design and implementation of VascuView3D, a 3D medical image reconstruction system based on VTK and ITK that builds 3D images from 2D slice files produced by CT and MRI devices, is introduced. The system implements volume rendering (VR), surface rendering (SR) and multi-planar rendering (MPR) 3D views, and supports 3D operations such as CLUT-based coloring of 3D grayscale images.
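
A generic VTK (Python bindings) sketch of the kind of volume-rendering view such a system provides is shown below; it is not the VascuView3D code, and the DICOM directory path, transfer-function points and scalar range are placeholder assumptions.

```python
# Generic VTK volume-rendering sketch: read a DICOM series, assign a
# CLUT-style colour and opacity transfer function, and render the volume.
import vtk

reader = vtk.vtkDICOMImageReader()
reader.SetDirectoryName("path/to/dicom_series")   # placeholder path

mapper = vtk.vtkSmartVolumeMapper()
mapper.SetInputConnection(reader.GetOutputPort())

color = vtk.vtkColorTransferFunction()            # CLUT-style grey-to-colour map
color.AddRGBPoint(0, 0.0, 0.0, 0.0)
color.AddRGBPoint(500, 0.9, 0.6, 0.3)
color.AddRGBPoint(1500, 1.0, 1.0, 1.0)

opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(0, 0.0)
opacity.AddPoint(500, 0.2)
opacity.AddPoint(1500, 0.9)

prop = vtk.vtkVolumeProperty()
prop.SetColor(color)
prop.SetScalarOpacity(opacity)
prop.ShadeOn()

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)
window.Render()
interactor.Start()
```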

  5. Robust 3D reconstruction system for human jaw modeling

    Science.gov (United States)

    Yamany, Sameh M.; Farag, Aly A.; Tazman, David; Farman, Allan G.

    1999-03-01

    This paper presents a model-based vision system for dentistry that will replace traditional approaches used in diagnosis, treatment planning and surgical simulation. Dentistry requires accurate 3D representation of the teeth and jaws for many diagnostic and treatment purposes. For example, orthodontic treatment involves the application of force systems to teeth over time to correct malocclusion. In order to evaluate tooth movement progress, the orthodontist monitors this movement by means of visual inspection, intraoral measurements, fabrication of plastic models, photographs and radiographs, a process which is both costly and time consuming. In this paper, an integrated system has been developed to record the patient's occlusion using computer vision. Data are acquired with an intraoral video camera. A modified shape from shading (SFS) technique, using perspective projection and camera calibration, is used to extract accurate 3D information from a sequence of 2D images of the jaw. A new technique for 3D data registration, using a Grid Closest Point transform and genetic algorithms, is used to register the SFS output. Triangulation is then performed, and a solid 3D model is obtained via a rapid prototyping machine.

  6. A systematic review of image segmentation methodology, used in the additive manufacture of patient-specific 3D printed models of the cardiovascular system

    Directory of Open Access Journals (Sweden)

    N Byrne

    2016-04-01

    Full Text Available Background Shortcomings in existing methods of image segmentation preclude the widespread adoption of patient-specific 3D printing as a routine decision-making tool in the care of those with congenital heart disease. We sought to determine the range of cardiovascular segmentation methods and how long each of these methods takes. Methods A systematic review of the literature was undertaken. Medical imaging modality, segmentation methods, segmentation time, segmentation descriptive quality (SDQ) and segmentation software were recorded. Results In total, 136 studies met the inclusion criteria (1 clinical trial; 80 journal articles; 55 conference, technical and case reports). The most frequently used image segmentation methods were brightness thresholding, region growing and manual editing, as supported by the most popular piece of proprietary software: Mimics (Materialise NV, Leuven, Belgium, 1992–2015). The use of bespoke software developed by individual authors was not uncommon. SDQ indicated that reporting of image segmentation methods was generally poor, with only one in three accounts providing sufficient detail for the procedure to be reproduced. Conclusions and implication of key findings Predominantly anecdotal and case reporting precluded rigorous assessment of risk of bias and strength of evidence. This review finds a reliance on manual and semi-automated segmentation methods which demand a high level of expertise and a significant time commitment on the part of the operator. In light of the findings, we have made recommendations regarding the reporting of 3D printing studies. We anticipate that these findings will encourage the development of advanced image segmentation methods.
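
    A minimal sketch of the brightness-thresholding style of segmentation the review found most common is shown below; it is a generic illustration with an assumed intensity window, not any specific study's workflow.

    # Generic intensity-window segmentation keeping the largest connected component.
    import numpy as np
    from scipy import ndimage

    def threshold_segment(volume, lower=200, upper=3000):
        """Binary segmentation of a CT-like volume by intensity window,
        keeping only the largest connected component."""
        mask = (volume >= lower) & (volume <= upper)
        labels, n = ndimage.label(mask)
        if n == 0:
            return mask
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        largest = 1 + int(np.argmax(sizes))
        return labels == largest

    # Example with a synthetic volume standing in for image data:
    volume = np.random.randint(0, 1000, size=(64, 64, 64))
    segmented = threshold_segment(volume, lower=600)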

  8. High Speed Laser 3D Measurement System

    Institute of Scientific and Technical Information of China (English)

    SONG Yuan-he; FAN Chang-zhou; GUO Ying; LI Hong-wei; ZHAO Hong

    2003-01-01

    Three-dimensional profile measurement using a line of structured light produced by a laser diode is investigated in depth. A dedicated hardware circuit extracts the center position of the light section to increase measurement speed, and a double-CCD compensation technique improves measurement precision. A simple and effective least-squares calibration fits the system structure parameters and establishes the coordinate relationship between objects and the light-section images in the height and axial directions. Scanning the sensor segment by segment and layer by layer greatly expands the measurement range.
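
    The least-squares calibration idea can be illustrated with a small sketch; the quadratic model form and the sample calibration data are assumptions for illustration only, not the calibration actually used in the paper.

    # Fit a simple mapping from the laser-line image row to object height by least squares.
    import numpy as np

    # Image row of the light-section centre for calibration targets of known height (mm).
    pixel_rows = np.array([120.0, 180.0, 240.0, 300.0, 360.0])
    known_heights = np.array([0.0, 5.0, 10.0, 15.0, 20.0])

    # Design matrix for h = a*r^2 + b*r + c, solved in the least-squares sense.
    A = np.column_stack([pixel_rows**2, pixel_rows, np.ones_like(pixel_rows)])
    coeffs, residuals, rank, _ = np.linalg.lstsq(A, known_heights, rcond=None)

    def row_to_height(row):
        a, b, c = coeffs
        return a * row**2 + b * row + c

    print(row_to_height(200.0))  # height estimate for a measured light-section row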

  9. Preliminary comparison of 3D synthetic aperture imaging with Explososcan

    Science.gov (United States)

    Rasmussen, Morten Fischer; Hansen, Jens Munk; Férin, Guillaume; Dufait, Rémi; Jensen, Jørgen Arendt

    2012-03-01

    Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. In this paper, 3D synthetic aperture imaging is compared to Explososcan by simulation of 3D point spread functions. The simulations mimic a 32×32 element prototype transducer, a dense matrix phased array with a pitch of 300 μm made by Vermon. For both imaging techniques, 289 emissions are used to image a volume spanning 60° in both the azimuth and elevation directions and 150 mm in depth, which for both techniques results in a frame rate of 18 Hz. The implemented synthetic aperture technique reduces the number of transmit channels from 1024 to 256 compared to Explososcan. In terms of FWHM, Explososcan and synthetic aperture imaging were found to perform similarly; at 90 mm depth, Explososcan's FWHM is 7% better than that of synthetic aperture. Synthetic aperture improved the cystic resolution, which expresses the ability to detect anechoic cysts in a uniform scattering medium, at all depths except at Explososcan's focus point, and reduced the cyst radius, R20dB, at 90 mm depth by 48%. Synthetic aperture imaging was thus shown to reduce the number of transmit channels by a factor of four and still, in general, improve the imaging quality.
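
    The FWHM figure of merit used in the comparison can be computed from a beamformed point-spread-function profile as in the following sketch; the Gaussian profile here is synthetic and stands in for the simulated PSFs.

    # Full width at half maximum of a 1-D point-spread-function profile.
    import numpy as np

    def fwhm(x, profile):
        """FWHM of a 1-D profile sampled at positions x."""
        half = profile.max() / 2.0
        above = np.where(profile >= half)[0]
        return x[above[-1]] - x[above[0]]

    x = np.linspace(-5, 5, 1001)                 # lateral position in mm
    psf = np.exp(-x**2 / (2 * 0.8**2))           # synthetic Gaussian PSF
    print(f"FWHM = {fwhm(x, psf):.2f} mm")       # ~1.88 mm for sigma = 0.8 mm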

  10. Refraction Correction in 3D Transcranial Ultrasound Imaging

    Science.gov (United States)

    Lindsey, Brooks D.; Smith, Stephen W.

    2014-01-01

    We present the first correction of refraction in three-dimensional (3D) ultrasound imaging using an iterative approach that traces propagation paths through a two-layer planar tissue model, applying Snell’s law in 3D. This approach is applied to real-time 3D transcranial ultrasound imaging by precomputing delays offline for several skull thicknesses, allowing the user to switch between three sets of delays for phased array imaging at the push of a button. Simulations indicate that refraction correction may be expected to increase sensitivity, reduce beam steering errors, and partially restore lost spatial resolution, with the greatest improvements occurring at the largest steering angles. Distorted images of cylindrical lesions were created by imaging through an acrylic plate in a tissue-mimicking phantom. As a result of correcting for refraction, lesions were restored to 93.6% of their original diameter in the lateral direction and 98.1% of their original shape along the long axis of the cylinders. In imaging two healthy volunteers, the mean brightness increased by 8.3% and showed no spatial dependency. PMID:24275538
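
    The kernel of the correction is Snell's law applied in 3D vector form; the sketch below shows the textbook relation, not the authors' full iterative ray tracing through the two-layer skull model, and the example sound speeds are assumed values.

    # Vector form of Snell's law in 3D.
    import numpy as np

    def refract(d, n, n1, n2):
        """Refract unit direction d at a planar interface with unit normal n
        (n points back toward the incident medium); n1, n2 are the effective
        refractive indices. Returns None on total internal reflection."""
        d = d / np.linalg.norm(d)
        n = n / np.linalg.norm(n)
        eta = n1 / n2
        cos_i = -float(np.dot(n, d))
        sin2_t = eta**2 * (1.0 - cos_i**2)
        if sin2_t > 1.0:
            return None                      # total internal reflection
        cos_t = np.sqrt(1.0 - sin2_t)
        return eta * d + (eta * cos_i - cos_t) * n

    # For ultrasound the effective index is proportional to 1/c, so going from
    # soft tissue (assumed c ~ 1540 m/s) into skull bone (assumed c ~ 3000 m/s)
    # corresponds to n1 = 1/1540 and n2 = 1/3000.
    t = refract(np.array([0.0, 0.3, 1.0]), np.array([0.0, 0.0, -1.0]), 1.0 / 1540.0, 1.0 / 3000.0)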

  11. Projective 3D-reconstruction of Uncalibrated Endoscopic Images

    Directory of Open Access Journals (Sweden)

    P. Faltin

    2010-01-01

    Full Text Available The most common medical diagnostic method for urinary bladder cancer is cystoscopy. This inspection of the bladder is performed by a rigid endoscope, which is usually guided close to the bladder wall. This causes a very limited field of view; difficulty of navigation is aggravated by the usage of angled endoscopes. These factors cause difficulties in orientation and visual control. To overcome this problem, the paper presents a method for extracting 3D information from uncalibrated endoscopic image sequences and for reconstructing the scene content. The method uses the SURF-algorithm to extract features from the images and relates the images by advanced matching. To stabilize the matching, the epipolar geometry is extracted for each image pair using a modified RANSAC-algorithm. Afterwards these matched point pairs are used to generate point triplets over three images and to describe the trifocal geometry. The 3D scene points are determined by applying triangulation to the matched image points. Thus, these points are used to generate a projective 3D reconstruction of the scene, and provide the first step for further metric reconstructions.
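
    The epipolar-geometry and triangulation steps outlined above can be sketched with OpenCV as follows; ORB is used instead of SURF (which requires opencv-contrib), and the image files and the second camera matrix are placeholders rather than the authors' data.

    # Feature matching -> robust fundamental matrix -> projective triangulation.
    import cv2
    import numpy as np

    img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)   # placeholder endoscopic frames
    img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(2000)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Robust epipolar geometry: RANSAC rejects mismatched point pairs.
    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    pts1, pts2 = pts1[inliers.ravel() == 1], pts2[inliers.ravel() == 1]

    # Projective triangulation with assumed 3x4 camera matrices P1, P2.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))]).astype(np.float32)
    P2 = np.random.rand(3, 4).astype(np.float32)   # placeholder; a real P2 would come from F or calibration
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    X = (X[:3] / X[3]).T                           # homogeneous -> Euclidean 3D points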

  12. Subjective evaluation of a 3D videoconferencing system

    Science.gov (United States)

    Rizek, Hadi; Brunnström, Kjell; Wang, Kun; Andrén, Börje; Johanson, Mathias

    2014-03-01

    A shortcoming of traditional videoconferencing systems is that they present the user with a flat, two-dimensional image of the remote participants. Recent advances in autostereoscopic display technology now make it possible to develop video conferencing systems supporting true binocular depth perception. In this paper, we present a subjective evaluation of a prototype multiview autostereoscopic video conferencing system and suggest a number of possible improvements based on the results. Whereas methods for subjective evaluation of traditional 2D videoconferencing systems are well established, the introduction of 3D requires an extension of the test procedures to assess the quality of depth perception. For this purpose, two depth-based test tasks have been designed and experiments have been conducted with test subjects comparing the 3D system to a conventional 2D video conferencing system. The outcome of the experiments shows that the perception of depth is significantly improved in the 3D system, but the overall quality of experience is higher in the 2D system.

  13. Extracting 3D Layout From a Single Image Using Global Image Structures

    NARCIS (Netherlands)

    Lou, Z.; Gevers, T.; Hu, N.

    2015-01-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization, image, and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very

  15. 3D Image Fusion to Localise Intercostal Arteries During TEVAR.

    Science.gov (United States)

    Koutouzi, G; Sandström, C; Skoog, P; Roos, H; Falkenberg, M

    2017-01-01

    Preservation of intercostal arteries during thoracic aortic procedures reduces the risk of post-operative paraparesis. The origins of the intercostal arteries are visible on pre-operative computed tomography angiography (CTA), but rarely on intra-operative angiography. The purpose of this report is to suggest an image fusion technique for intra-operative localisation of the intercostal arteries during thoracic endovascular repair (TEVAR). The ostia of the intercostal arteries are identified and manually marked with rings on the pre-operative CTA. The optimal distal landing site in the descending aorta is determined and marked, allowing enough length for an adequate seal and attachment without covering more intercostal arteries than necessary. After 3D/3D fusion of the pre-operative CTA with an intra-operative cone-beam CT (CBCT), the markings are overlaid on the live fluoroscopy screen for guidance. The accuracy of the overlay is confirmed with digital subtraction angiography (DSA) and the overlay is adjusted when needed. Stent graft deployment is guided by the markings. The initial experience of this technique in seven patients is presented. 3D image fusion was feasible in all cases. Follow-up CTA after 1 month revealed that all intercostal arteries planned for preservation, were patent. None of the patients developed signs of spinal cord ischaemia. 3D image fusion can be used to localise the intercostal arteries during TEVAR. This may preserve some intercostal arteries and reduce the risk of post-operative spinal cord ischaemia.

  16. Geodetic imaging of potential seismogenic asperities on the Xianshuihe-Anninghe-Zemuhe fault system, southwest China, with a new 3-D viscoelastic interseismic coupling model

    Science.gov (United States)

    Jiang, Guoyan; Xu, Xiwei; Chen, Guihua; Liu, Yajing; Fukahata, Yukitoshi; Wang, Hua; Yu, Guihua; Tan, Xibin; Xu, Caijun

    2015-03-01

    We use GPS and interferometric synthetic aperture radar (InSAR) measurements to image the spatial variation of interseismic coupling on the Xianshuihe-Anninghe-Zemuhe (XAZ) fault system. A new 3-D viscoelastic interseismic deformation model is developed to infer the rotation and strain rates of blocks, postseismic viscoelastic relaxation, and interseismic slip deficit on the fault surface discretized with triangular dislocation patches. The inversions of synthetic data show that the optimal weight ratio and smoothing factor are both 1. The successive joint inversions of geodetic data with different viscosities reveal six potential fully coupled asperities on the XAZ fault system. Among them, the potential asperity between Shimian and Mianning, which does not exist in the case of 10^19 Pa s, is confirmed by the published microearthquake depth profile. In addition, there is another potential partially coupled asperity between Daofu and Kangding with a length scale up to 140 km. All these asperity sizes are larger than the minimum resolvable wavelength. The minimum and maximum slip deficit rates near the Moxi town are 7.0 and 12.7 mm/yr, respectively. Different viscosities have little influence on the roughness of the slip deficit rate distribution and the fitting residuals, which probably suggests that our observations cannot provide a good constraint on the viscosity of the middle lower crust. The calculation of seismic moment accumulation on each segment indicates that the Songlinkou-Selaha (S4), Shimian-Mianning (S7), and Mianning-Xichang (S8) segments are very close to the rupture of characteristic earthquakes. However, the confidence level is confined by sparse near-fault observations.

  17. Concept of Indoor 3D-Route UAV Scheduling System

    DEFF Research Database (Denmark)

    Khosiawan, Yohanes; Nielsen, Izabela Ewa; Do, Ngoc Ang Dung

    2016-01-01

    The objective of the proposed concept is to develop a methodology to support Unmanned Aerial Vehicles (UAVs) operation with a path planning and scheduling system in 3D environments. The proposed 3D path-planning and scheduling allows the system to schedule UAVs routing to perform tasks in 3D indoor...

  18. Effective 3-D surface modeling for geographic information systems

    Directory of Open Access Journals (Sweden)

    K. Yüksek

    2013-11-01

    Full Text Available In this work, we propose a dynamic, flexible and interactive urban digital terrain platform (DTP) with the spatial data and query processing capabilities of Geographic Information Systems (GIS), multimedia database functionality and a graphical modeling infrastructure. A new data element, called Geo-Node, which stores images, spatial data and 3-D CAD objects, is developed using an efficient data structure. The system effectively handles the transfer of Geo-Nodes between main memory and secondary storage with an optimized Directional Replacement Policy (DRP) based buffer management scheme. Polyhedron structures are used in Digital Surface Modeling (DSM) and the smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes independently of the amount of spatial data and image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g. X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.

  19. Effective 3-D surface modeling for geographic information systems

    Science.gov (United States)

    Yüksek, K.; Alparslan, M.; Mendi, E.

    2016-01-01

    In this work, we propose a dynamic, flexible and interactive urban digital terrain platform with spatial data and query processing capabilities of geographic information systems, multimedia database functionality and graphical modeling infrastructure. A new data element, called Geo-Node, which stores image, spatial data and 3-D CAD objects is developed using an efficient data structure. The system effectively handles data transfer of Geo-Nodes between main memory and secondary storage with an optimized directional replacement policy (DRP) based buffer management scheme. Polyhedron structures are used in digital surface modeling and smoothing process is performed by interpolation. The experimental results show that our framework achieves high performance and works effectively with urban scenes independent from the amount of spatial data and image size. The proposed platform may contribute to the development of various applications such as Web GIS systems based on 3-D graphics standards (e.g., X3-D and VRML) and services which integrate multi-dimensional spatial information and satellite/aerial imagery.

  20. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In the display, manipulation and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through interpolation, and cubic convolution interpolation is widely used due to its good trade-off between computational cost and accuracy. In this paper, we present a unified framework for 3D medical image interpolation based on cubic convolution and formulate in detail six methods with different sharpness control parameters. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with recommendations for 3D medical image interpolation under different situations.
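
    For reference, the parametric cubic convolution kernel (Keys, 1981) underlying these methods can be written down directly; in the sketch below, a is the sharpness control parameter the paper varies, with a = -0.5 as the common default, and the sample slice values are made up.

    # Parametric cubic convolution kernel and a 1-D interpolation example.
    import numpy as np

    def cubic_kernel(s, a=-0.5):
        s = np.abs(s)
        out = np.zeros_like(s, dtype=float)
        near = s < 1
        far = (s >= 1) & (s < 2)
        out[near] = (a + 2) * s[near]**3 - (a + 3) * s[near]**2 + 1
        out[far] = a * s[far]**3 - 5 * a * s[far]**2 + 8 * a * s[far] - 4 * a
        return out

    def interpolate_1d(samples, x, a=-0.5):
        """Interpolate uniformly spaced samples at fractional position x."""
        i = int(np.floor(x))
        idx = np.clip(np.arange(i - 1, i + 3), 0, len(samples) - 1)  # 4-point support
        weights = cubic_kernel(x - np.arange(i - 1, i + 3), a)
        return float(np.dot(samples[idx], weights))

    slice_values = np.array([10.0, 12.0, 20.0, 18.0, 15.0])   # made-up intensities along z
    print(interpolate_1d(slice_values, 2.4))                  # value between the 3rd and 4th samples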

  1. Optical-CT imaging of complex 3D dose distributions

    Science.gov (United States)

    Oldham, Mark; Kim, Leonard; Hugo, Geoffrey

    2005-04-01

    The limitations of conventional dosimeters restrict the comprehensiveness of verification that can be performed for advanced radiation treatments, presenting an immediate and substantial problem for clinics attempting to implement these techniques. In essence, the rapid advances in the technology of radiation delivery have not been paralleled by corresponding advances in the ability to verify these treatments. Optical-CT gel dosimetry is a relatively new technique with the potential to address this imbalance by providing high-resolution 3D dose maps in polymer and radiochromic gel dosimeters. We have constructed a 1st generation optical-CT scanner capable of high-resolution 3D dosimetry and applied it to a number of simple and increasingly complex dose distributions including intensity-modulated radiation therapy (IMRT). Prior to application to IMRT, the robustness of optical-CT gel dosimetry was investigated on geometry and variable attenuation phantoms. Physical techniques and image processing methods were developed to minimize the deleterious effects of refraction, reflection, and scattered laser light. Here we present results of investigations into achieving accurate high-resolution 3D dosimetry with optical-CT, and show clinical examples of 3D IMRT dosimetry verification. In conclusion, optical-CT gel dosimetry can provide high-resolution 3D dose maps that greatly facilitate comprehensive verification of complex 3D radiation treatments. Good agreement was observed at high dose levels (>50%) between planned and measured dose distributions. Some systematic discrepancies were observed, however (rms discrepancy 3% at high dose levels), indicating that further work is required to eliminate confounding factors presently compromising the accuracy of optical-CT 3D gel dosimetry.

  2. 3D imaging from theory to practice: the Mona Lisa story

    Science.gov (United States)

    Blais, Francois; Cournoyer, Luc; Beraldin, J.-Angelo; Picard, Michel

    2008-08-01

    The warped poplar panel and the technique developed by Leonardo to paint the Mona Lisa present a unique research and engineering challenge for the design of a complete optical 3D imaging system. This paper discusses the solution developed to precisely measure in 3D the world's most famous painting despite its highly contrasted paint surface and reflective varnish. The discussion focuses on the opto-mechanical design and the complete portable 3D imaging system used for this unique occasion. The challenges associated with obtaining 3D color images at a resolution of 0.05 mm and a depth precision of 0.01 mm are illustrated by exploring the virtual 3D model of the Mona Lisa.

  3. Omnidirectional vision systems calibration, feature extraction and 3D information

    CERN Document Server

    Puig, Luis

    2013-01-01

    This work focuses on central catadioptric systems, from the early step of calibration to high-level tasks such as 3D information retrieval. The book opens with a thorough introduction to the sphere camera model, along with an analysis of the relation between this model and actual central catadioptric systems. Then, a new approach to calibrate any single-viewpoint catadioptric camera is described.  This is followed by an analysis of existing methods for calibrating central omnivision systems, and a detailed examination of hybrid two-view relations that combine images acquired with uncalibrated

  4. Vhrs Stereo Images for 3d Modelling of Buildings

    Science.gov (United States)

    Bujakiewicz, A.; Holc, M.

    2012-07-01

    The paper presents a project carried out in the Photogrammetric Laboratory of Warsaw University of Technology. The experiment concerns the extraction of 3D vector data for building creation from a 3D photogrammetric model based on Ikonos stereo images. The model was reconstructed with the Summit Evolution photogrammetric workstation combined with the ArcGIS 3D platform. The accuracy of the 3D model was significantly improved by using, for the orientation of the satellite image pair, stereo-measured tie points distributed uniformly over the model area in addition to 5 control points. The RMS errors for the model reconstructed on the basis of the RPC coefficients alone were 16,6 m, 2,7 m and 47,4 m for the X, Y and Z coordinates, respectively. With the addition of 5 control points the RMS errors improved to 0,7 m, 0,7 m and 1,0 m; the best results were achieved when the RMS was estimated from deviations at 17 check points (with 5 control points) and amounted to 0,4 m, 0,5 m and 0,6 m for X, Y and Z, respectively. The extracted 3D vector data for buildings were integrated with 2D data of the ground footprints and afterwards used for 3D modelling of buildings in Google SketchUp software. The final results were compared with reference data obtained from other sources. It was found that the shape of the buildings (in terms of the number of details) was reconstructed at the LoD1 level, while the accuracy of these models corresponded to LoD2.

  6. 3D Reconstruction of Human Motion from Monocular Image Sequences.

    Science.gov (United States)

    Wandt, Bastian; Ackermann, Hanno; Rosenhahn, Bodo

    2016-08-01

    This article tackles the problem of estimating non-rigid human 3D shape and motion from image sequences taken by uncalibrated cameras. Similar to other state-of-the-art solutions, we factorize 2D observations into camera parameters, base poses and mixing coefficients. Existing methods require sufficient camera motion during the sequence to achieve a correct 3D reconstruction. To obtain convincing 3D reconstructions from arbitrary camera motion, our method is based on a priori trained base poses. We show that strong periodic assumptions on the coefficients can be used to define an efficient and accurate algorithm for estimating periodic motion such as walking patterns. For the extension to non-periodic motion we propose a novel regularization term based on temporal bone length constancy. In contrast to other works, the proposed method does not use a predefined skeleton or anthropometric constraints and can handle arbitrary camera motion. We achieve convincing 3D reconstructions, even under the influence of noise and occlusions. Multiple experiments based on a 3D error metric demonstrate the stability of the proposed method. Compared to other state-of-the-art methods, our algorithm shows a significant improvement.

  7. A 2D and 3D electrical impedance tomography imaging using experimental data

    OpenAIRE

    Shulga, Dmitry

    2012-01-01

    In this paper, the model, method and results of 2D and 3D conductivity distribution imaging using experimental data are described. A 16-electrode prototype tomography system and dedicated Matlab and Java software were used to perform the imaging procedure. The developed system can be used for experimental conductivity distribution imaging and further research work.

  8. Combined registration of 3D tibia and femur implant models in 3D magnetic resonance images

    Science.gov (United States)

    Englmeier, Karl-Hans; Siebert, Markus; von Eisenhart-Rothe, Ruediger; Graichen, Heiko

    2008-03-01

    The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to unphysiological kinematics of the knee implant. To assess the postoperative kinematics of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. We therefore developed a registration method for fitting 3D CAD models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. First, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Second, an initial preconfiguration of the implants by the user is required: the user performs a rough preconfiguration of both prosthesis models so that the fine matching process has a reasonable starting point. An automated gradient-based fine matching process then determines the best absolute position and orientation: this iterative process changes all 6 parameters (3 rotational and 3 translational) of a model by a minimal amount until a maximum of the matching function is reached. To examine the spread of the final registration solutions, the interobserver variability was measured in a group of testers. This variability, expressed as the relative standard deviation, improved from about 50% (purely manual registration) to 0.5% (rough manual preconfiguration followed by the automatic fine matching process).
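
    The 6-parameter fine-matching idea can be sketched as a generic pose optimisation; the matching score below (mean MR intensity sampled at transformed model points) and the derivative-free Powell optimiser are stand-ins for the paper's own gradient-based matching function, and all data are placeholders.

    # Refine a 6-DOF pose (3 rotations + 3 translations) of a model against a volume.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.spatial.transform import Rotation
    from scipy.ndimage import map_coordinates

    def matching_score(params, model_points, mr_volume):
        """Negative mean MR intensity sampled at the transformed model surface points."""
        rx, ry, rz, tx, ty, tz = params
        R = Rotation.from_euler("xyz", [rx, ry, rz]).as_matrix()
        moved = model_points @ R.T + np.array([tx, ty, tz])
        sampled = map_coordinates(mr_volume, moved.T, order=1, mode="nearest")
        return -float(sampled.mean())

    # Placeholder data: random surface points of the implant model and a random volume.
    model_points = np.random.rand(500, 3) * 40 + 30
    mr_volume = np.random.rand(128, 128, 128)

    initial = np.zeros(6)  # a rough manual preconfiguration would supply this in practice
    result = minimize(matching_score, initial, args=(model_points, mr_volume),
                      method="Powell")
    print(result.x)        # refined rotation (rad) and translation (voxels)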

  9. Interactive 2D to 3D stereoscopic image synthesis

    Science.gov (United States)

    Feldman, Mark H.; Lipton, Lenny

    2005-03-01

    Advances in stereoscopic display technologies, graphic card devices, and digital imaging algorithms have opened up new possibilities in synthesizing stereoscopic images. The power of today's DirectX/OpenGL optimized graphics cards, together with new and creative imaging tools found in software products such as Adobe Photoshop, provides a powerful environment for converting planar drawings and photographs into stereoscopic images. The basis for such a creative process is the focus of this paper. This article presents a novel technique, which uses advanced imaging features and custom Windows-based software that utilizes the DirectX 9 API to provide the user with an interactive stereo image synthesizer. By creating an accurate and interactive world scene with moveable and flexible depth-map-altered textured surfaces, and perspective stereoscopic cameras with both visible frustums and zero-parallax planes, a user can precisely model a virtual three-dimensional representation of a real-world scene. Current versions of Adobe Photoshop provide a creative user with a rich assortment of tools needed to highlight elements of a 2D image, simulate hidden areas, and creatively shape them for a 3D scene representation. The technique described has been implemented as a Photoshop plug-in and thus allows for a seamless transition of these 2D image elements into 3D surfaces, which are subsequently rendered to create stereoscopic views.
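
    The core 2D-to-3D step, shifting pixels horizontally by a disparity derived from a depth map to form left and right views, can be sketched as follows; the depth-to-disparity scaling and the lack of hole filling are simplifications, not the plug-in's actual algorithm.

    # Synthesize a stereo pair from a planar image plus a depth map by horizontal shifting.
    import numpy as np

    def synthesize_views(image, depth, max_disparity=8):
        """image: HxWx3 uint8, depth: HxW in [0,1] (1 = nearest). Returns (left, right)."""
        h, w = depth.shape
        disparity = np.round(depth * max_disparity).astype(int)
        left = np.zeros_like(image)
        right = np.zeros_like(image)
        cols = np.arange(w)
        for y in range(h):
            xl = np.clip(cols + disparity[y] // 2, 0, w - 1)
            xr = np.clip(cols - disparity[y] // 2, 0, w - 1)
            left[y, xl] = image[y, cols]
            right[y, xr] = image[y, cols]
        return left, right

    image = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)   # placeholder planar image
    depth = np.tile(np.linspace(0, 1, 320), (240, 1))              # simple left-to-right depth ramp
    left_view, right_view = synthesize_views(image, depth)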

  10. Diffractive centrosymmetric 3D-transmission phase gratings positioned at the image plane of optical systems transform lightlike 4D-WORLD as tunable resonators into spectral metrics...

    Science.gov (United States)

    Lauinger, Norbert

    1999-08-01

    Diffractive 3D phase gratings of spherical scatterers densely packed in hexagonal geometry represent adaptively tunable 4D spatiotemporal filters with trichromatic resonance in the visible spectrum. They are described in the λ-chromatic and the reciprocal ν-aspects by reciprocal geometric translations of the lightlike Pythagoras theorem, and by the direction cosine for double cones. The most elementary resonance condition in the lightlike Pythagoras theorem is given by the transformation of the grating constants gx, gy, gz of the hexagonal 3D grating to λ(h1h2h3) = λ(111) with cos α = 0.5. Through normalization of the chromaticity in the von Laue interferences to λ(111), the νλ = λ(h1h2h3)/λ(111) factor of phase velocity becomes the crucial resonance factor, the 'regulating device' of the spatiotemporal interaction between the 3D grating and light, space and time. In reciprocal space, equal/unequal weights and times in spectral metrics result at positions of interference maxima defined by hyperbolas and circles. A database is built up by optical interference for trichromatic image preprocessing, motion detection in vector space, multiple range data analysis, patchwide multiple correlations in the spatial frequency spectrum, etc.

  11. 3D Image Reconstruction from Compton camera data

    CERN Document Server

    Kuchment, Peter

    2016-01-01

    In this paper, we address analytically and numerically the inversion of the integral transform (cone or Compton transform) that maps a function on R^3 to its integrals over conical surfaces. It arises in a variety of imaging techniques, e.g. in astronomy, optical imaging, and homeland security imaging, especially when so-called Compton cameras are involved. Several inversion formulas are developed and implemented numerically in 3D (the much simpler 2D case was considered in a previous publication).

  12. 3D Viewer Platform of Cloud Clustering Management System: Google Map 3D

    Science.gov (United States)

    Choi, Sung-Ja; Lee, Gang-Soo

    A new management framework is needed for cloud environments as the underlying computing platforms converge. Management systems offered by large vendors are difficult for ISVs and small businesses to adopt. This article proposes a clustering management system for cloud computing environments aimed at ISVs and small-business enterprises. It applies a 3D viewer adapted from Google Map 3D and Google Earth, and is called 3DV_CCMS as an extension of the CCMS [1].

  13. Combining Different Modalities for 3D Imaging of Biological Objects

    CERN Document Server

    Tsyganov, E; Kulkarni, P; Mason, R; Parkey, R; Seliuonine, S; Shay, J; Soesbe, T; Zhezher, V; Zinchenko, A I

    2005-01-01

    A resolution-enhanced NaI(Tl)-scintillator micro-SPECT device using pinhole collimator geometry has been built and tested with small animals. The device is based on a depth-of-interaction measurement, using a thick scintillator crystal and a position-sensitive PMT to measure depth-dependent scintillator light profiles. Such a measurement eliminates the parallax error that degrades the high spatial resolution required for small-animal imaging. This novel technique for 3D gamma-ray detection was incorporated into the micro-SPECT device and tested with a 57Co source and 98mTc-MDP injected into mice. To further enhance the investigative power of tomographic imaging, different imaging modalities can be combined. In particular, as proposed and shown in this paper, optical imaging permits a 3D reconstruction of the animal's skin surface, improving visualization and making possible the depth-dependent corrections necessary for 3D bioluminescence reconstruction in biological objects. ...

  14. Comparison of 3D Synthetic Aperture Imaging and Explososcan using Phantom Measurements

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer; Férin, Guillaume; Dufait, Rémi

    2012-01-01

    In this paper, initial 3D ultrasound measurements from a 1024-channel system are presented. Measurements of 3D synthetic aperture imaging (SAI) and Explososcan are presented and compared. Explososcan is the 'gold standard' for real-time 3D medical ultrasound imaging. SAI is compared to Explososcan ... by using tissue and wire phantom measurements. The measurements are carried out using a 1024-element 2D transducer and the 1024-channel experimental ultrasound scanner SARUS. To make a fair comparison, the two imaging techniques use the same number of active channels, the same number of emissions per frame...

  15. A real-time noise filtering strategy for photon counting 3D imaging lidar.

    Science.gov (United States)

    Zhang, Zijing; Zhao, Yuan; Zhang, Yong; Wu, Long; Su, Jianzhong

    2013-04-22

    For a direct-detection 3D imaging lidar, the use of a Geiger-mode avalanche photodiode (Gm-APD) can greatly enhance the detection sensitivity of the lidar system, since each range measurement requires only a single detected photon. Furthermore, the Gm-APD offers significant advantages in reducing the size, mass, power and complexity of the system. However, the inevitable noise, including background noise and dark counts, remains a significant challenge to obtaining a clear 3D image of the target of interest. This paper presents a smart strategy that filters out false alarms during acquisition of the raw time-of-flight (TOF) data and obtains a clear 3D image in real time. As a result, a clear 3D image is obtained from the experimental system despite the background noise on a sunny day.
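
    One common real-time rejection idea for Gm-APD data, shown below, is to histogram each pixel's raw TOF returns over several pulses and keep only a sufficiently repeated peak; this is a generic illustration and not necessarily the exact strategy of the paper.

    # Per-pixel TOF histogram: a repeated return time is kept, isolated noise counts are discarded.
    import numpy as np

    def filter_tof(tof_events, n_bins=1000, t_max=1e-6, min_counts=3):
        """tof_events: photon arrival times (s) for one pixel over many pulses.
        Returns the estimated target TOF, or None if no bin is credible."""
        counts, edges = np.histogram(tof_events, bins=n_bins, range=(0.0, t_max))
        peak = int(np.argmax(counts))
        if counts[peak] < min_counts:
            return None                      # likely only noise: no repeated return time
        return 0.5 * (edges[peak] + edges[peak + 1])

    # Example: a true return near 400 ns buried in uniformly distributed noise counts.
    rng = np.random.default_rng(0)
    events = np.concatenate([rng.uniform(0, 1e-6, 40), rng.normal(4e-7, 2e-9, 10)])
    print(filter_tof(events))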

  16. A new 2D and 3D imaging system for musculo-skeletal physiology and pathology with low radiation dose in the standing position: the EOS system

    Energy Technology Data Exchange (ETDEWEB)

    Dubousset, J. [Academie Nationale de Medecine, et Hopital Saint Vincent de Paul, Service de Chirurgie Orthopedique, 75 - Paris (France); Charpak, G.; Dorion, I. [Biospace, Instruments, 75 - Paris (France); Skalli, W.; Lavaste, F. [Ecole Nationale Superieure des Arts et Metiers, 75 - Paris (France); Deguise, J. [Laboratoire de Recherche en Imagerie Orthopedique, Montreal (Canada); Kalifa, G.; Ferey, S. [Hopital Saint Vincent de Paul, Service de Radiologie, 75 - Paris (France)

    2005-06-01

    Close collaboration between multidisciplinary partners (radiation physicists, biomechanical engineers, medical radiologists and pediatric orthopedic surgeons) led to the concept and development of a new low-radiation-dose device named EOS. The device offers three main advantages. (1) Thanks to the invention of Georges Charpak (Nobel Prize 1992), who designed gaseous detectors for X-rays, the dose needed to obtain a good image of the skeletal system is 8 to 10 times lower for 2D imaging; compared with the dose needed for a 3D reconstruction from CT slices, the reduction factor is 800 to 1000. (2) The accuracy of the 3D reconstruction obtained is as good as that of a 3D reconstruction from CT slices. (3) In addition, the patient is imaged in a standing, functional position, with simultaneous AP and lateral X-rays acquired from head to feet. This is a major advantage over CT, which is used only in the lying position. From these simultaneous whole-body AP and lateral X-rays, and thanks to the 3D bone external-envelope technique, the biomechanical engineers can obtain 3D reconstructions of every level of the osteo-articular system in the standing position within an acceptable time (15 to 30 minutes). Despite the evolution of standing MRI, this allows more precise bone reconstruction in orthopedics, especially for the spine and lower limbs. Moreover, studying the entire skeleton in a standing, functional position, instead of the small segmented studies provided by CT in the lying position, is a real improvement for both the physiology and the pathology of bone and joint disorders, and especially for spinal pathology. (author)

  17. 3D scene reconstruction based on 3D laser point cloud combining UAV images

    Science.gov (United States)

    Liu, Huiyun; Yan, Yangyang; Zhang, Xitong; Wu, Zhenzhen

    2016-03-01

    Capturing and modeling 3D information of the built environment is a major challenge. A number of techniques and technologies are now in use, including GPS, photogrammetric and remote sensing applications. The experiment uses multi-source data fusion for 3D scene reconstruction based on 3D laser scanning, with the laser point cloud as the primary data source, a digital orthophoto map as an auxiliary source, and 3DsMAX software as the basic modeling tool. The article covers data acquisition, data preprocessing and 3D scene construction. The results show that the reconstructed 3D scene is more faithful to reality and that its accuracy meets the needs of 3D scene construction.

  18. Image Appraisal for 2D and 3D Electromagnetic Inversion

    Energy Technology Data Exchange (ETDEWEB)

    Alumbaugh, D.L.; Newman, G.A.

    1999-01-28

    Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two and three dimensional non-linear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be directly calculated. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process such as data noise and incorrect a priori assumptions about the imaged model map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic cross well EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
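
    For a regularized linearized inverse, the appraisal quantities discussed above follow directly from the Jacobian, as in the following sketch; the Jacobian, data covariance and regularization weight are placeholder values, and the conjugate-gradient estimation used for 3D is not reproduced here.

    # Model resolution matrix and posterior covariance for a regularized least-squares inverse.
    import numpy as np

    G = np.random.rand(40, 25)            # Jacobian (sensitivity) matrix, data x model (placeholder)
    Cd_inv = np.eye(40)                   # inverse data covariance (placeholder)
    lam = 0.1                             # regularization weight (placeholder)

    Ginv = np.linalg.inv(G.T @ Cd_inv @ G + lam * np.eye(25)) @ G.T @ Cd_inv
    R = Ginv @ G                          # model resolution matrix: columns show parameter smearing
    Cm = Ginv @ np.linalg.inv(Cd_inv) @ Ginv.T   # posterior model covariance estimate
    spread = np.sqrt(np.diag(Cm))         # per-parameter error map (cf. the images discussed above)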

  19. Embryonic staging using a 3D virtual reality system

    NARCIS (Netherlands)

    C.M. Verwoerd-Dikkeboom (Christine); A.H.J. Koning (Anton); P.J. van der Spek (Peter); N. Exalto (Niek); R.P.M. Steegers-Theunissen (Régine)

    2008-01-01

    textabstractBACKGROUND: The aim of this study was to demonstrate that Carnegie Stages could be assigned to embryos visualized with a 3D virtual reality system. METHODS: We analysed 48 3D ultrasound scans of 19 IVF/ICSI pregnancies at 7-10 weeks' gestation. These datasets were visualized as 3D 'holog

  20. 3D reconstruction of concave surfaces using polarisation imaging

    Science.gov (United States)

    Sohaib, A.; Farooq, A. R.; Ahmed, J.; Smith, L. N.; Smith, M. L.

    2015-06-01

    This paper presents a novel algorithm for improved shape recovery using polarisation-based photometric stereo. The majority of previous research using photometric stereo involves 3D reconstruction using both the diffuse and specular components of light; however, this paper suggests the use of the specular component only as it is the only form of light that comes directly off the surface without subsurface scattering or interreflections. Experiments were carried out on both real and synthetic surfaces. Real images were obtained using a polarisation-based photometric stereo device while synthetic images were generated using PovRay® software. The results clearly demonstrate that the proposed method can extract three-dimensional (3D) surface information effectively even for concave surfaces with complex texture and surface reflectance.
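
    The photometric-stereo normal recovery that underlies the approach can be sketched with the standard Lambertian least-squares solve; the light directions and images below are placeholders, and the polarisation-based separation of the specular component is not reproduced.

    # Classic photometric stereo: solve I = L.n per pixel for albedo-scaled normals.
    import numpy as np

    def photometric_stereo(images, light_dirs):
        """images: K x H x W stack, light_dirs: K x 3 (approximately unit) vectors.
        Returns H x W x 3 unit surface normals."""
        K, H, W = images.shape
        I = images.reshape(K, -1)                            # K x (H*W)
        G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)   # 3 x (H*W), albedo * normal
        norms = np.linalg.norm(G, axis=0) + 1e-8
        return (G / norms).T.reshape(H, W, 3)

    lights = np.array([[0, 0, 1], [0.5, 0, 0.87], [0, 0.5, 0.87], [-0.5, 0, 0.87]], float)
    images = np.random.rand(4, 64, 64)                       # placeholder intensity images
    normals = photometric_stereo(images, lights)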

  1. [3D imaging benefits in clinical practice of orthodontics].

    Science.gov (United States)

    Frèrejouand, Emmanuel

    2016-12-01

    3D imaging possibilities have expanded considerably in the orthodontic field over the last few years. In 2016, 3D imaging can be used to improve diagnosis and treatment planning by combining digital set-ups with CBCT. It is relevant for updating orthodontic mechanics through the creation of visible or invisible customised appliances, and it forms the basis of numerous scientific studies. The author explains the progress 3D imaging brings to diagnosis and clinical work, but also highlights the requirements it creates. The daily use of these processes in orthodontic clinical practice needs to be regulated with regard to the benefit/risk ratio and patient satisfaction. Mastering the digital workflow created by these techniques requires changes of habit from the orthodontist and his staff. © EDP Sciences, SFODF, 2016.

  2. Simulating receptive fields of human visual cortex for 3D image quality prediction.

    Science.gov (United States)

    Shao, Feng; Chen, Wanting; Lin, Wenchong; Jiang, Qiuping; Jiang, Gangyi

    2016-07-20

    Quality assessment of 3D images presents many challenges when attempting to gain a better understanding of the human visual system. In this paper, we propose a new 3D image quality prediction approach by simulating receptive fields (RFs) of the human visual cortex. To be more specific, we extract the RFs from a complete visual pathway and calculate their similarity indices between the reference and distorted 3D images. The final quality score is obtained by determining their connections via support vector regression. Experimental results on three 3D image quality assessment databases demonstrate that, in comparison with the most relevant existing methods, the devised algorithm achieves high consistency with subjective assessment, especially for asymmetrically distorted stereoscopic images.
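
    The final regression step can be sketched with scikit-learn's support vector regression; the feature vectors and opinion scores below are random placeholders standing in for the receptive-field similarity indices and subjective scores.

    # Map per-image similarity features to subjective quality scores with SVR.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    features = rng.random((200, 8))          # placeholder similarity indices for 200 training images
    mos = rng.random(200) * 4 + 1            # placeholder mean opinion scores in [1, 5]

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(features, mos)
    predicted_quality = model.predict(rng.random((1, 8)))   # score for a new 3D image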

  3. 3D reconstruction of multiple stained histology images

    Directory of Open Access Journals (Sweden)

    Yi Song

    2013-01-01

    Full Text Available Context: Three-dimensional (3D) tissue reconstruction from histology images with different stains allows the spatial alignment of structural and functional elements highlighted by different stains for quantitative study of many physiological and pathological phenomena. This has significant potential to improve the understanding of growth patterns and the spatial arrangement of diseased cells, and to enhance the study of the biomechanical behavior of tissue structures towards better treatments (e.g. tissue-engineering applications). Methods: This paper evaluates three strategies for 3D reconstruction from sets of two-dimensional (2D) histological sections with different stains, by combining methods of 2D multi-stain registration and 3D volumetric reconstruction from same-stain sections. Setting and Design: The different strategies have been evaluated on two liver specimens (80 sections in total) stained with Hematoxylin and Eosin (H and E), Sirius Red, and Cytokeratin (CK) 7. Results and Conclusion: A strategy of using multi-stain registration to align images of a second stain to a volume reconstructed by same-stain registration results in the lowest overall error, although an interlaced image registration approach may be more robust to poor section quality.

  4. Joint calibration of 3D resist image and CDSEM

    Science.gov (United States)

    Chou, C. S.; He, Y. Y.; Tang, Y. P.; Chang, Y. T.; Huang, W. C.; Liu, R. G.; Gau, T. S.

    2013-04-01

    Traditionally, an optical proximity correction model is to evaluate the resist image at a specific depth within the photoresist and then extract the resist contours from the image. Calibration is generally implemented by comparing resist contours with the critical dimensions (CD). The wafer CD is usually collected by a scanning electron microscope (SEM), which evaluates the CD based on some criterion that is a function of gray level, differential signal, threshold or other parameters set by the SEM. However, the criterion does not reveal which depth the CD is obtained at. This depth inconsistency between modeling and SEM makes the model calibration difficult for low k1 images. In this paper, the vertical resist profile is obtained by modifying the model from planar (2D) to quasi-3D approach and comparing the CD from this new model with SEM CD. For this quasi-3D model, the photoresist diffusion along the depth of the resist is considered and the 3D photoresist contours are evaluated. The performance of this new model is studied and is better than the 2D model.

  5. Discrete Method of Images for 3D Radio Propagation Modeling

    Science.gov (United States)

    Novak, Roman

    2016-09-01

    Discretization by rasterization is introduced into the method of images (MI) in the context of 3D deterministic radio propagation modeling as a way to exploit spatial coherence of electromagnetic propagation for fine-grained parallelism. Traditional algebraic treatment of bounding regions and surfaces is replaced by computer graphics rendering of 3D reflections and double refractions while building the image tree. The visibility of reception points and surfaces is also resolved by shader programs. The proposed rasterization is shown to be of comparable run time to that of the fundamentally parallel shooting and bouncing rays. The rasterization does not affect the signal evaluation backtracking step, thus preserving its advantage over the brute force ray-tracing methods in terms of accuracy. Moreover, the rendering resolution may be scaled back for a given level of scenario detail with only marginal impact on the image tree size. This allows selection of scene optimized execution parameters for faster execution, giving the method a competitive edge. The proposed variant of MI can be run on any GPU that supports real-time 3D graphics.

  6. Preliminary examples of 3D vector flow imaging

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes; Stuart, Matthias Bo; Tomov, Borislav Gueorguiev

    2013-01-01

    and visualized using three alternative approaches. Practically no in-plane motion (vx and vz) is measured, whereas the out-of-plane motion (vy) and the velocity magnitude exhibit the expected 2D circular-symmetric parabolic shape. It is shown that the ultrasound method is suitable for real-time data acquisition... ultrasound scanner SARUS on a flow-rig system with steady flow. The vessel of the flow rig is centered at a depth of 30 mm, and the flow has an expected 2D circular-symmetric parabolic profile with a peak velocity of 1 m/s. Ten frames of 3D vector flow images are acquired in a cross-sectional plane orthogonal... to the center axis of the vessel, which coincides with the y-axis and the flow direction. Hence, only out-of-plane motion is expected. This motion cannot be measured by typical commercial scanners employing 1D arrays. Each frame consists of 16 flow lines steered from -15 to 15 degrees in steps of 2 degrees...

  7. Expert System for 3D Collar Intelligent Design

    Institute of Scientific and Technical Information of China (English)

    LIU Yan; GENG Zhao-feng

    2004-01-01

    A method to set up a 3D collar prototype is developed in this paper using cubic splines and bicubic surface patches. The relationship between the parameters of the 3D collar prototype and different collar styles is then studied. Based on this relationship, algorithms are developed to translate style requirements into parameter values of the collar prototype, and generation rules for 3D collar style design are obtained. On this basis, the knowledge base is constructed and an intelligent design system for 3D collar styles is built. Using the system, various 3D collar styles can be designed automatically to satisfy various style requirements.

  8. Random-Profiles-Based 3D Face Recognition System

    Directory of Open Access Journals (Sweden)

    Joongrock Kim

    2014-03-01

    Full Text Available In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of the acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consist of more than 50 k 3D points and that a reliable recognition rate is achieved against pose variation.

  9. 3D thermal medical image visualization tool: Integration between MRI and thermographic images.

    Science.gov (United States)

    Abreu de Souza, Mauren; Chagas Paz, André Augusto; Sanches, Ionildo Jóse; Nohama, Percy; Gamba, Humberto Remigio

    2014-01-01

    Three-dimensional medical image reconstruction using different image modalities requires registration techniques that are, in general, based on the stacking of 2D MRI/CT image slices. The integration of two different imaging modalities, anatomical (MRI/CT) and physiological information (infrared images), to generate a 3D thermal model is a new methodology still under development. This paper presents a 3D THERMO interface that provides flexibility for the 3D visualization: it incorporates the DICOM parameters; different color scale palettes for the final 3D model; 3D visualization at different planes of section; and a filtering option that provides better image visualization. In summary, 3D thermographic medical image visualization provides a realistic and precise medical tool. The merging of two different imaging modalities allows better quality and more fidelity, especially for medical applications in which temperature changes are clinically significant.

  10. Determining optimum red filter slide distance on creating 3D electron microscope images using anaglyph method

    Science.gov (United States)

    Tresna, W. P.; Isnaeni

    2017-04-01

    A Scanning Electron Microscope (SEM) is a proven instrument for analyzing materials, producing a 2D image of an object. However, producing a 3D image in an SEM system is usually difficult and costly. A simple method to produce a 3D image is to use two light sources with a red and a blue filter combined at a certain angle. In this experiment, the authors simulated 3D image formation using the anaglyph method by finding the optimum shift of the red and blue filters in an SEM image. The method used was image processing that applied a digital shift of a given distance from the central point of the main object. The simulation of an SEM image with a magnification of 5000 times showed that an optimal 3D effect was achieved when the red filter was shifted by 1 μm to the right and the blue filter by 1 μm to the left of the central position. The result of this simulation can be used to better understand the viewing angle and the optimal position of the two light sources, i.e. the red and blue filter pair. The produced 3D image can be clearly seen using 3D glasses.
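
    The anaglyph composition itself reduces to combining colour channels from the two laterally shifted views, as in the sketch below; the shift amount, the red-blue channel assignment and the synthetic image are assumptions for illustration.

    # Build a red-blue anaglyph from two laterally shifted views of the same image.
    import numpy as np

    def make_anaglyph(red_view, blue_view):
        """red_view / blue_view: HxWx3 uint8 images. Returns a red-blue anaglyph."""
        anaglyph = np.zeros_like(red_view)
        anaglyph[..., 0] = red_view[..., 0]    # red channel from the red-filtered view
        anaglyph[..., 2] = blue_view[..., 2]   # blue channel from the blue-filtered view
        return anaglyph

    def shift_horizontally(img, pixels):
        """Shift an image horizontally by a whole number of pixels (positive = right)."""
        return np.roll(img, pixels, axis=1)

    gray = (np.random.rand(512, 512) * 255).astype(np.uint8)   # stand-in for an SEM image
    rgb = np.stack([gray] * 3, axis=-1)
    anaglyph = make_anaglyph(shift_horizontally(rgb, +1), shift_horizontally(rgb, -1))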

  11. Underwater 3d Modeling: Image Enhancement and Point Cloud Filtering

    Science.gov (United States)

    Sarakinou, I.; Papadimitriou, K.; Georgoula, O.; Patias, P.

    2016-06-01

    This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manually editing the images' radiometry (captured at shallow depths) and of selecting parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5m, 10m and 14m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of the images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (at 10m and 14m) and different objects (part of a wreck and a small boat's wreck), in the context of ongoing research at the Laboratory of Photogrammetry and Remote Sensing.

  12. Phase Sensitive Cueing for 3D Objects in Overhead Images

    Energy Technology Data Exchange (ETDEWEB)

    Paglieroni, D W; Eppler, W G; Poland, D N

    2005-02-18

    A 3D solid model-aided object cueing method that matches phase angles of directional derivative vectors at image pixels to phase angles of vectors normal to projected model edges is described. It is intended for finding specific types of objects at arbitrary position and orientation in overhead images, independent of spatial resolution, obliqueness, acquisition conditions, and type of imaging sensor. It is shown that the phase similarity measure can be efficiently evaluated over all combinations of model position and orientation using the FFT. The highest degree of similarity over all model orientations is captured in a match surface of similarity values vs. model position. Unambiguous peaks in this surface are sorted in descending order of similarity value, and the small image thumbnails that contain them are presented to human analysts for inspection in sorted order.
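
    As an illustration of evaluating a phase-based similarity over all translations with the FFT, the sketch below represents local gradient directions as unit complex numbers and computes their circular cross-correlation, whose real part sums the cosines of the phase differences. It is a simplified stand-in for the method described (single model orientation, no model projection step), with illustrative function names.

```python
import numpy as np

def phase_match_surface(image, model_edges):
    """Correlate the gradient-direction phase of `image` with the phase of a
    rendered model edge map over all translations, using the FFT.
    Both inputs are 2-D float arrays of the same shape (model zero-padded)."""
    def phase_field(a):
        gy, gx = np.gradient(a)
        mag = np.hypot(gx, gy)
        phase = np.exp(1j * np.arctan2(gy, gx))
        return np.where(mag > 0, phase, 0)    # keep only pixels with a gradient

    f_img = np.fft.fft2(phase_field(image))
    f_mod = np.fft.fft2(phase_field(model_edges))
    # Real part of the circular cross-correlation = sum of cos(phase differences)
    corr = np.real(np.fft.ifft2(f_img * np.conj(f_mod)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    return corr, peak   # match surface over model position, and its best peak
```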

  13. Remarks on 3D human body posture reconstruction from multiple camera images

    Science.gov (United States)

    Nagasawa, Yusuke; Ohta, Takako; Mutsuji, Yukiko; Takahashi, Kazuhiko; Hashimoto, Masafumi

    2007-12-01

    This paper proposes a human body posture estimation method based on back projection of human silhouette images extracted from multi-camera images. To achieve real-time 3D human body posture estimation, a server-client architecture is introduced into the multi-camera system, and improvements to the background subtraction and back projection are investigated. To evaluate the feasibility of the proposed method, 3D estimation experiments of human body posture are carried out. An experimental system with six CCD cameras is assembled, and the experimental results confirm both the feasibility and the effectiveness of the proposed system for real-time 3D human body posture estimation. Using the 3D reconstruction of human body posture, a simple walk-through application of a virtual reality system is demonstrated.

  15. Advanced 3-D analysis, client-server systems, and cloud computing-Integration of cardiovascular imaging data into clinical workflows of transcatheter aortic valve replacement.

    Science.gov (United States)

    Schoenhagen, Paul; Zimmermann, Mathis; Falkner, Juergen

    2013-06-01

    Degenerative aortic stenosis is highly prevalent in the aging populations of industrialized countries and is associated with poor prognosis. Surgical valve replacement has been the only established treatment with documented improvement of long-term outcome. However, many of the older patients with aortic stenosis (AS) are at high risk or ineligible for surgery. For these patients, transcatheter aortic valve replacement (TAVR) has emerged as a treatment alternative. The TAVR procedure is characterized by a lack of visualization of the operative field. Therefore, pre- and intra-procedural imaging is critical for patient selection, pre-procedural planning, and intra-operative decision-making. Incremental to conventional angiography and 2-D echocardiography, multidetector computed tomography (CT) has assumed an important role before TAVR. The analysis of 3-D CT data requires extensive post-processing during direct interaction with the dataset, using advanced analysis software. Organization and storage of the data according to complex clinical workflows and sharing of image information have become a critical part of these novel treatment approaches. Optimally, the data are integrated into a comprehensive image data file accessible to multiple groups of practitioners across the hospital. This creates new challenges for data management, requiring a complex IT infrastructure spanning multiple locations, which is increasingly achieved with client-server solutions and private cloud technology. This article describes the challenges and opportunities created by the increased amount of patient-specific imaging data in the context of TAVR.

  16. 3D workflow for HDR image capture of projection systems and objects for CAVE virtual environments authoring with wireless touch-sensitive devices

    Science.gov (United States)

    Prusten, Mark J.; McIntyre, Michelle; Landis, Marvin

    2006-02-01

    A 3D workflow pipeline is presented for High Dynamic Range (HDR) image capture of projected scenes or objects for presentation in CAVE virtual environments. The methods of HDR digital photography of environments vs. objects are reviewed. Samples of both types of virtual authoring, namely the actual CAVE environment and a sculpture, are shown. A series of software tools are incorporated into a pipeline called CAVEPIPE, allowing high-resolution objects and scenes to be composited together in natural illumination environments [1] and presented in our CAVE virtual reality environment. We also present a way to enhance the user interface for CAVE environments. Traditional methods of controlling navigation through virtual environments include gloves, HUDs and 3D mouse devices. By integrating a wireless network that supports both WiFi (IEEE 802.11b/g) and Bluetooth (IEEE 802.15.1) protocols, the non-graphical input control device can be eliminated. Wireless devices can then be added, including PDAs, smartphones, Tablet PCs, portable gaming consoles, and Pocket PCs.

  17. High-throughput imaging: Focusing in on drug discovery in 3D.

    Science.gov (United States)

    Li, Linfeng; Zhou, Qiong; Voss, Ty C; Quick, Kevin L; LaBarbera, Daniel V

    2016-03-01

    3D organotypic culture models such as organoids and multicellular tumor spheroids (MCTS) are becoming more widely used for drug discovery and toxicology screening. As a result, 3D culture technologies adapted for high-throughput screening formats are prevalent. While a multitude of assays have been reported and validated for high-throughput imaging (HTI) and high-content screening (HCS) for novel drug discovery and toxicology, only limited HTI/HCS with large compound libraries has been reported. Nonetheless, 3D HTI instrumentation technology is advancing and is now on the verge of allowing 3D HCS of thousands of samples. This review focuses on state-of-the-art high-throughput imaging systems, including hardware and software, and recent literature examples of 3D organotypic culture models employing this technology for drug discovery and toxicology screening.

  18. UNDERWATER 3D MODELING: IMAGE ENHANCEMENT AND POINT CLOUD FILTERING

    Directory of Open Access Journals (Sweden)

    I. Sarakinou

    2016-06-01

    Full Text Available This paper examines the results of image enhancement and point cloud filtering on the visual and geometric quality of 3D models for the representation of underwater features. Specifically, it evaluates the combined effects of manually editing the images' radiometry (captured at shallow depths) and of selecting parameters for point cloud definition and mesh building (processed in 3D modeling software). Such datasets are usually collected by divers, handled by scientists and used for geovisualization purposes. In the presented study, 3D models have been created from three sets of images (seafloor, part of a wreck and a small boat's wreck) captured at three different depths (3.5m, 10m and 14m respectively). Four models have been created from the first dataset (seafloor) in order to evaluate the results of applying image enhancement techniques and point cloud filtering. The main process for this preliminary study included a) the definition of parameters for the point cloud filtering and the creation of a reference model, b) the radiometric editing of the images, followed by the creation of three improved models, and c) the assessment of results by comparing the visual and geometric quality of the improved models against the reference one. Finally, the selected technique is tested on two other datasets in order to examine its appropriateness for different depths (at 10m and 14m) and different objects (part of a wreck and a small boat's wreck), in the context of ongoing research at the Laboratory of Photogrammetry and Remote Sensing.

  19. Confocal Image 3D Surface Measurement with Optical Fiber Plate

    Institute of Scientific and Technical Information of China (English)

    WANG Zhao; ZHU Sheng-cheng; LI Bing; TAN Yu-shan

    2004-01-01

    A whole-field 3D surface measurement system for semiconductor wafer inspection is described. The system consists of an optical fiber plate, which can split the light beam into N² sub-beams to realize the whole-field inspection. A special prism is used to separate the illumination light and signal light. This setup is characterized by high precision, high speed and simple structure.

  20. A Modular and Affordable Time-Lapse Imaging and Incubation System Based on 3D Printed Parts, a Smartphone, and Off-The-Shelf Electronics

    OpenAIRE

    Hernández Vera, Rodrigo; Schwan, Emil; Fatsis-Kavalopoulos, Nikos; Kreuger, Johan

    2016-01-01

    Time-lapse imaging is a powerful tool for studying cellular dynamics and cell behavior over long periods of time to acquire detailed functional information. However, commercially available time-lapse imaging systems are expensive and this has limited a broader implementation of this technique in low-resource environments. Further, the availability of time-lapse imaging systems often present workflow bottlenecks in well-funded institutions. To address these limitations we have designed a modul...

  1. The application of camera calibration in range-gated 3D imaging technology

    Science.gov (United States)

    Liu, Xiao-quan; Wang, Xian-wei; Zhou, Yan

    2013-09-01

    Range-gated laser imaging technology was proposed in 1966 by L. F. Gillespie in the U.S. Army Night Vision Laboratory (NVL). Using a pulsed laser and an intensified charge-coupled device (ICCD) as light source and detector respectively, range-gated laser imaging can realize space-slice imaging while suppressing atmospheric backscatter, and in turn detect the target effectively, by controlling the delay between the laser pulse and the strobe. Owing to constraints on the development of key components such as narrow-pulse lasers and gated imaging devices, research progressed slowly over the following decades. Only at the beginning of this century, as the hardware technology matured, did this technology develop rapidly in fields such as night vision, underwater imaging, biomedical imaging and three-dimensional imaging, especially range-gated three-dimensional (3-D) laser imaging aimed at acquiring target spatial information. 3-D reconstruction is the process of restoring the visible surface geometric structure of 3-D objects from two-dimensional (2-D) images. Range-gated laser imaging can achieve gated imaging of a slice of space to form a slice image, and in turn provide the distance information corresponding to that slice image. But to invert the information of 3-D space, we need to obtain the imaging field of view of the system, that is, the focal length of the system. Then, based on the distance information of the space slice, the spatial information of each unit of space corresponding to each pixel can be inverted. Camera calibration is an indispensable step in 3-D reconstruction, including the analysis of the camera's internal and external parameters. In order to meet the technical requirements of range-gated 3-D imaging, this paper studies the calibration of the zoom lens system. After summarizing camera calibration techniques comprehensively, a classic calibration method based on line is
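
    The slice-to-3D inversion outlined above can be sketched with a simple pinhole model: the gate delay fixes the slice range (r = c·t/2), and the calibrated focal length and principal point map each pixel of that slice to a 3-D point. The function and parameter names below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def slice_range(gate_delay_s):
    """Range of the gated slice: the round-trip time gives r = c * t / 2."""
    return C * gate_delay_s / 2.0

def backproject(u, v, gate_delay_s, fx, fy, cx, cy):
    """Map pixel (u, v) of a gated slice image to a 3-D point, assuming the
    slice lies at depth Z = slice_range(...) along the optical axis."""
    Z = slice_range(gate_delay_s)
    X = (u - cx) / fx * Z
    Y = (v - cy) / fy * Z
    return np.array([X, Y, Z])

# e.g. a 100 ns gate delay corresponds to a slice about 15 m away:
# backproject(640, 360, 100e-9, fx=1200.0, fy=1200.0, cx=640.0, cy=360.0)
```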

  2. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    Science.gov (United States)

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

    In orthognathic surgery, 3D surgical planning that considers the balance between the front and back positions and the symmetry of the jawbone, as well as the dental occlusion of the teeth, is essential. In this study, a support system for orthognathic surgery has been developed to visualize the changes in the mandible and the occlusal condition and to determine the optimum position in mandibular osteotomy. By integrating the operating portion, an entity tooth model manipulated to determine the optimum occlusal position, with the simultaneously displayed real-time 3D-CT skeletal images (3D image display portion), the mandibular position and posture can be determined while taking into account the improvement of skeletal morphology and occlusal condition. The realistic operation of the entity model together with the virtual 3D image display enabled the construction of a surgical simulation system that incorporates augmented reality.

  3. Statistical skull models from 3D X-ray images

    CERN Document Server

    Berar, M; Bailly, G; Payan, Y; Berar, Maxime; Desvignes, Michel; Payan, Yohan

    2006-01-01

    We present two statistical models of the skull and mandible built upon an elastic registration method for 3D meshes. The aim of this work is to relate degrees of freedom of skull anatomy, as such static relations are of main interest for anthropology and legal medicine. Statistical models can effectively provide reconstructions together with statistical precision. In our applications, patient-specific meshes of the skull and the mandible are high-density meshes extracted from 3D CT scans. All patient-specific meshes are registered in a subject-shared reference system using our 3D-to-3D elastic matching algorithm. Registration is based upon the minimization of a distance between the high-density mesh and a shared low-density mesh, defined on the vertices, in a multi-resolution approach. A Principal Component Analysis is performed on the normalised registered data to build a statistical linear model of the skull and mandible shape variation. The accuracy of the reconstruction is under the millimetre in the shape...
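
    A minimal sketch of the linear statistical model step, assuming the registered meshes already share vertex correspondence: the meshes are flattened to vectors, centred, and decomposed by PCA (via SVD), and new shapes are synthesized from mode coefficients. Array shapes and the number of retained modes are illustrative.

```python
import numpy as np

def build_shape_model(meshes, n_modes=10):
    """meshes: array of shape (n_subjects, n_vertices, 3) with registered,
    vertex-corresponding skull/mandible meshes.
    Returns the mean shape, the first principal modes and their variances."""
    n_subjects = meshes.shape[0]
    X = meshes.reshape(n_subjects, -1)          # flatten each mesh to a vector
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data gives the principal modes of shape variation
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    modes = Vt[:n_modes]                        # (n_modes, 3*n_vertices)
    variances = (S[:n_modes] ** 2) / (n_subjects - 1)
    return mean, modes, variances

def synthesize(mean, modes, coeffs):
    """Reconstruct a shape from mode coefficients b: x = mean + b @ modes."""
    return (mean + np.asarray(coeffs) @ modes).reshape(-1, 3)
```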

  4. USO DE IMÁGENES 3D DEL SISTEMA VENTRICULAR ENCEFALICO OBTENIDAS POR SISTEMA DE NEURONAVEGACIÓN EN LA ENSEÑANZA DE LA NEUROANATOMÍA EN EL PREGRADO. TRIDIMENSIONAL IMAGES OF THE VENTRICULAR SYSTEM OBTAINED IN A NEURONAVIGATOR SYSTEM AS A TOOL FOR NEUROANATOMY TEACHING-LEARNING.

    OpenAIRE

    Fonoff, Erich T; Eduardo J.L Alho; Gonzalo Estapé Carriquiry; Fernando Martínez Benia

    2010-01-01

    Introduction: The anatomy of the cerebral ventricles is very complex. Classically, ventricular system anatomy has been taught employing cadaveric brains and CT or MRI images. We present 3D images of the ventricular system obtained by a neuronavigation system and the results of their use in teaching the anatomy of the cerebral ventricles. Material and methods: Magnetic resonance images of three patients were obtained. These images were transferred to a neuronavigation system, and a 3D reconstruction of cerebral ve...

  5. Intelligent multisensor concept for image-guided 3D object measurement with scanning laser radar

    Science.gov (United States)

    Weber, Juergen

    1995-08-01

    This paper presents an intelligent multisensor concept for measuring 3D objects using an image-guided laser radar scanner. The fields of application are all kinds of industrial inspection and surveillance tasks where it is necessary to detect, measure and recognize 3D objects at distances of up to 10 m with high flexibility. Such applications might be the surveillance of security areas or container storages, as well as navigation and collision avoidance for autonomous guided vehicles. The multisensor system consists of a standard CCD matrix camera and a 1D laser radar ranger which is mounted on a 2D mirror scanner. With this sensor combination it is possible to acquire gray-scale intensity data as well as absolute 3D information. To improve the system performance and flexibility, the intensity data of the scene captured by the camera can be used to focus the measurement of the 3D sensor on relevant areas. The camera guidance of the laser scanner is useful because the acquisition of spatial information is relatively slow compared to the image sensor's ability to snap an image frame in 40 ms. Relevant areas in a scene are located by detecting edges of objects utilizing various image processing algorithms. The complete sensor system is controlled by three microprocessors carrying out the 3D data acquisition, the image processing tasks and the multisensor integration. The paper deals with the details of the multisensor concept. It describes the process of sensor guidance and 3D measurement and presents some practical results of our research.

  6. Applications of Panoramic Images: from 720° Panorama to Interior 3d Models of Augmented Reality

    Science.gov (United States)

    Lee, I.-C.; Tsai, F.

    2015-05-01

    A series of panoramic images is usually used to generate a 720° panorama image. Although panoramic images are typically used for establishing tour guiding systems, in this research we demonstrate the potential of using panoramic images acquired from multiple sites to create not only a 720° panorama, but also three-dimensional (3D) point clouds and 3D indoor models. Since 3D modeling is one of the goals of this research, the locations of the panoramic sites needed to be carefully planned in order to maintain a robust result for close-range photogrammetry. After the images are acquired, they are processed into 720° panoramas, and these panoramas can be used directly in panorama guiding systems or other applications. In addition to these straightforward applications, interior orientation parameters can also be estimated while generating the 720° panorama. These parameters are focal length, principal point, and lens radial distortion. The panoramic images can then be processed with close-range photogrammetry procedures to extract the exterior orientation parameters and generate 3D point clouds. In this research, VisualSFM, a structure-from-motion software package, is used to estimate the exterior orientation, and the CMVS toolkit is used to generate 3D point clouds. Next, the 3D point clouds are used as references to create building interior models. In this research, Trimble SketchUp was used to build the model, and the 3D point cloud was used to determine the locations of building objects via a plane-finding procedure. In the texturing process, the panorama images are used as the data source for creating model textures. This 3D indoor model was used as an Augmented Reality model replacing a guide map or a floor plan commonly used in an on-line touring guide system. The 3D indoor model generating procedure has been utilized in two research projects: a cultural heritage site at Kinmen, and the Taipei Main Station pedestrian zone guidance and navigation system. The

  7. Concept of Indoor 3D-Route UAV Scheduling System

    DEFF Research Database (Denmark)

    Khosiawan, Yohanes; Nielsen, Izabela Ewa; Do, Ngoc Ang Dung;

    2016-01-01

    The objective of the proposed concept is to develop a methodology to support Unmanned Aerial Vehicle (UAV) operation with a path planning and scheduling system in 3D environments. The proposed 3D path planning and scheduling allows the system to schedule UAV routing to perform tasks in a 3D indoor environment. On top of that, the multi-source productive best-first-search concept also supports efficient real-time scheduling in response to uncertain events. Without human intervention, the proposed work provides an automatic scheduling system for the UAV routing problem in a 3D indoor environment.

  8. 2D and 3D MALDI-imaging: conceptual strategies for visualization and data mining.

    Science.gov (United States)

    Thiele, Herbert; Heldmann, Stefan; Trede, Dennis; Strehlow, Jan; Wirtz, Stefan; Dreher, Wolfgang; Berger, Judith; Oetjen, Janina; Kobarg, Jan Hendrik; Fischer, Bernd; Maass, Peter

    2014-01-01

    3D imaging has a significant impact on many challenges in life sciences, because biology is a 3-dimensional phenomenon. Current 3D imaging technologies (various types of MRI, PET, SPECT) are labeled, i.e. they trace the localization of a specific compound in the body. In contrast, 3D MALDI mass spectrometry imaging (MALDI-MSI) is a label-free method imaging the spatial distribution of molecular compounds. It complements labeled 3D imaging methods, immunohistochemistry, and genetics-based methods. However, 3D MALDI-MSI cannot tap its full potential due to the lack of statistical methods for the analysis and interpretation of large and complex 3D datasets. To overcome this, we established a complete and robust 3D MALDI-MSI pipeline combined with efficient computational data analysis methods for 3D edge-preserving image denoising, 3D spatial segmentation, and finding colocalized m/z values, which are reviewed here in detail. Furthermore, we explain why the integration and correlation of the MALDI imaging data with other imaging modalities enhances the interpretation of the molecular data and provides visualization of molecular patterns that may otherwise not be apparent. Therefore, a 3D data acquisition workflow is described, generating a set of three different dimensional images representing the same anatomies. First, an in-vitro MRI measurement is performed, which results in a three-dimensional image modality representing the 3D structure of the measured object. After sectioning the 3D object into N consecutive slices, all N slices are scanned using an optical digital scanner, enabling the MS measurements to be performed. Scanning the individual sections results in low-resolution images, which define the base coordinate system for the whole pipeline. The scanned images combine the information from the spatial (MRI) and the mass spectrometric (MALDI-MSI) dimensions and are used for the spatial three-dimensional reconstruction of the object performed by image

  9. Augmented reality navigation with automatic marker-free image registration using 3-D image overlay for dental surgery.

    Science.gov (United States)

    Wang, Junchen; Suenaga, Hideyuki; Hoshi, Kazuto; Yang, Liangjing; Kobayashi, Etsuko; Sakuma, Ichiro; Liao, Hongen

    2014-04-01

    Computer-assisted oral and maxillofacial surgery (OMS) has been rapidly evolving since the last decade. State-of-the-art surgical navigation in OMS still suffers from bulky tracking sensors, troublesome image registration procedures, patient movement, loss of depth perception in visual guidance, and low navigation accuracy. We present an augmented reality navigation system with automatic marker-free image registration using 3-D image overlay and stereo tracking for dental surgery. A customized stereo camera is designed to track both the patient and instrument. Image registration is performed by patient tracking and real-time 3-D contour matching, without requiring any fiducial and reference markers. Real-time autostereoscopic 3-D imaging is implemented with the help of a consumer-level graphics processing unit. The resulting 3-D image of the patient's anatomy is overlaid on the surgical site by a half-silvered mirror using image registration and IP-camera registration to guide the surgeon by exposing hidden critical structures. The 3-D image of the surgical instrument is also overlaid over the real one for an augmented display. The 3-D images present both stereo and motion parallax from which depth perception can be obtained. Experiments were performed to evaluate various aspects of the system; the overall image overlay error of the proposed system was 0.71 mm.

  10. Low cost 3D scanning process using digital image processing

    Science.gov (United States)

    Aguilar, David; Romero, Carlos; Martínez, Fernando

    2017-02-01

    This paper presents the design and building of a low-cost 3D scanner, able to digitize solid objects through contactless data acquisition, using active object reflection. 3D scanners are used in different applications such as science, engineering and entertainment; they are classified into contact scanners and contactless ones, the latter being the most widely used although they are expensive. This low-cost prototype performs a vertical scan of the object using a fixed camera and a moving horizontal laser light, which is deformed depending on the 3-dimensional surface of the solid. Using digital image processing, the deformation detected by the camera is analyzed, allowing the 3D coordinates to be determined by triangulation. The obtained information is processed by a Matlab script, which gives the user a point cloud corresponding to each horizontal scan performed. The obtained results show acceptable quality and significant detail of the digitized objects, making this prototype (built on a LEGO Mindstorms NXT kit) a versatile and cheap tool, which can be used for many applications, mainly by engineering students.
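
    The triangulation step can be sketched as a ray-plane intersection, assuming a pinhole camera and a known laser-plane equation in camera coordinates: each detected laser pixel defines a ray, and its intersection with the laser plane gives the 3D surface point. This is a generic sketch, not the authors' Matlab script; all calibration values in the example are assumptions.

```python
import numpy as np

def triangulate_laser_pixels(pixels, fx, fy, cx, cy, plane_n, plane_d):
    """Intersect camera rays through detected laser pixels with the known laser
    plane n . X = d (camera coordinates) to recover 3-D points.
    pixels: (N, 2) array of (u, v) image coordinates of the deformed laser line."""
    u, v = pixels[:, 0], pixels[:, 1]
    # Ray direction for each pixel in the camera frame (pinhole model)
    rays = np.stack([(u - cx) / fx, (v - cy) / fy, np.ones_like(u, dtype=float)], axis=1)
    # Scale each ray so that it satisfies the plane equation n . (t * ray) = d
    t = plane_d / (rays @ plane_n)
    return rays * t[:, None]                  # (N, 3) points on the object surface

# Example with assumed calibration values:
# pts = triangulate_laser_pixels(np.array([[620.0, 240.0]]), 800, 800, 320, 240,
#                                plane_n=np.array([0.7, 0.0, 0.7]), plane_d=0.35)
```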

  11. Effective classification of 3D image data using partitioning methods

    Science.gov (United States)

    Megalooikonomou, Vasileios; Pokrajac, Dragoljub; Lazarevic, Aleksandar; Obradovic, Zoran

    2002-03-01

    We propose partitioning-based methods to facilitate the classification of 3-D binary image data sets of regions of interest (ROIs) with highly non-uniform distributions. The first method is based on recursive dynamic partitioning of a 3-D volume into a number of 3-D hyper-rectangles. For each hyper-rectangle, we consider, as a potential attribute, the number of voxels (volume elements) that belong to ROIs. A hyper-rectangle is partitioned only if the corresponding attribute does not have high discriminative power, determined by statistical tests, but it is still sufficiently large for further splitting. The final discriminative hyper-rectangles form new attributes that are further employed in neural network classification models. The second method is based on maximum likelihood employing non-spatial (k-means) and spatial DBSCAN clustering algorithms to estimate the parameters of the underlying distributions. The proposed methods were experimentally evaluated on mixtures of Gaussian distributions, on realistic lesion-deficit data generated by a simulator conforming to a clinical study, and on synthetic fractal data. Both proposed methods have provided good classification on Gaussian mixtures and on realistic data. However, the experimental results on fractal data indicated that the clustering-based methods were only slightly better than random guess, while the recursive partitioning provided significantly better classification accuracy.

  12. Physically based analysis of deformations in 3D images

    Science.gov (United States)

    Nastar, Chahab; Ayache, Nicholas

    1993-06-01

    We present a physically based deformable model which can be used to track and analyze the non-rigid motion of dynamic structures in time sequences of 2-D or 3-D medical images. The model considers an object undergoing an elastic deformation as a set of masses linked by springs, where the natural lengths of the springs are set equal to zero and are replaced by a set of constant equilibrium forces, which characterize the shape of the elastic structure in the absence of external forces. This model has the extremely nice property of yielding dynamic equations which are linear and decoupled for each coordinate, whatever the amplitude of the deformation. It provides reduced algorithmic complexity and a sound framework for modal analysis, which allows a compact representation of a general deformation by a reduced number of parameters. The power of the approach to segment, track, and analyze 2-D and 3-D images is demonstrated by a set of experimental results on various complex medical images.
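
    The decoupling property can be made concrete with a small sketch: for zero-natural-length springs the elastic force is linear, f = -K x, with the same stiffness matrix K acting independently on the x, y and z coordinate columns. The explicit integrator, damping and mass values below are illustrative, not taken from the paper.

```python
import numpy as np

def stiffness_matrix(n_nodes, springs):
    """springs: list of (i, j, k) with stiffness k and zero natural length.
    The resulting elastic force is linear, f = -K @ x, applied separately
    (and identically) to the x, y and z coordinate columns."""
    K = np.zeros((n_nodes, n_nodes))
    for i, j, k in springs:
        K[i, i] += k
        K[j, j] += k
        K[i, j] -= k
        K[j, i] -= k
    return K

def step(positions, velocities, K, f_eq, mass=1.0, damping=0.1, dt=1e-3):
    """One explicit Euler step of m x'' + c x' + K x = f_eq, run independently
    on each coordinate column of `positions` (shape (n_nodes, 3))."""
    accel = (f_eq - K @ positions - damping * velocities) / mass
    velocities = velocities + dt * accel
    positions = positions + dt * velocities
    return positions, velocities
```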

  13. Ultra-realistic 3-D imaging based on colour holography

    Science.gov (United States)

    Bjelkhagen, H. I.

    2013-02-01

    A review of recent progress in colour holography is provided, with new applications. Colour holography recording techniques in silver-halide emulsions are discussed. Both analogue colour holograms, mainly of the Denisyuk type, and digitally printed colour holograms are described, along with their recent improvements. An alternative to silver-halide materials is the panchromatic photopolymer materials, such as the DuPont and Bayer photopolymers, which are also covered. The light sources used to illuminate the recorded holograms are very important for obtaining ultra-realistic 3-D images. In particular, the new light sources based on RGB LEDs are described. They show improved image quality over today's commonly used halogen lights. Recent work in colour holography by holographers and companies in different countries around the world is included. Recording and displaying ultra-realistic 3-D images with perfect colour rendering is highly dependent on the correct recording technique using the optimal recording laser wavelengths, on the availability of improved panchromatic recording materials, and on new display light sources.

  14. Framework for 2D-3D image fusion of infrared thermography with preoperative MRI.

    Science.gov (United States)

    Hoffmann, Nico; Weidner, Florian; Urban, Peter; Meyer, Tobias; Schnabel, Christian; Radev, Yordan; Schackert, Gabriele; Petersohn, Uwe; Koch, Edmund; Gumhold, Stefan; Steiner, Gerald; Kirsch, Matthias

    2017-01-23

    Multimodal medical image fusion combines information from one or more images in order to improve the diagnostic value. While previous applications mainly focus on merging images from computed tomography, magnetic resonance imaging (MRI), ultrasound and single-photon emission computed tomography, we propose a novel approach for the registration and fusion of preoperative 3D MRI with intraoperative 2D infrared thermography. Image-guided neurosurgeries are based on neuronavigation systems, which further allow us to track the position and orientation of arbitrary cameras. Hereby, we are able to relate the 2D coordinate system of the infrared camera to the 3D MRI coordinate system. The registered image data are then combined by calibration-based image fusion in order to map our intraoperative 2D thermographic images onto the respective brain surface recovered from preoperative MRI. In extensive accuracy measurements, we found that the proposed framework achieves a mean accuracy of 2.46 mm.

  15. Gonio photometric imaging for recording of reflectance spectra of 3D objects

    Science.gov (United States)

    Miyake, Yoichi; Tsumura, Norimichi; Haneishi, Hideaki; Hayashi, Junichiro

    2002-06-01

    In recent years, there has been a demand for systems for the 3D capture of archives in museums and galleries. In visualizing a 3D object, it is important to reproduce both color and glossiness accurately. Our final goal is to construct digital archival systems for museums and an Internet or virtual museum via the World Wide Web. To achieve our goal, we have developed multi-spectral imaging systems to record and estimate the reflectance spectra of art paintings based on principal component analysis and the Wiener estimation method. In this paper, a gonio-photometric imaging method is introduced for recording 3D objects. Five-band images of the object are taken under seven different illumination angles. The set of five-band images is then analyzed on the basis of both the dichromatic reflection model and the Phong model to extract gonio-photometric information about the object. Prediction of reproduced images of the object under several illuminants and illumination angles is demonstrated, and images synthesized with a 3D wire-frame image taken by a 3D digitizer are also presented.

  16. 3D imaging of neutron tracks using confocal microscopy

    Science.gov (United States)

    Gillmore, Gavin; Wertheim, David; Flowers, Alan

    2016-04-01

    Neutron detection and neutron flux assessment are important aspects in monitoring nuclear energy production. Neutron flux measurements can also provide information on potential biological damage from exposure. In addition to the applications for neutron measurement in nuclear energy, neutron detection has been proposed as a method of enhancing neutrino detectors and cosmic ray flux has also been assessed using ground-level neutron detectors. Solid State Nuclear Track Detectors (or SSNTDs) have been used extensively to examine cosmic rays, long-lived radioactive elements, radon concentrations in buildings and the age of geological samples. Passive SSNTDs consisting of a CR-39 plastic are commonly used to measure radon because they respond to incident charged particles such as alpha particles from radon gas in air. They have a large dynamic range and a linear flux response. We have previously applied confocal microscopy to obtain 3D images of alpha particle tracks in SSNTDs from radon track monitoring (1). As a charged particle traverses through the polymer it creates an ionisation trail along its path. The trail or track is normally enhanced by chemical etching to better expose radiation damage, as the damaged area is more sensitive to the etchant than the bulk material. Particle tracks in CR-39 are usually assessed using 2D optical microscopy. In this study 6 detectors were examined using an Olympus OLS4100 LEXT 3D laser scanning confocal microscope (Olympus Corporation, Japan). The detectors had been etched for 2 hours 50 minutes at 85 °C in 6.25M NaOH. Post etch the plastics had been treated with a 10 minute immersion in a 2% acetic acid stop bath, followed by rinsing in deionised water. The detectors examined had been irradiated with a 2mSv neutron dose from an Am(Be) neutron source (producing roughly 20 tracks per mm2). We were able to successfully acquire 3D images of neutron tracks in the detectors studied. The range of track diameter observed was between 4

  17. Calibration for 3D imaging with a single-pixel camera

    Science.gov (United States)

    Gribben, Jeremy; Boate, Alan R.; Boukerche, Azzedine

    2017-02-01

    Traditional methods for calibrating structured light 3D imaging systems often suffer from various sources of error. By enabling our projector to both project images as well as capture them using the same optical path, we turn our DMD based projector into a dual-purpose projector and single-pixel camera (SPC). A coarse-to-fine SPC scanning technique based on coded apertures was developed to detect calibration target points with sub-pixel accuracy. Our new calibration approach shows improved depth measurement accuracy when used in structured light 3D imaging by reducing cumulative errors caused by multiple imaging paths.

  18. Probabilistic models and numerical calculation of system matrix and sensitivity in list-mode MLEM 3D reconstruction of Compton camera images.

    Science.gov (United States)

    Maxim, Voichita; Lojacono, Xavier; Hilaire, Estelle; Krimmer, Jochen; Testa, Etienne; Dauvergne, Denis; Magnin, Isabelle; Prost, Rémy

    2016-01-01

    This paper addresses the problem of evaluating the system matrix and the sensitivity for iterative reconstruction in Compton camera imaging. Proposed models and numerical calculation strategies are compared through the influence they have on the three-dimensional reconstructed images. The study attempts to address four questions. First, it proposes an analytic model for the system matrix. Second, it suggests a method for its numerical validation with Monte Carlo simulated data. Third, it compares analytical models of the sensitivity factors with Monte Carlo simulated values. Finally, it shows how the system matrix and the sensitivity calculation strategies influence the quality of the reconstructed images.
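
    For orientation, a list-mode MLEM update of the kind this study builds on can be sketched with a dense, toy-sized system matrix; real Compton-camera implementations evaluate the matrix elements on the fly from the cone geometry, which is omitted here. Variable names are illustrative.

```python
import numpy as np

def listmode_mlem(T, sensitivity, n_iter=20, eps=1e-12):
    """T: (n_events, n_voxels) system matrix (probability of recording event i
    given a decay in voxel j); sensitivity: (n_voxels,) detection sensitivity s_j.
    Returns the MLEM image estimate lambda_j."""
    n_events, n_voxels = T.shape
    lam = np.ones(n_voxels)
    for _ in range(n_iter):
        forward = T @ lam + eps               # expected contribution per event
        backproj = T.T @ (1.0 / forward)      # sum_i t_ij / forward_i
        lam = lam / (sensitivity + eps) * backproj
    return lam
```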

  19. 3D painting documentation: evaluation of conservation conditions with 3D imaging and ranging techniques

    Science.gov (United States)

    Abate, D.; Menna, F.; Remondino, F.; Gattari, M. G.

    2014-06-01

    The monitoring of paintings, both on canvas and on wooden support, is a crucial issue for the preservation and conservation of this kind of artwork. Many environmental factors (e.g. humidity, temperature, illumination, etc.), as well as bad conservation practices (e.g. wrong restorations, inappropriate locations, etc.), can compromise the material condition over time and deteriorate an artwork. The article presents an ongoing project realized by a multidisciplinary team composed of the ENEA UTICT 3D GraphLab, the 3D Optical Metrology Unit of the Bruno Kessler Foundation and the Soprintendenza per i Beni Storico Artistici ed Etnoantropologici of Bologna (Italy). The goal of the project is the multi-temporal 3D documentation and monitoring of paintings - currently in a poor state of conservation - and the provision of metrics to quantify the deformations and damages.

  20. Extracting 3D layout from a single image using global image structures.

    Science.gov (United States)

    Lou, Zhongyu; Gevers, Theo; Hu, Ninghang

    2015-10-01

    Extracting the pixel-level 3D layout from a single image is important for different applications, such as object localization and image and video categorization. Traditionally, the 3D layout is derived by solving a pixel-level classification problem. However, the image-level 3D structure can be very beneficial for extracting the pixel-level 3D layout, since it implies how pixels in the image are organized. In this paper, we propose an approach that first predicts the global image structure and then uses the global structure for fine-grained pixel-level 3D layout extraction. In particular, image features are extracted based on multiple layout templates. We then learn a discriminative model for classifying the global layout at the image level. Using latent variables, we implicitly model the sub-level semantics of the image, which enriches the expressiveness of our model. After the image-level structure is obtained, it is used as prior knowledge to infer the pixel-wise 3D layout. Experiments show that the results of our model outperform the state-of-the-art methods by 11.7% for 3D structure classification. Moreover, we show that employing the 3D structure prior information yields accurate 3D scene layout segmentation.

  1. Diattenuation of brain tissue and its impact on 3D polarized light imaging

    Science.gov (United States)

    Menzel, Miriam; Reckfort, Julia; Weigand, Daniel; Köse, Hasan; Amunts, Katrin; Axer, Markus

    2017-01-01

    3D-polarized light imaging (3D-PLI) reconstructs nerve fibers in histological brain sections by measuring their birefringence. This study investigates another effect caused by the optical anisotropy of brain tissue – diattenuation. Based on numerical and experimental studies and a complete analytical description of the optical system, the diattenuation was determined to be below 4 % in rat brain tissue. It was demonstrated that the diattenuation effect has negligible impact on the fiber orientations derived by 3D-PLI. The diattenuation signal, however, was found to highlight different anatomical structures that cannot be distinguished with current imaging techniques, which makes Diattenuation Imaging a promising extension to 3D-PLI. PMID:28717561

  2. Label free cell tracking in 3D tissue engineering constructs with high resolution imaging

    Science.gov (United States)

    Smith, W. A.; Lam, K.-P.; Dempsey, K. P.; Mazzocchi-Jones, D.; Richardson, J. B.; Yang, Y.

    2014-02-01

    Within the field of tissue engineering there is an emphasis on studying 3-D live tissue structures. Consequently, to investigate and identify cellular activities and phenotypes in a 3-D environment for all in vitro experiments, including shape, migration/proliferation and axon projection, it is necessary to adopt an optical imaging system that enables monitoring 3-D cellular activities and morphology through the thickness of the construct for an extended culture period without cell labeling. This paper describes a new 3-D tracking algorithm developed for Cell-IQ®, an automated cell imaging platform, which has been equipped with an environmental chamber optimized to enable capturing time-lapse sequences of live cell images over a long-term period without cell labeling. As an integral part of the algorithm, a novel auto-focusing procedure was developed for phase contrast microscopy equipped with 20x and 40x objectives, to provide a more accurate estimation of cell growth/trajectories by allowing 3-D voxels to be computed at high spatiotemporal resolution and cell density. A pilot study was carried out in a phantom system consisting of horizontally aligned nanofiber layers (with precise spacing between them), to mimic features well exemplified in cellular activities of neuronal growth in a 3-D environment. This was followed by detailed investigations concerning axonal projections and dendritic circuitry formation in a 3-D tissue engineering construct. Preliminary work on primary animal neuronal cells in response to chemoattractant and topographic cue within the scaffolds has produced encouraging results.
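
    The abstract does not detail the auto-focusing procedure, so the sketch below only illustrates the general kind of computation involved: a per-slice focus metric (here, variance of the Laplacian) evaluated over a z-stack, with the sharpest slice selected. It is a generic stand-in, not the authors' algorithm.

```python
import numpy as np
from scipy.ndimage import laplace

def best_focus_slice(z_stack):
    """z_stack: (n_z, H, W) array of phase-contrast images at different focal
    depths. Returns the index of the sharpest slice and all focus scores."""
    scores = np.array([laplace(plane.astype(float)).var() for plane in z_stack])
    return int(np.argmax(scores)), scores
```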

  3. In vivo 3D PIXE-micron-CT imaging of Drosophila melanogaster using a contrast agent

    Science.gov (United States)

    Matsuyama, Shigeo; Hamada, Naoki; Ishii, Keizo; Nozawa, Yuichiro; Ohkura, Satoru; Terakawa, Atsuki; Hatori, Yoshinobu; Fujiki, Kota; Fujiwara, Mitsuhiro; Toyama, Sho

    2015-04-01

    In this study, we developed a three-dimensional (3D) computed tomography (CT) in vivo imaging system for imaging small insects with micrometer resolution. The 3D CT imaging system, referred to as 3D PIXE-micron-CT (PIXEμCT), uses characteristic X-rays produced by ion microbeam bombardment of a metal target. PIXEμCT was used to observe the body organs and internal structure of a living Drosophila melanogaster. Although the organs of the thorax were clearly imaged, the digestive organs in the abdominal cavity could not be clearly discerned initially, with the exception of the rectum and the Malpighian tubule. To enhance the abdominal images, a barium sulfate powder radiocontrast agent was added. For the first time, 3D images of the ventriculus of a living D. melanogaster were obtained. Our results showed that PIXEμCT can provide in vivo 3D-CT images that reflect correctly the structure of individual living organs, which is expected to be very useful in biological research.

  4. Research on 3D Image Constructing System Based on S3C6410

    Institute of Scientific and Technical Information of China (English)

    吴迪

    2016-01-01

    This paper proposes a solution for a 3D image constructing system based on a single camera. The system hardware uses the ARM11 processor S3C6410 as its data-processing core, equipped with a CMOS image capture unit and an infrared sensor unit. An embedded Linux operating system serves as the software/hardware management, scheduling and coordination control center. 3D acceleration is realized by combining the 3D hardware accelerator built into the ARM11 processor with the OpenGL ES graphics development library, building a rapid 3D modeling system.

  5. 3-D visualization and animation technologies in anatomical imaging

    Science.gov (United States)

    McGhee, John

    2010-01-01

    This paper explores a 3-D computer artist’s approach to the creation of three-dimensional computer-generated imagery (CGI) derived from clinical scan data. Interpretation of scientific imagery, such as magnetic resonance imaging (MRI), is restricted to the eye of the trained medical practitioner in a clinical or scientific context. In the research work described here, MRI data are visualized and interpreted by a 3-D computer artist using the tools of the digital animator to navigate image complexity and widen interaction. In this process, the artefact moves across disciplines; it is no longer tethered to its diagnostic origins. It becomes an object that has visual attributes such as light, texture and composition, and a visual aesthetic of its own. The introduction of these visual attributes provides a platform for improved accessibility by a lay audience. The paper argues that this more artisan approach to clinical data visualization has a potential real-world application as a communicative tool for clinicians and patients during consultation. PMID:20002229

  6. New approach to navigation: matching sequential images to 3D terrain maps

    Science.gov (United States)

    Zhang, Tianxu; Hu, Bo; Li, Wei

    1998-03-01

    In this paper an efficient image matching algorithm is presented for use in aircraft navigation. A sequence of images, in which each two successive images partially overlap, is sensed by a monocular optical system. 3D undulation features are recovered from the image pairs and then matched against a reference undulation feature map. Finally, the aircraft position is estimated by minimizing a Hausdorff distance measure. A simulation experiment using real terrain data is reported.
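
    A minimal sketch of the final matching step, assuming the recovered undulation features and the reference map are given as 2-D point sets: the directed Hausdorff distance is evaluated over a grid of candidate offsets and the minimizer is taken as the position estimate. The grid search and names are illustrative simplifications.

```python
import numpy as np

def directed_hausdorff(A, B):
    """Max over points a in A of the distance to the nearest point in B.
    A, B: (N, 2) / (M, 2) arrays of 2-D feature coordinates."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return d.min(axis=1).max()

def best_offset(features, reference, candidate_offsets):
    """Return the candidate translation minimizing the Hausdorff measure
    between the shifted sensed features and the reference feature map."""
    costs = [directed_hausdorff(features + np.asarray(off), reference)
             for off in candidate_offsets]
    return candidate_offsets[int(np.argmin(costs))], min(costs)
```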

  7. Algorithm and engineering realization of non-scanning laser 3D imaging system

    Institute of Scientific and Technical Information of China (English)

    张颖; 司一冰; 曹昌东; 刘波; 眭晓林

    2015-01-01

    The principle and basic modules of the non-scanning laser 3D imaging system are first introduced; the system's signal processing platform is then constructed on an FPGA + DSP hardware architecture. The experimental results show that the system can successfully produce 3D pseudo-color image displays.

  8. High Resolution 3D Radar Imaging of Comet Interiors

    Science.gov (United States)

    Asphaug, E. I.; Gim, Y.; Belton, M.; Brophy, J.; Weissman, P. R.; Heggy, E.

    2012-12-01

    Knowing the interiors of comets and other primitive bodies is fundamental to our understanding of how planets formed. We have developed a Discovery-class mission formulation, Comet Radar Explorer (CORE), based on the use of previously flown planetary radar sounding techniques, with the goal of obtaining high resolution 3D images of the interior of a small primitive body. We focus on the Jupiter-Family Comets (JFCs) as these are among the most primitive bodies reachable by spacecraft. Scattered in from far beyond Neptune, they are ultimate targets of a cryogenic sample return mission according to the Decadal Survey. Other suitable targets include primitive NEOs, Main Belt Comets, and Jupiter Trojans. The approach is optimal for small icy bodies ~3-20 km diameter with spin periods faster than about 12 hours, since (a) navigation is relatively easy, (b) radar penetration is global for decameter wavelengths, and (c) repeated overlapping ground tracks are obtained. The science mission can be as short as ~1 month for a fast-rotating JFC. Bodies smaller than ~1 km can be globally imaged, but the navigation solutions are less accurate and the relative resolution is coarse. Larger comets are more interesting, but radar signal is unlikely to be reflected from depths greater than ~10 km. So, JFCs are excellent targets for a variety of reasons. We furthermore focus on the use of Solar Electric Propulsion (SEP) to rendezvous shortly after the comet's perihelion. This approach leaves us with ample power for science operations under dormant conditions beyond ~2-3 AU. This leads to a natural mission approach of distant observation, followed by closer inspection, terminated by a dedicated radar mapping orbit. Radar reflections are obtained from a polar orbit about the icy nucleus, which spins underneath. Echoes are obtained from a sounder operating at dual frequencies 5 and 15 MHz, with 1 and 10 MHz bandwidths respectively. The dense network of echoes is used to obtain global 3D

  9. 3D stacked chips from emerging processes to heterogeneous systems

    CERN Document Server

    Fettweis, Gerhard

    2016-01-01

    This book explains for readers how 3D chip stacks promise to increase the level of on-chip integration, and to design new heterogeneous semiconductor devices that combine chips of different integration technologies (incl. sensors) in a single package of the smallest possible size.  The authors focus on heterogeneous 3D integration, addressing some of the most important challenges in this emerging technology, including contactless, optics-based, and carbon-nanotube-based 3D integration, as well as signal-integrity and thermal management issues in copper-based 3D integration. Coverage also includes the 3D heterogeneous integration of power sources, photonic devices, and non-volatile memories based on new materials systems.   •Provides single-source reference to the latest research in 3D optoelectronic integration: process, devices, and systems; •Explains the use of wireless 3D integration to improve 3D IC reliability and yield; •Describes techniques for monitoring and mitigating thermal behavior in 3D I...

  10. The Acquisition System of 3D Point Cloud Based on Image with Laser

    Institute of Scientific and Technical Information of China (English)

    王震; 刘进

    2013-01-01

    A 3D point cloud acquisition system can quickly obtain the geometric information of a target object and generate a large amount of point cloud data, so that the true 3D shape of the target can be visualized on a computer. This paper presents a new method of acquiring 3D point cloud data, applied in a self-developed laser-image-based object 3D point cloud acquisition system: a calibrated laser plane maps the one-dimensional coordinates of points on the target surface, and a combined solution of single-image photogrammetric resection and these one-dimensional coordinates yields pseudo 3D coordinates of the target points. The true 3D coordinates are recovered through reverse rotation, from which a complete 3D visual model of the target object is constructed.

  11. 3D city models completion by fusing lidar and image data

    Science.gov (United States)

    Grammatikopoulos, L.; Kalisperakis, I.; Petsa, E.; Stentoumis, C.

    2015-05-01

    A fundamental step in the generation of visually detailed 3D city models is the acquisition of high-fidelity 3D data. Typical approaches employ DSM representations usually derived from Lidar (Light Detection and Ranging) airborne scanning or image-based procedures. In this contribution, we focus on the fusion of data from both these methods in order to enhance or complete them. In particular, we combine an existing Lidar and orthomosaic dataset (used as reference) with a new aerial image acquisition (including both vertical and oblique imagery) of higher resolution, which was carried out in the area of Kallithea, in Athens, Greece. In a preliminary step, a digital orthophoto and a DSM are generated from the aerial images in an arbitrary reference system, by employing a structure-from-motion and dense stereo matching framework. The image-to-Lidar registration is performed by 2D feature (SIFT and SURF) extraction and matching between the two orthophotos. The established point correspondences are assigned 3D coordinates through interpolation on the reference Lidar surface, are then backprojected onto the aerial images, and are finally matched with 2D image features located in the vicinity of the backprojected 3D points. Consequently, these points serve as Ground Control Points with appropriate weights for the final orientation and calibration of the images through a bundle adjustment solution. By these means, the aerial imagery, which is optimally aligned to the reference dataset, can be used for the generation of an enhanced and more accurately textured 3D city model.

  12. 3D X-ray imaging methods in support catheter ablations of cardiac arrhythmias.

    Science.gov (United States)

    Stárek, Zdeněk; Lehar, František; Jež, Jiří; Wolf, Jiří; Novák, Miroslav

    2014-10-01

    Cardiac arrhythmias are a very frequent illness. Pharmacotherapy is not very effective in persistent arrhythmias and carries a number of risks. Catheter ablation has become an effective and curative treatment method over the past 20 years. To support ablations of complex arrhythmias, 3D X-ray imaging of the cardiac cavities is used, most frequently the 3D reconstruction of CT images. 3D cardiac rotational angiography (3DRA) represents a modern method enabling the creation of CT-like 3D images on a standard X-ray machine equipped with special software. Its advantages lie in the possibility of obtaining images during the procedure, a decreased radiation dose and a reduced amount of contrast agent. The left atrium model is the one most frequently used for complex atrial arrhythmia ablations, particularly for atrial fibrillation. CT data allow for the creation and segmentation of 3D models of all cardiac cavities. Recently, research has demonstrated the use of 3DRA to create 3D models of other cardiac (right ventricle, left ventricle, aorta) and non-cardiac structures (oesophagus). They can be used during catheter ablation of complex arrhythmias to improve orientation during the construction of 3D electroanatomic maps, directly fused with 3D electroanatomic systems and/or fused with fluoroscopy. Intensive development in the creation and use of 3D models has taken place over the past years, and they have become routinely used during catheter ablations of arrhythmias, mainly atrial fibrillation ablation procedures. Further development may be anticipated in the future in both the creation and the use of these models.

  13. Online 3D terrain visualisation using Unity 3D game engine: A comparison of different contour intervals terrain data draped with UAV images

    Science.gov (United States)

    Hafiz Mahayudin, Mohd; Che Mat, Ruzinoor

    2016-06-01

    The main objective of this paper is to discuss the effectiveness of visualising terrain draped with Unmanned Aerial Vehicle (UAV) images generated from different contour intervals using the Unity 3D game engine in an online environment. The study area tested in this project was an oil palm plantation at Sintok, Kedah. The contour data used for this study are divided into three different intervals: 1 m, 3 m and 5 m. ArcGIS software was used to clip the contour data and the UAV image data to the same extent for the overlaying process. The Unity 3D game engine was used as the main platform for developing the system because it can be deployed to different platforms. The clipped contour data and UAV image data were processed and exported into web format using Unity 3D. The process continued by publishing them to a web server in order to compare the effectiveness of the different 3D terrain data (contour data) draped with UAV images. The effectiveness is compared based on data size, loading time (office and out-of-office hours), response time, visualisation quality, and frames per second (fps). The results suggest which contour interval is better for developing an effective online 3D terrain visualisation draped with UAV images using the Unity 3D game engine. This helps decision makers and planners in this field decide which contour interval is applicable for their tasks.

  14. 3-D Image Analysis of Fluorescent Drug Binding

    Directory of Open Access Journals (Sweden)

    M. Raquel Miquel

    2005-01-01

    Fluorescent ligands provide the means of studying receptors in whole tissues using confocal laser scanning microscopy and have advantages over antibody- or non-fluorescence-based methods. Confocal microscopy provides large volumes of images to be measured. Histogram analysis of 3-D image volumes is proposed as a method of graphically displaying large amounts of volumetric image data so that they can be quickly analyzed and compared. The fluorescent ligand BODIPY FL-prazosin (QAPB) was used in mouse aorta. Histogram analysis reports the amount of ligand-receptor binding under different conditions, and the technique is sensitive enough to detect changes in receptor availability after antagonist incubation or genetic manipulations. QAPB binding was concentration dependent, causing concentration-related rightward shifts in the histogram. In the presence of 10 μM phenoxybenzamine (a blocking agent), the QAPB (50 nM) histogram overlaps the autofluorescence curve. The histogram obtained for the α1D knockout aorta lay to the left of those of the control and α1B knockout aorta, indicating a reduction in α1D receptors. We have shown, for the first time, that it is possible to graphically display binding of a fluorescent drug to a biological tissue. Although our application is specific to adrenergic receptors, the general method could be applied to any volumetric, fluorescence-image-based assay.
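
    A minimal sketch of the histogram idea, assuming the confocal stack is available as a NumPy array; the bin settings and 12-bit range are illustrative assumptions, not the authors' protocol.

    import numpy as np

    volume = np.load("confocal_stack.npy")                             # placeholder: (slices, rows, cols) intensities
    counts, edges = np.histogram(volume, bins=256, range=(0, 4095))    # assumed 12-bit data
    frequency = counts / counts.sum()                                  # normalise so differently sized volumes compare

    # A rightward shift of `frequency` relative to a control volume indicates more ligand-receptor
    # binding; overlap with the autofluorescence histogram indicates blocked or absent binding.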

  15. Myocardial strains from 3D displacement encoded magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Kindberg Katarina

    2012-04-01

    Background: The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), makes detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods: We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results: The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and by resolving transmural strain variations. Conclusions: Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strain values agree with previously reported myocardial strains in normal human hearts.
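
    A minimal sketch of the core computation, assuming a first-order (affine) local polynomial model of the DENSE displacement field; the paper allows higher polynomial orders, and this is not the authors' code.

    import numpy as np

    def local_strain(points, displacements):
        """points: (N, 3) reference positions; displacements: (N, 3) DENSE displacements in a neighbourhood."""
        A = np.hstack([points, np.ones((points.shape[0], 1))])          # affine design matrix [x y z 1]
        coeff, *_ = np.linalg.lstsq(A, displacements, rcond=None)       # (4, 3) least-squares polynomial fit
        grad_u = coeff[:3, :].T                                         # displacement gradient du_i/dx_j
        F = np.eye(3) + grad_u                                          # deformation gradient
        return 0.5 * (F.T @ F - np.eye(3))                              # Green-Lagrange strain tensor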

  16. 2D-3D image registration in diagnostic and interventional X-Ray imaging

    NARCIS (Netherlands)

    Bom, I.M.J. van der

    2010-01-01

    Clinical procedures that are conventionally guided by 2D x-ray imaging, may benefit from the additional spatial information provided by 3D image data. For instance, guidance of minimally invasive procedures with CT or MRI data provides 3D spatial information and visualization of structures that are

  17. 3D Imaging for hand gesture recognition: Exploring the software-hardware interaction of current technologies

    Science.gov (United States)

    Periverzov, Frol; Ilieş, Horea T.

    2012-09-01

    Interaction with 3D information is one of the fundamental and most familiar tasks in virtually all areas of engineering and science. Several recent technological advances pave the way for developing hand gesture recognition capabilities available to all, which will lead to more intuitive and efficient 3D user interfaces (3DUI). These developments can unlock new levels of expression and productivity in all activities concerned with the creation and manipulation of virtual 3D shapes and, specifically, in engineering design. Building fully automated systems for tracking and interpreting hand gestures requires robust and efficient 3D imaging techniques as well as potent shape classifiers. We survey and explore current and emerging 3D imaging technologies, and focus, in particular, on those that can be used to build interfaces between the users' hands and the machine. The purpose of this paper is to categorize and highlight the relevant differences between these existing 3D imaging approaches in terms of the nature of the information provided, output data format, as well as the specific conditions under which these approaches yield reliable data. Furthermore we explore the impact of each of these approaches on the computational cost and reliability of the required image processing algorithms. Finally we highlight the main challenges and opportunities in developing natural user interfaces based on hand gestures, and conclude with some promising directions for future research.

  18. Clinical Study of 3D Imaging and 3D Printing Technique for Patient-Specific Instrumentation in Total Knee Arthroplasty.

    Science.gov (United States)

    Qiu, Bing; Liu, Fei; Tang, Bensen; Deng, Biyong; Liu, Fang; Zhu, Weimin; Zhen, Dong; Xue, Mingyuan; Zhang, Mingjiao

    2017-01-25

    Patient-specific instrumentation (PSI) was designed to improve the accuracy of preoperative planning and postoperative prosthesis positioning in total knee arthroplasty (TKA). However, a better understanding needs to be achieved due to the subtle nature of PSI systems. In this study, a 3D printing technique based on computed tomography (CT) image data has been utilized for optimal control of the surgical parameters. Two groups of TKA cases were randomly selected as the PSI group and the control group, with no significant difference in age and sex (p > 0.05). The PSI group was treated with 3D printed cutting guides whereas the control group was treated with conventional instrumentation (CI). By evaluating the proximal osteotomy amount, distal osteotomy amount, valgus angle, external rotation angle, and tibial posterior slope angle of the patients, it was found that the preoperative quantitative assessment and intraoperative changes can be controlled with PSI, whereas CI relies on experience. In terms of postoperative parameters, such as hip-knee-ankle (HKA), frontal femoral component (FFC), frontal tibial component (FTC), and lateral tibial component (LTC) angles, there is a significant improvement in achieving the desired implant position with PSI implantation compared against the control method, which indicates potential for optimal HKA, FFC, and FTC angles.

  19. A View to the Future: A Novel Approach for 3D-3D Superimposition and Quantification of Differences for Identification from Next-Generation Video Surveillance Systems.

    Science.gov (United States)

    Gibelli, Daniele; De Angelis, Danilo; Poppa, Pasquale; Sforza, Chiarella; Cattaneo, Cristina

    2017-03-01

    Techniques of 2D-3D superimposition are widely used in cases of personal identification from video surveillance systems. However, the progressive improvement of 3D image acquisition technology will enable operators to also perform 3D-3D facial superimposition. This study aims at analyzing the possible applications of 3D-3D superimposition to personal identification, although from a theoretical point of view. Twenty subjects underwent a facial 3D scan by stereophotogrammetry twice at different time periods. Scans were superimposed two by two according to nine landmarks, and the root-mean-square (RMS) value of point-to-point distances was calculated. When the two superimposed models belonged to the same individual, the RMS value was 2.10 mm, while it was 4.47 mm in mismatches, a statistically significant difference that suggests 3D-3D superimposition may prove useful in forensic practice. © 2016 American Academy of Forensic Sciences.

  20. Single-pixel 3D imaging with time-based depth resolution

    CERN Document Server

    Sun, Ming-Jie; Gibson, Graham M; Sun, Baoqing; Radwell, Neal; Lamb, Robert; Padgett, Miles J

    2016-01-01

    Time-of-flight three-dimensional imaging is an important tool for many applications, such as object recognition and remote sensing. Unlike the conventional imaging approach, which uses a pixelated detector array, single-pixel imaging based on projected patterns, such as Hadamard patterns, acquires information through a chosen sampling basis. Here we show a modified single-pixel camera using a pulsed illumination source and a high-speed photodiode, capable of reconstructing 128x128 pixel resolution 3D scenes to an accuracy of ~3 mm at a range of ~5 m. Furthermore, we demonstrate continuous real-time 3D video with a frame rate up to 12 Hz. The simplicity of the system hardware could enable low-cost 3D imaging devices for precision ranging at wavelengths beyond the visible spectrum.
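
    An illustrative sketch (not the authors' code) of the Hadamard single-pixel reconstruction principle: when the full set of patterns is projected, the image follows from the scaled transpose of the measurement matrix. In practice each pattern is split into positive and negative parts because a projector cannot display negative intensities.

    import numpy as np
    from scipy.linalg import hadamard

    n = 32                                   # 32x32 image here for brevity; the paper uses 128x128
    H = hadamard(n * n)                      # +/-1 Hadamard sampling basis, one pattern per row
    scene = np.random.rand(n * n)            # placeholder scene, flattened
    signal = H @ scene                       # one photodiode measurement per projected pattern
    image = ((H.T @ signal) / (n * n)).reshape(n, n)   # H @ H.T = (n*n) * I, so this inverts exactly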

  1. 3D imaging of semiconductor components by discrete laminography

    Energy Technology Data Exchange (ETDEWEB)

    Batenburg, K. J. [Centrum Wiskunde and Informatica, P.O. Box 94079, NL-1090 GB Amsterdam, The Netherlands and iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Palenstijn, W. J.; Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2014-06-19

    X-ray laminography is a powerful technique for quality control of semiconductor components. Despite the advantages of nondestructive 3D imaging over 2D techniques based on sectioning, the acquisition time is still a major obstacle for practical use of the technique. In this paper, we consider the application of Discrete Tomography to laminography data, which can potentially reduce the scanning time while still maintaining a high reconstruction quality. By incorporating prior knowledge in the reconstruction algorithm about the materials present in the scanned object, far more accurate reconstructions can be obtained from the same measured data compared to classical reconstruction methods. We present a series of simulation experiments that illustrate the potential of the approach.

  2. Automatic airline baggage counting using 3D image segmentation

    Science.gov (United States)

    Yin, Deyu; Gao, Qingji; Luo, Qijun

    2017-06-01

    The number of bags needs to be checked automatically during baggage self-check-in. A fast airline baggage counting method is proposed in this paper, using image segmentation of a height map projected from the scanned baggage 3D point cloud. There is a height drop at the actual edge of each bag, so the edges can be detected by an edge detection operator. Closed edge chains are then formed from the edge lines, which are linked by morphological processing. Finally, the number of connected regions segmented by the closed chains is taken as the baggage count. A multi-bag experiment performed under different placement modes proves the validity of the method.
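
    A minimal sketch of the counting pipeline, assuming the height map has already been projected from the point cloud; the gradient threshold and structuring-element size are illustrative assumptions, not the paper's values.

    import numpy as np
    from scipy import ndimage

    height_map = np.load("height_map.npy")                  # placeholder: 2D height map in millimetres

    d_row, d_col = np.gradient(height_map.astype(float))
    edges = np.hypot(d_row, d_col) > 30.0                   # height drops at bag edges (assumed threshold)

    edges = ndimage.binary_closing(edges, structure=np.ones((5, 5)))   # link broken edge pixels into chains
    interior = ndimage.binary_fill_holes(edges) & ~edges               # regions enclosed by closed chains
    labels, n_bags = ndimage.label(interior)
    print("baggage count:", n_bags)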

  3. Unsupervised fuzzy segmentation of 3D magnetic resonance brain images

    Science.gov (United States)

    Velthuizen, Robert P.; Hall, Lawrence O.; Clarke, Laurence P.; Bensaid, Amine M.; Arrington, J. A.; Silbiger, Martin L.

    1993-07-01

    Unsupervised fuzzy methods are proposed for segmentation of 3D Magnetic Resonance images of the brain. Fuzzy c-means (FCM) has shown promising results for segmentation of single slices. FCM has been investigated for volume segmentations, both by combining results of single slices and by segmenting the full volume. Different strategies and initializations have been tried. In particular, two approaches have been used: (1) a method by which, iteratively, the furthest sample is split off to form a new cluster center, and (2) the traditional FCM in which the membership grade matrix is initialized in some way. Results have been compared with volume segmentations by k-means and with two supervised methods, k-nearest neighbors and region growing. Results of individual segmentations are presented as well as comparisons on the application of the different methods to a number of tumor patient data sets.
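
    For reference, a compact fuzzy c-means iteration in the spirit of the methods compared above; the initialisation and stopping rule here are simplified assumptions rather than the strategies evaluated in the paper.

    import numpy as np

    def fcm(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
        """X: (N, d) feature vectors (e.g. voxel intensities); c: number of clusters; m: fuzzifier."""
        rng = np.random.default_rng(seed)
        U = rng.random((X.shape[0], c))
        U /= U.sum(axis=1, keepdims=True)                     # memberships sum to 1 per sample
        for _ in range(iters):
            Um = U ** m
            centers = (Um.T @ X) / Um.sum(axis=0)[:, None]    # weighted cluster centers, (c, d)
            dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
            inv = dist ** (-2.0 / (m - 1.0))
            new_U = inv / inv.sum(axis=1, keepdims=True)      # standard FCM membership update
            if np.abs(new_U - U).max() < tol:
                return new_U, centers
            U = new_U
        return U, centers

    # Example: cluster a 3D MR volume into three tissue classes by intensity alone
    # labels = fcm(volume.reshape(-1, 1).astype(float), c=3)[0].argmax(axis=1).reshape(volume.shape)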

  4. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

    One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time which is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
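
    The scalability bound referred to above follows directly from Amdahl's law; a quick numeric sketch, where the parallel fractions are illustrative values rather than measurements from the platform:

    def amdahl_speedup(parallel_fraction, cores):
        """Upper bound on speedup when only `parallel_fraction` of the work parallelises."""
        return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

    for p in (0.90, 0.95, 0.99):
        print(f"parallel fraction {p:.2f}: max speedup on 12 cores = {amdahl_speedup(p, 12):.2f}x")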

  5. Multi-camera system for 3D forensic documentation.

    Science.gov (United States)

    Leipner, Anja; Baumeister, Rilana; Thali, Michael J; Braun, Marcel; Dobler, Erika; Ebert, Lars C

    2016-04-01

    Three-dimensional (3D) surface documentation is well established in forensic documentation. The most common systems include laser scanners and surface scanners with optical 3D cameras. An additional documentation tool is photogrammetry. This article introduces the botscan© (botspot GmbH, Berlin, Germany) multi-camera system for the forensic markerless photogrammetric whole body 3D surface documentation of living persons in standing posture. We used the botscan© multi-camera system to document a person in 360°. The system has a modular design and works with 64 digital single-lens reflex (DSLR) cameras. The cameras were evenly distributed in a circular chamber. We generated 3D models from the photographs using the PhotoScan© (Agisoft LLC, St. Petersburg, Russia) software. Our results revealed that the botscan© and PhotoScan© produced 360° 3D models with detailed textures. The 3D models had very accurate geometries and could be scaled to full size with the help of scale bars. In conclusion, this multi-camera system provided a rapid and simple method for documenting the whole body of a person to generate 3D data with Photoscan©.

  6. A real-time ultrasonic field mapping system using a Fabry Pérot single pixel camera for 3D photoacoustic imaging

    Science.gov (United States)

    Huynh, Nam; Zhang, Edward; Betcke, Marta; Arridge, Simon R.; Beard, Paul; Cox, Ben

    2015-03-01

    A system for dynamic mapping of broadband ultrasound fields has been designed, with high frame rate photoacoustic imaging in mind. A Fabry-Pérot interferometric ultrasound sensor was interrogated using a coherent light single-pixel camera. Scrambled Hadamard measurement patterns were used to sample the acoustic field at the sensor, and either a fast Hadamard transform or a compressed sensing reconstruction algorithm was used to recover the acoustic pressure data. Frame rates of 80 Hz were achieved for 32x32 images even though no specialist hardware was used for the on-the-fly reconstructions. The ability of the system to obtain photoacoustic images with data compression ratios as low as 10% was also demonstrated.

  7. GPU-accelerated denoising of 3D magnetic resonance images

    Energy Technology Data Exchange (ETDEWEB)

    Howison, Mark; Wes Bethel, E.

    2014-05-29

    The raw computational power of GPU accelerators enables fast denoising of 3D MR images using bilateral filtering, anisotropic diffusion, and non-local means. In practice, applying these filtering operations requires setting multiple parameters. This study was designed to provide better guidance to practitioners for choosing the most appropriate parameters by answering two questions: what parameters yield the best denoising results in practice? And what tuning is necessary to achieve optimal performance on a modern GPU? To answer the first question, we use two different metrics, mean squared error (MSE) and mean structural similarity (MSSIM), to compare denoising quality against a reference image. Surprisingly, the best improvement in structural similarity with the bilateral filter is achieved with a small stencil size that lies within the range of real-time execution on an NVIDIA Tesla M2050 GPU. Moreover, inappropriate choices for parameters, especially scaling parameters, can yield very poor denoising performance. To answer the second question, we perform an autotuning study to empirically determine optimal memory tiling on the GPU. The variation in these results suggests that such tuning is an essential step in achieving real-time performance. These results have important implications for the real-time application of denoising to MR images in clinical settings that require fast turn-around times.
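
    The two quality metrics mentioned above can be reproduced with scikit-image as in the sketch below; the image file names are placeholders.

    import numpy as np
    from skimage.metrics import mean_squared_error, structural_similarity

    reference = np.load("reference_image.npy")    # low-noise reference image
    denoised = np.load("denoised_image.npy")      # output of the GPU filter under test

    mse = mean_squared_error(reference, denoised)
    mssim = structural_similarity(reference, denoised,
                                  data_range=float(reference.max() - reference.min()))
    print(f"MSE = {mse:.4f}   MSSIM = {mssim:.4f}")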

  8. Spectral ladar: towards active 3D multispectral imaging

    Science.gov (United States)

    Powers, Michael A.; Davis, Christopher C.

    2010-04-01

    In this paper we present our Spectral LADAR concept, an augmented implementation of traditional LADAR. This sensor uses a polychromatic source to obtain range-resolved 3D spectral images which are used to identify objects based on combined spatial and spectral features, resolving positions in three dimensions and up to hundreds of meters in distance. We report on a proof-of-concept Spectral LADAR demonstrator that generates spectral point clouds from static scenes. The demonstrator transmits nanosecond supercontinuum pulses generated in a photonic crystal fiber. Currently we use a rapidly tuned receiver with a high-speed InGaAs APD for 25 spectral bands with the future expectation of implementing a linear APD array spectrograph. Each spectral band is independently range resolved with multiple return pulse recognition. This is a critical feature, enabling simultaneous spectral and spatial unmixing of partially obscured objects when not achievable using image fusion of monochromatic LADAR and passive spectral imagers. This enables higher identification confidence in highly cluttered environments such as forested or urban areas (e.g. vehicles behind camouflage or foliage). These environments present challenges for situational awareness and robotic perception which can benefit from the unique attributes of Spectral LADAR. Results from this demonstrator unit are presented for scenes typical of military operations and characterize the operation of the device. The results are discussed here in the context of autonomous vehicle navigation and target recognition.

  9. Silhouette-based approach of 3D image reconstruction for automated image acquisition using robotic arm

    Science.gov (United States)

    Azhar, N.; Saad, W. H. M.; Manap, N. A.; Saad, N. M.; Syafeeza, A. R.

    2017-06-01

    This study presents an approach to 3D image reconstruction using an autonomous robotic arm for the image acquisition process. A low-cost automated imaging platform is created using a pair of G15 servo motors connected in series to an Arduino UNO as the main microcontroller. Two sets of sequential images were obtained using different projection angles of the camera. The silhouette-based approach is used in this study for 3D reconstruction from the sequential images captured from several different angles of the object. In addition, an analysis of the effect of different numbers of sequential images on the accuracy of the 3D model reconstruction was carried out with a fixed projection angle of the camera. The factors affecting the 3D reconstruction are discussed, and the overall result of the analysis is summarized for the prototype imaging platform.
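
    A hedged sketch of the silhouette (visual hull) idea: carve a voxel grid by keeping only voxels whose projection lies inside every silhouette. The grid size and the availability of 3x4 projection matrices for the calibrated turntable views are assumptions of this illustration.

    import numpy as np

    def visual_hull(silhouettes, projections, grid_size=64, extent=1.0):
        """silhouettes: list of HxW boolean masks; projections: matching list of 3x4 camera matrices."""
        lin = np.linspace(-extent, extent, grid_size)
        X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")
        voxels = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)   # homogeneous coordinates
        occupied = np.ones(len(voxels), dtype=bool)
        for mask, P in zip(silhouettes, projections):
            uvw = voxels @ P.T                                       # project every voxel into this view
            u = (uvw[:, 0] / uvw[:, 2]).round().astype(int)
            v = (uvw[:, 1] / uvw[:, 2]).round().astype(int)
            inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
            keep = np.zeros(len(voxels), dtype=bool)
            keep[inside] = mask[v[inside], u[inside]]
            occupied &= keep                                         # a voxel must lie inside all silhouettes
        return occupied.reshape(grid_size, grid_size, grid_size)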

  10. High resolution 3D imaging of synchrotron generated microbeams

    Energy Technology Data Exchange (ETDEWEB)

    Gagliardi, Frank M., E-mail: frank.gagliardi@wbrc.org.au [Alfred Health Radiation Oncology, The Alfred, Melbourne, Victoria 3004, Australia and School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia); Cornelius, Iwan [Imaging and Medical Beamline, Australian Synchrotron, Clayton, Victoria 3168, Australia and Centre for Medical Radiation Physics, University of Wollongong, Wollongong, New South Wales 2500 (Australia); Blencowe, Anton [Division of Health Sciences, School of Pharmacy and Medical Sciences, The University of South Australia, Adelaide, South Australia 5000, Australia and Division of Information Technology, Engineering and the Environment, Mawson Institute, University of South Australia, Mawson Lakes, South Australia 5095 (Australia); Franich, Rick D. [School of Applied Sciences and Health Innovations Research Institute, RMIT University, Melbourne, Victoria 3000 (Australia); Geso, Moshi [School of Medical Sciences, RMIT University, Bundoora, Victoria 3083 (Australia)

    2015-12-15

    Purpose: Microbeam radiation therapy (MRT) techniques are under investigation at synchrotrons worldwide. Favourable outcomes from animal and cell culture studies have proven the efficacy of MRT. The aim of MRT researchers currently is to progress to human clinical trials in the near future. The purpose of this study was to demonstrate the high resolution and 3D imaging of synchrotron generated microbeams in PRESAGE® dosimeters using laser fluorescence confocal microscopy. Methods: Water equivalent PRESAGE® dosimeters were fabricated and irradiated with microbeams on the Imaging and Medical Beamline at the Australian Synchrotron. Microbeam arrays comprised of microbeams 25–50 μm wide with 200 or 400 μm peak-to-peak spacing were delivered as single, cross-fire, multidirectional, and interspersed arrays. Imaging of the dosimeters was performed using a NIKON A1 laser fluorescence confocal microscope. Results: The spatial fractionation of the MRT beams was clearly visible in 2D and up to 9 mm in depth. Individual microbeams were easily resolved with the full width at half maximum of microbeams measured on images with resolutions of as low as 0.09 μm/pixel. Profiles obtained demonstrated the change of the peak-to-valley dose ratio for interspersed MRT microbeam arrays and subtle variations in the sample positioning by the sample stage goniometer were measured. Conclusions: Laser fluorescence confocal microscopy of MRT irradiated PRESAGE® dosimeters has been validated in this study as a high resolution imaging tool for the independent spatial and geometrical verification of MRT beam delivery.

  11. A combined system for 3D printing cybersecurity

    Science.gov (United States)

    Straub, Jeremy

    2017-06-01

    Previous work has discussed the impact of cybersecurity breaches on 3D printed objects. Multiple attack types that could weaken objects, make them unsuitable for certain applications and even create safety hazards have been presented. This paper considers a visible light sensing-based verification system's efficacy as a means of thwarting cybersecurity threats to 3D printing. This system detects discrepancies between expected and actual printed objects (based on an independent pristine CAD model). Whether reliance on an independent CAD model is appropriate is also considered. The future of 3D printing is projected and the importance of cybersecurity in this future is discussed.

  12. NoSQL Based 3D City Model Management System

    Science.gov (United States)

    Mao, B.; Harrie, L.; Cao, J.; Wu, Z.; Shen, J.

    2014-04-01

    To manage increasingly complicated 3D city models, a framework based on a NoSQL database is proposed in this paper. The framework supports import and export of 3D city models according to international standards such as CityGML, KML/COLLADA and X3D. We also suggest and implement 3D model analysis and visualization in the framework. For city model analysis, 3D geometry data and semantic information (such as name, height, area, price and so on) are stored and processed separately. We use a Map-Reduce method to deal with the 3D geometry data, since it is more complex, while the semantic analysis is mainly based on database query operations. For visualization, a multiple-representation structure for 3D cities, CityTree, is implemented within the framework to support dynamic LODs based on the user viewpoint. The proposed framework is also easily extensible and supports geo-indexes to speed up querying. Our experimental results show that the proposed 3D city management system can efficiently fulfil the analysis and visualization requirements.

  13. Subpixel Target Location Techniques for 3-D Coordinate Measuring System

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Close-range photogrammetric 3-D coordinate measurement has in recent years become a new measuring technology in the field of coordinate measuring machines (CMM). In this method, targets are usually placed on the measured object and images of the targets are taken to determine the object coordinates. The subpixel location of the target image plays an important role in the high-accuracy 3-D coordinate measuring procedure. In this paper, some subpixel location methods are reviewed and some factors which affect the location precision are analyzed. We then propose a bilinear interpolation centroid algorithm. Experiments have shown that this algorithm can improve the accuracy of the target centroid by increasing the number of available pixels.
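
    An illustrative sketch of the bilinear interpolation centroid idea: upsample the target window bilinearly so that more pixels contribute, then take the intensity-weighted centroid. The scale factor is an assumption, and half-pixel offsets introduced by resizing are ignored in this sketch.

    import numpy as np
    import cv2

    def subpixel_centroid(window, scale=8):
        """window: small grayscale patch containing a single target blob."""
        up = cv2.resize(window.astype(np.float32), None, fx=scale, fy=scale,
                        interpolation=cv2.INTER_LINEAR)        # bilinear upsampling
        ys, xs = np.mgrid[0:up.shape[0], 0:up.shape[1]]
        total = up.sum()
        cx = (xs * up).sum() / total / scale                   # centroid back in original pixel units
        cy = (ys * up).sum() / total / scale
        return cx, cy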

  14. Thermal Protection System Materials (TPSM): 3D MAT Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The 3D MAT Project seeks to design and develop a game changing Woven Thermal Protection System (TPS) technology tailored to meet the needs of the Orion Multi-Purpose...

  15. Quasi-3D electron cyclotron emission imaging on J-TEXT

    Science.gov (United States)

    Zhao, Zhenling; Zhu, Yilun; Tong, Li; Xie, Jinlin; Liu, Wandong; Yu, Changxuan; Yang, Zhoujun; Zhuang, Ge; Luhmann, N. C., Jr.; Domier, C. W.

    2017-09-01

    Electron cyclotron emission imaging (ECEI) can provide measurements of 2D electron temperature fluctuations with high temporal and spatial resolution in magnetic fusion plasma devices. Two ECEI systems located in different toroidal ports with a 67.5 degree separation have been implemented on J-TEXT to study the 3D structure of magnetohydrodynamic (MHD) instabilities. Each system consists of 12 (vertical) × 16 (horizontal) = 192 channels, and the image of the 2nd harmonic X-mode electron cyclotron emission can be captured continuously in the core plasma region. The field curvature adjustment lens concept is developed to control the imaging plane for the receiving optics of the ECEI systems. The field curvature of the image can be controlled to match the emission layer. Consequently, a quasi-3D image of the MHD instability in the core of the plasma has been achieved.

  16. A fast 3D reconstruction system with a low-cost camera accessory.

    Science.gov (United States)

    Zhang, Yiwei; Gibson, Graham M; Hay, Rebecca; Bowman, Richard W; Padgett, Miles J; Edgar, Matthew P

    2015-06-09

    Photometric stereo is a three dimensional (3D) imaging technique that uses multiple 2D images, obtained from a fixed camera perspective, with different illumination directions. Compared to other 3D imaging methods such as geometry modeling and 3D-scanning, it comes with a number of advantages, such as having a simple and efficient reconstruction routine. In this work, we describe a low-cost accessory to a commercial digital single-lens reflex (DSLR) camera system allowing fast reconstruction of 3D objects using photometric stereo. The accessory consists of four white LED lights fixed to the lens of a commercial DSLR camera and a USB programmable controller board to sequentially control the illumination. 3D images are derived for different objects with varying geometric complexity and results are presented, showing a typical height error of <3 mm for a 50 mm sized object.
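
    A minimal photometric-stereo sketch in the spirit of the four-LED accessory: with known light directions, the albedo-scaled surface normal at each pixel follows from a least-squares solve. The light directions and image loading are left as assumptions; this is not the authors' reconstruction routine.

    import numpy as np

    def photometric_stereo(images, light_dirs):
        """images: (k, H, W) grayscale captures; light_dirs: (k, 3) unit illumination vectors."""
        k, H, W = images.shape
        I = images.reshape(k, -1).astype(float)                 # per-pixel intensity under each light
        G, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)      # (3, H*W): albedo-scaled normals
        albedo = np.linalg.norm(G, axis=0)
        normals = G / (albedo + 1e-12)
        return normals.reshape(3, H, W), albedo.reshape(H, W)

    # Integrating the recovered normal field (e.g. with a Poisson solver) yields the height map
    # whose typical error of a few millimetres is reported above.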

  17. 3D FACE RECOGNITION FROM RANGE IMAGES BASED ON CURVATURE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Suranjan Ganguly

    2014-02-01

    In this paper, we present a novel approach to three-dimensional face recognition by extracting curvature maps from range images. There are four types of curvature maps: Gaussian, Mean, Maximum and Minimum curvature maps. These curvature maps are used as features for 3D face recognition. The dimension of these feature vectors is reduced using the Singular Value Decomposition (SVD) technique. From the three computed SVD components, the non-negative values of the 'S' part of the SVD are ranked and used as the feature vector. In the proposed method, two pair-wise curvature combinations are computed: the Mean and Maximum curvature pair, and the Gaussian and Mean curvature pair. Their results are compared to find the better recognition rate. The automated 3D face recognition system is evaluated in different scenarios: frontal pose with expression and illumination variation, frontal faces along with registered faces, registered faces only, and registered faces with pose orientation across the X, Y and Z axes. The 3D face images used for this research work are taken from the FRAV3D database. Pose-varied 3D facial images are registered to the frontal pose by applying a one-to-all registration technique; curvature mapping is then applied to the registered face images along with the remaining frontal face images. For classification and recognition, a five-layer feed-forward back-propagation neural network classifier is used, and the corresponding results are discussed in section 4.
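
    A hedged sketch of the curvature-map computation for a range image z = f(x, y), using the standard Monge-patch formulas; the paper's own preprocessing and FRAV3D-specific handling are not reproduced here.

    import numpy as np

    def curvature_maps(z):
        """z: 2D range (depth) image. Returns Gaussian, Mean, Maximum and Minimum curvature maps."""
        zy, zx = np.gradient(z.astype(float))          # first derivatives (rows ~ y, columns ~ x)
        zxy, zxx = np.gradient(zx)
        zyy, _ = np.gradient(zy)
        denom = 1.0 + zx**2 + zy**2
        K = (zxx * zyy - zxy**2) / denom**2                                                   # Gaussian
        Hm = ((1 + zy**2) * zxx - 2 * zx * zy * zxy + (1 + zx**2) * zyy) / (2 * denom**1.5)   # Mean
        root = np.sqrt(np.maximum(Hm**2 - K, 0.0))
        return K, Hm, Hm + root, Hm - root              # maximum and minimum principal curvatures

    # The singular values of each map, e.g. np.linalg.svd(K, compute_uv=False), can then serve
    # as the reduced feature vector described above.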

  18. 3D terahertz synthetic aperture imaging of objects with arbitrary boundaries

    Science.gov (United States)

    Kniffin, G. P.; Zurk, L. M.; Schecklman, S.; Henry, S. C.

    2013-09-01

    Terahertz (THz) imaging has shown promise for nondestructive evaluation (NDE) of a wide variety of manufactured products including integrated circuits and pharmaceutical tablets. Its ability to penetrate many non-polar dielectrics allows tomographic imaging of an object's 3D structure. In NDE applications, the material properties of the target(s) and background media are often well-known a priori and the objective is to identify the presence and/or 3D location of structures or defects within. The authors' earlier work demonstrated the ability to produce accurate 3D images of conductive targets embedded within a high-density polyethylene (HDPE) background. That work assumed a priori knowledge of the refractive index of the HDPE as well as the physical location of the planar air-HDPE boundary. However, many objects of interest exhibit non-planar interfaces, such as varying degrees of curvature over the extent of the surface. Such irregular boundaries introduce refraction effects and other artifacts that distort 3D tomographic images. In this work, two reconstruction techniques are applied to THz synthetic aperture tomography: a holographic reconstruction method that accurately detects the 3D location of an object's irregular boundaries, and a split-step Fourier algorithm that corrects the artifacts introduced by the surface irregularities. The methods are demonstrated with measurements from a THz time-domain imaging system.

  19. JAtlasView: a Java atlas-viewer for browsing biomedical 3D images and atlases

    Directory of Open Access Journals (Sweden)

    Scott Mark

    2005-03-01

    Background: Many three-dimensional (3D) images are routinely collected in biomedical research, and a number of digital atlases with associated anatomical and other information have been published. A number of tools are available for viewing these data, ranging from commercial visualization packages to freely available, typically system-architecture-dependent, solutions. Here we discuss an atlas viewer implemented to run on any workstation using the architecture-neutral Java programming language. Results: We report the development of a freely available Java based viewer for 3D image data, describe the structure and functionality of the viewer and how automated tools can be developed to manage the Java Native Interface code. The viewer allows arbitrary re-sectioning of the data and interactive browsing through the volume. With appropriately formatted data, for example as provided for the Electronic Atlas of the Developing Human Brain, a 3D surface view and anatomical browsing is available. The interface is developed in Java with Java3D providing the 3D rendering. For efficiency the image data is manipulated using the Woolz image-processing library provided as a dynamically linked module for each machine architecture. Conclusion: We conclude that Java provides an appropriate environment for efficient development of these tools, and techniques exist to allow computationally efficient image-processing libraries to be integrated relatively easily.

  20. The 3D scanner prototype utilize object profile imaging using line laser and octave software

    Science.gov (United States)

    Nurdini, Mugi; Manunggal, Trikarsa Tirtadwipa; Samsi, Agus

    2016-11-01

    A three-dimensional scanner, or 3D scanner, is a device that reconstructs a real object into digital form on a computer. 3D scanning is a technology still under development, especially in developed countries, where current 3D scanner devices are advanced versions with very expensive prices. This study is essentially a simple prototype of a 3D scanner with a very low investment cost. The 3D scanner prototype consists of a webcam, a rotating desk system controlled by a stepper motor and an Arduino UNO, and a line laser. The research is limited to objects with the same radius from their center point (object pivot). Scanning is performed by imaging the object profile with the line laser, which is then captured by the camera and processed by a computer (image processing) using Octave software. On each image acquisition, the scanned object on the rotating desk is rotated by a certain angle, so that over one full turn multiple images covering all sides of the object are obtained. The profiles of all the images are then extracted in order to obtain the digital object dimensions. The digital dimensions are calibrated against a length standard, a gauge block. The overall dimensions are then digitally reconstructed into a three-dimensional object. Validation of the reconstruction against the original object dimensions is expressed as a percentage error. Based on the results of the data validation, the horizontal dimension error is about 5% to 23% and the vertical dimension error is about +/- 3%.
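
    A simplified sketch of the per-frame profile extraction: locate the brightest laser column in each image row and convert the offset to millimetres using the gauge-block calibration. The channel choice, threshold and scale factor are assumptions of this illustration, not the prototype's calibration.

    import numpy as np
    import cv2

    PIXELS_PER_MM = 4.0                                    # assumed value from gauge-block calibration

    def profile_from_frame(path):
        """Return the laser-line column (in pixels) for each image row, NaN where no line is seen."""
        frame = cv2.imread(path)                           # BGR image of the illuminated object
        red = frame[:, :, 2].astype(float)                 # the line laser dominates the red channel
        cols = np.argmax(red, axis=1).astype(float)        # brightest column per row
        valid = red.max(axis=1) > 50                       # ignore rows without a laser return
        return np.where(valid, cols, np.nan)

    # Repeating this for every rotation step and converting column offsets to radii
    # (offset / PIXELS_PER_MM) yields the point cloud that is reconstructed into the 3D object.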

  1. Integration of Video Images and CAD Wireframes for 3d Object Localization

    Science.gov (United States)

    Persad, R. A.; Armenakis, C.; Sohn, G.

    2012-07-01

    The tracking of moving objects from single images has received widespread attention in photogrammetric computer vision and considered to be at a state of maturity. This paper presents a model-driven solution for localizing moving objects detected from monocular, rotating and zooming video images in a 3D reference frame. To realize such a system, the recovery of 2D to 3D projection parameters is essential. Automatic estimation of these parameters is critical, particularly for pan-tilt-zoom (PTZ) surveillance cameras where parameters change spontaneously upon camera motion. In this work, an algorithm for automated parameter retrieval is proposed. This is achieved by matching linear features between incoming images from video sequences and simple geometric 3D CAD wireframe models of man-made structures. The feature matching schema uses a hypothesis-verify optimization framework referred to as LR-RANSAC. This novel method improves the computational efficiency of the matching process in comparison to the standard RANSAC robust estimator. To demonstrate the applicability and performance of the method, experiments have been performed on indoor and outdoor image sequences under varying conditions with lighting changes and occlusions. Reliability of the matching algorithm has been analyzed by comparing the automatically determined camera parameters with ground truth (GT). Dependability of the retrieved parameters for 3D localization has also been assessed by comparing the difference between 3D positions of moving image objects estimated using the LR-RANSAC-derived parameters and those computed using GT parameters.

  2. Digital holographic microscopy for imaging growth and treatment response in 3D tumor models

    Science.gov (United States)

    Li, Yuyu; Petrovic, Ljubica; Celli, Jonathan P.; Yelleswarapu, Chandra S.

    2014-03-01

    While three-dimensional tumor models have emerged as valuable tools in cancer research, the ability to longitudinally visualize the 3D tumor architecture restored by these systems is limited with microscopy techniques that provide only qualitative insight into sample depth, or which require terminal fixation for depth-resolved 3D imaging. Here we report the use of digital holographic microscopy (DHM) as a viable microscopy approach for quantitative, non-destructive longitudinal imaging of in vitro 3D tumor models. Following established methods we prepared 3D cultures of pancreatic cancer cells in overlay geometry on extracellular matrix beds and obtained digital holograms at multiple timepoints throughout the duration of growth. The holograms were digitally processed and the unwrapped phase images were obtained to quantify nodule thickness over time under normal growth, and in cultures subject to chemotherapy treatment. In this manner total nodule volumes are rapidly estimated and demonstrated here to show contrasting time dependent changes during growth and in response to treatment. This work suggests the utility of DHM to quantify changes in 3D structure over time and suggests the further development of this approach for time-lapse monitoring of 3D morphological changes during growth and in response to treatment that would otherwise be impractical to visualize.

  3. ENHANCING CLOSE-UP IMAGE BASED 3D DIGITISATION WITH FOCUS STACKING

    Directory of Open Access Journals (Sweden)

    G. Kontogianni

    2017-08-01

    The 3D digitisation of small artefacts is a very complicated procedure because of their complex morphological feature structures, concavities, rich decorations, high frequency of colour changes in texture, increased accuracy requirements, etc. Image-based methods present a low-cost, fast and effective alternative, because laser scanning in general does not meet the accuracy requirements. A shallow Depth of Field (DoF) affects the image-based 3D reconstruction and especially the point matching procedure. This is visible not only in the total number of corresponding points but also in the resolution of the produced 3D model. The extension of the DoF is a very important task that should be incorporated in the data collection to attain a better quality image set and a better 3D model. An extension of the DoF can be achieved with many methods, in particular with the focus stacking technique. In this paper, the focus stacking technique was tested in a real-world experiment to digitise a museum artefact in 3D. The experimental conditions include the use of a full-frame camera equipped with a normal lens (50 mm), with the camera placed close to the object. The artefact had already been digitised with a structured light system, and that model served as the reference to which the 3D models were compared; the results are presented.
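
    A minimal focus-stacking sketch: for each pixel, keep the value from the frame with the highest local sharpness (Laplacian energy). It assumes the frames are already aligned; the kernel sizes are illustrative and this is not the paper's processing chain.

    import numpy as np
    import cv2

    def focus_stack(frames):
        """frames: list of aligned BGR images taken at different focus distances."""
        stack = np.stack(frames)                                    # (k, H, W, 3)
        sharpness = []
        for f in frames:
            gray = cv2.cvtColor(f, cv2.COLOR_BGR2GRAY)
            lap = cv2.Laplacian(gray, cv2.CV_64F, ksize=3)          # strong response where in focus
            sharpness.append(cv2.GaussianBlur(np.abs(lap), (9, 9), 0))
        best = np.argmax(np.stack(sharpness), axis=0)               # sharpest frame index per pixel
        return np.take_along_axis(stack, best[None, :, :, None], axis=0)[0]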

  4. 3D texture analysis in renal cell carcinoma tissue image grading.

    Science.gov (United States)

    Kim, Tae-Yun; Cho, Nam-Hoon; Jeong, Goo-Bo; Bengtsson, Ewert; Choi, Heung-Kook

    2014-01-01

    One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system.

  5. 3D Texture Analysis in Renal Cell Carcinoma Tissue Image Grading

    Directory of Open Access Journals (Sweden)

    Tae-Yun Kim

    2014-01-01

    One of the most significant processes in cancer cell and tissue image analysis is the efficient extraction of features for grading purposes. This research applied two types of three-dimensional texture analysis methods to the extraction of feature values from renal cell carcinoma tissue images, and then evaluated the validity of the methods statistically through grade classification. First, we used a confocal laser scanning microscope to obtain image slices of four grades of renal cell carcinoma, which were then reconstructed into 3D volumes. Next, we extracted quantitative values using a 3D gray level cooccurrence matrix (GLCM) and a 3D wavelet based on two types of basis functions. To evaluate their validity, we predefined 6 different statistical classifiers and applied these to the extracted feature sets. In the grade classification results, 3D Haar wavelet texture features combined with principal component analysis showed the best discrimination results. Classification using 3D wavelet texture features was significantly better than 3D GLCM, suggesting that the former has potential for use in a computer-based grading system.

  6. 3D imaging with an isocentric mobile C-arm. Comparison of image quality with spiral CT

    Energy Technology Data Exchange (ETDEWEB)

    Kotsianos, Dorothea; Wirth, Stefan; Fischer, Tanja; Euler, Ekkehard; Rock, Clemens; Linsenmaier, Ulrich; Pfeifer, Klaus Juergen; Reiser, Maximilian [Departments of Radiology and Surgery, Klinikum der Universitaet Muenchen, Innenstadt, Nussbaumstrasse 20, 80336, Munchen (Germany)

    2004-09-01

    The purpose of this study was to evaluate the image quality of the new 3D imaging system (ISO-C-3D) for osteosyntheses of tibial condylar fractures in comparison with spiral CT (CT). Sixteen human cadaveric knees were examined with a C-arm 3D imaging system and spiral computed tomography. Various screws and plates of steel and titanium were used for osteosynthesis in these specimens. Image quality and clinical value of multiplanar (MP) reformatting of both methods were analyzed. In addition, five patients with tibial condylar fractures were examined for diagnosis and intra-operative control. The image quality of the C-arm 3D imaging system in the cadaveric study was rated as significantly worse than that of spiral CT with and without prostheses. After implantation of prostheses an increased incidence of artifacts was observed, but the diagnostic accuracy was not affected. Titanium implants caused the smallest number of artifacts. The image quality of ISO-C is inferior to CT, and metal artifacts were more prominent, but the clinical value was equal. ISO-C-3D can be useful in planning operative reconstructions and can verify the reconstruction of articular surfaces and the position of implants with diagnostic image quality. (orig.)

  7. 3D imaging with an isocentric mobile C-arm comparison of image quality with spiral CT.

    Science.gov (United States)

    Kotsianos, Dorothea; Wirth, Stefan; Fischer, Tanja; Euler, Ekkehard; Rock, Clemens; Linsenmaier, Ulrich; Pfeifer, Klaus Jürgen; Reiser, Maximilian

    2004-09-01

    The purpose of this study was to evaluate the image quality of the new 3D imaging system (ISO-C-3D) for osteosyntheses of tibial condylar fractures in comparison with spiral CT (CT). Sixteen human cadaveric knees were examined with a C-arm 3D imaging system and spiral computed tomography. Various screws and plates of steel and titanium were used for osteosynthesis in these specimens. Image quality and clinical value of multiplanar (MP) reformatting of both methods were analyzed. In addition, five patients with tibial condylar fractures were examined for diagnosis and intra-operative control. The image quality of the C-arm 3D imaging system in the cadaveric study was rated as significantly worse than that of spiral CT with and without prostheses. After implantation of prostheses an increased incidence of artifacts was observed, but the diagnostic accuracy was not affected. Titanium implants caused the smallest number of artifacts. The image quality of ISO-C is inferior to CT, and metal artifacts were more prominent, but the clinical value was equal. ISO-C-3D can be useful in planning operative reconstructions and can verify the reconstruction of articular surfaces and the position of implants with diagnostic image quality.

  8. 3D Seismic Imaging over a Potential Collapse Structure

    Science.gov (United States)

    Gritto, Roland; O'Connell, Daniel; Elobaid Elnaiem, Ali; Mohamed, Fathelrahman; Sadooni, Fadhil

    2016-04-01

    The Middle-East has seen a recent boom in construction including the planning and development of complete new sub-sections of metropolitan areas. Before planning and construction can commence, however, the development areas need to be investigated to determine their suitability for the planned project. Subsurface parameters such as the type of material (soil/rock), thickness of top soil or rock layers, depth and elastic parameters of basement, for example, comprise important information needed before a decision concerning the suitability of the site for construction can be made. A similar problem arises in environmental impact studies, when subsurface parameters are needed to assess the geological heterogeneity of the subsurface. Environmental impact studies are typically required for each construction project, particularly for the scale of the aforementioned building boom in the Middle East. The current study was conducted in Qatar at the location of a future highway interchange to evaluate a suite of 3D seismic techniques in their effectiveness to interrogate the subsurface for the presence of karst-like collapse structures. The survey comprised an area of approximately 10,000 m2 and consisted of 550 source- and 192 receiver locations. The seismic source was an accelerated weight drop while the geophones consisted of 3-component 10 Hz velocity sensors. At present, we analyzed over 100,000 P-wave phase arrivals and performed high-resolution 3-D tomographic imaging of the shallow subsurface. Furthermore, dispersion analysis of recorded surface waves will be performed to obtain S-wave velocity profiles of the subsurface. Both results, in conjunction with density estimates, will be utilized to determine the elastic moduli of the subsurface rock layers.

  9. A Novel Volumetric 3D Display System with Static Screen Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The physical world around us is three-dimensional (3D), yet most existing display systems with flat screens can handle only two-dimensional (2D) flat images that...

  10. 3D Visualization System for Tracking and Identification of Objects Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Photon-X has developed a proprietary EO spatial phase technology that can passively collect 3-D images in real-time using a single camera-based system. This...

  11. Enhanced 3D fluorescence live cell imaging on nanoplasmonic substrate

    Energy Technology Data Exchange (ETDEWEB)

    Gartia, Manas Ranjan [Department of Nuclear, Plasma and Radiological Engineering, University of Illinois, Urbana, IL 61801 (United States); Hsiao, Austin; Logan Liu, G [Department of Bioengineering, University of Illinois, Urbana, IL 61801 (United States); Sivaguru, Mayandi [Institute for Genomic Biology, University of Illinois, Urbana, IL 61801 (United States); Chen Yi, E-mail: loganliu@illinois.edu [Department of Electrical and Computer Engineering, University of Illinois, Urbana, IL 61801 (United States)

    2011-09-07

    We have created a randomly distributed nanocone substrate on silicon coated with silver for surface-plasmon-enhanced fluorescence detection and 3D cell imaging. Optical characterization of the nanocone substrate showed it can support several plasmonic modes (in the 300-800 nm wavelength range) that can be coupled to a fluorophore on the surface of the substrate, which gives rise to the enhanced fluorescence. Spectral analysis suggests that a nanocone substrate can create more excitons and a shorter lifetime in the model fluorophore Rhodamine 6G (R6G) due to plasmon resonance energy transfer from the nanocone substrate to the nearby fluorophore. We observed three-dimensional fluorescence enhancement on our substrate, as shown by confocal fluorescence imaging of Chinese hamster ovary (CHO) cells grown on the substrate. The fluorescence intensity from the fluorophores bound on the cell membrane was amplified more than 100-fold as compared to that on a glass substrate. We believe that strong scattering within the nanostructured area coupled with random scattering inside the cell resulted in the observed three-dimensional enhancement in fluorescence with higher photostability on the substrate surface.

  12. Multiframe image point matching and 3-d surface reconstruction.

    Science.gov (United States)

    Tsai, R Y

    1983-02-01

    This paper presents two new methods, the Joint Moment Method (JMM) and the Window Variance Method (WVM), for image matching and 3-D object surface reconstruction using multiple perspective views. The viewing positions and orientations for these perspective views are known a priori, as is usually the case for such applications as robotics and industrial vision as well as close range photogrammetry. Like the conventional two-frame correlation method, the JMM and WVM require finding the extrema of 1-D curves, which are proved to theoretically approach a delta function exponentially as the number of frames increases for the JMM and are much sharper than the two-frame correlation function for both the JMM and the WVM, even when the image point to be matched cannot be easily distinguished from some of the other points. The theoretical findings have been supported by simulations. It is also proved that JMM and WVM are not sensitive to certain radiometric effects. If the same window size is used, the computational complexity for the proposed methods is about n - 1 times that for the two-frame method where n is the number of frames. Simulation results show that the JMM and WVM require smaller windows than the two-frame correlation method with better accuracy, and therefore may even be more computationally feasible than the latter since the computational complexity increases quadratically as a function of the window size.

  13. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Chen, G [University of Wisconsin, Madison, WI (United States); Pan, X [University Chicago, Chicago, IL (United States); Stayman, J [Johns Hopkins University, Baltimore, MD (United States); Samei, E [Duke University Medical Center, Durham, NC (United States)

    2014-06-15

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical

  14. Adaptive optofluidic lens(es) for switchable 2D and 3D imaging

    Science.gov (United States)

    Huang, Hanyang; Wei, Kang; Zhao, Yi

    2016-03-01

    The stereoscopic image is often captured using dual cameras arranged side by side together with optical path switching systems such as two separate solid lenses or biprisms/mirrors. Miniaturizing the overall size of current stereoscopic devices down to several millimeters leaves little room for further size shrinkage, and the limited light entry worsens the final image resolution and brightness. It is known that optofluidics offer good reconfigurability for imaging systems. Leveraging this technique, we report a reconfigurable optofluidic system whose optical layout can be swapped between a singlet lens of 10 mm in diameter and a pair of binocular lenses, each 3 mm in diameter, for switchable two-dimensional (2D) and three-dimensional (3D) imaging. The singlet and the binoculars share the same optical path and the same imaging sensor. The singlet acquires a 2D image with better resolution and brightness, while the binoculars capture stereoscopic image pairs for 3D vision and depth perception. The focusing power tuning capability of the singlet and the binoculars enables image acquisition at varied object planes by adjusting the hydrostatic pressure across the lens membrane. The vari-focal singlet and binoculars thus work interchangeably and complementarily. The device is therefore expected to have applications in robotic vision, stereoscopy, laparoendoscopy and miniaturized zoom lens systems.

  15. Extraction of depth information for 3D imaging using pixel aperture technique

    Science.gov (United States)

    Choi, Byoung-Soo; Bae, Myunghan; Kim, Sang-Hwan; Lee, Jimin; Oh, Chang-Woo; Chang, Seunghyuk; Park, JongHo; Lee, Sang-Jin; Shin, Jang-Kyoo

    2017-02-01

    Three-dimensional (3D) imaging is an important area with applications in face detection, gesture recognition, and 3D reconstruction. In this paper, extraction of depth information for 3D imaging using a pixel aperture technique is presented. An active pixel sensor (APS) with an in-pixel aperture has been developed for this purpose. In conventional camera systems using a complementary metal-oxide-semiconductor (CMOS) image sensor, the aperture is located behind the camera lens. In our proposed camera system, however, the aperture, implemented in a metal layer of the CMOS process, is located on the white (W) pixel, i.e., a pixel without any color filter on top. Four pixel types, red (R), green (G), blue (B), and white (W), were used for the pixel aperture technique. The RGB pixels produce a defocused image with blur, while the W pixels produce a focused image. The focused image is used as a reference image to extract the depth information for 3D imaging and is compared with the defocused image from the RGB pixels. Depth information can therefore be extracted by comparing the defocused image with the focused image using the depth from defocus (DFD) method. The pixel size of the 4-Tr APS is 2.8 μm × 2.8 μm, and the pixel structure was designed and simulated based on a 0.11 μm CMOS image sensor (CIS) process. Optical performance of the pixel aperture technique was evaluated using optical simulation with the finite-difference time-domain (FDTD) method, and electrical performance was evaluated using TCAD.
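
    As a rough illustration of the depth-from-defocus comparison described above, the sketch below (helper names assumed; not the authors' calibrated DFD pipeline) compares local high-frequency energy between the focused W-channel image and the defocused RGB-derived image; a calibrated blur-versus-depth model would then map this relative blur measure to depth.

      import numpy as np
      from scipy.ndimage import gaussian_filter, uniform_filter

      def local_sharpness(img, win=7):
          # Local high-frequency energy: mean squared residual after mild smoothing.
          residual = img - gaussian_filter(img, sigma=1.0)
          return uniform_filter(residual * residual, size=win)

      def relative_blur_map(w_focused, rgb_defocused, win=7, eps=1e-6):
          # Ratio of sharpness maps (float grayscale arrays of equal shape).
          # Larger values mean stronger defocus blur in the RGB image relative to
          # the in-focus W reference; a calibrated DFD model would convert this
          # monotone blur measure into depth.
          return local_sharpness(w_focused, win) / (local_sharpness(rgb_defocused, win) + eps)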

  16. Preliminary clinical application of contrast-enhanced MR angiography using 3D time-resolved imaging of contrast kinetics (3D-TRICKS)

    Institute of Scientific and Technical Information of China (English)

    YANG Chun-shan; LIU Shi-yuan; XIAO Xiang-sheng; FENG Yun; LI Hui-min; XIAO Shan; GONG Wan-qing

    2007-01-01

    Objective: To introduce a new and improved contrast-enhanced MR angiographic method, 3D time-resolved imaging of contrast kinetics (3D-TRICKS). Methods: TRICKS is a high temporal resolution (2-6 s) MR angiographic technique using a short TR (4 ms) and TE (1.5 ms) with partial echo sampling, in which the central part of k-space is updated more frequently than the peripheral part. Pre-contrast mask 3D images are scanned first; then, after bolus injection of Gd-DTPA, 15-20 sequential 3D image sets are acquired. The reconstructed 3D images, obtained by subtracting the mask images from the contrast-enhanced images, are conceptually similar to a catheter-based intra-arterial digital subtraction angiography (DSA) series. Thirty patients underwent contrast-enhanced MR angiography using 3D-TRICKS. Results: In total, 12 vertebral arteries were well displayed on TRICKS, of which 7 were normal, 1 demonstrated bilateral vertebral artery stenosis, 4 had unilateral vertebral artery stenosis, and 1 was accompanied by stenosis of the ipsilateral carotid artery bifurcation. Four cases of bilateral renal arteries were normal; 1 transplanted kidney artery appeared normal and 1 showed stenosis. Two cerebral studies were normal, 1 had sagittal sinus thrombosis, and 1 displayed an intracranial arteriovenous malformation. Three pulmonary artery studies were normal, 1 showed pulmonary artery thrombosis, and 1 revealed the abnormal feeding artery and draining vein of a pulmonary sequestration. One left lower limb fibrolipoma showed its feeding artery, one study displayed stenosis of a radial-ulnar artery artificial fistula, and one revealed a hemangioma of the left forearm. Conclusion: TRICKS can clearly delineate most of the body's vascular system and reveal most vascular abnormalities. It is convenient and has a high success rate, making it a first choice for displaying most vascular abnormalities.

  17. Intraoperative 3D Ultrasonography for Image-Guided Neurosurgery

    NARCIS (Netherlands)

    Letteboer, Marloes Maria Johanna

    2004-01-01

    Stereotactic neurosurgery has evolved dramatically in recent years from the original rigid frame-based systems to the current frameless image-guided systems, which allow greater flexibility while maintaining sufficient accuracy. As these systems continue to evolve, more applications are found, and i

  18. A 3D reconstruction from real-time stereoscopic images using GPU

    OpenAIRE

    Gomez-Balderas, Jose-Ernesto; Houzet, Dominique

    2013-01-01

    In this article we propose a new technique to obtain a three-dimensional (3D) reconstruction from stereoscopic images taken by a stereoscopic system in real time. To parallelize the 3D reconstruction, we propose a method that uses a graphics processing unit (GPU) and a disparity map from a block matching (BM) algorithm. The results obtained allow us to accelerate the image processing time, measured in frames per second (FPS...
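
    A minimal CPU stand-in for the block matching (BM) disparity step is sketched below using OpenCV's StereoBM; the paper's contribution is the GPU parallelization, which is not shown, and the file names and parameter values here are placeholders.

      import cv2

      # Rectified stereo pair loaded as 8-bit grayscale (placeholder file names).
      left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
      right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

      # Block-matching disparity; numDisparities must be a multiple of 16.
      bm = cv2.StereoBM_create(numDisparities=64, blockSize=15)
      disparity = bm.compute(left, right).astype("float32") / 16.0  # fixed point -> pixels

      # With focal length f (pixels) and baseline B (metres), depth is Z = f * B / disparity
      # wherever disparity > 0, which yields the 3D reconstruction.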

  19. Audiovisual biofeedback improves image quality and reduces scan time for respiratory-gated 3D MRI

    Science.gov (United States)

    Lee, D.; Greer, P. B.; Arm, J.; Keall, P.; Kim, T.

    2014-03-01

    The purpose of this study was to test the hypothesis that audiovisual (AV) biofeedback can improve image quality and reduce scan time for respiratory-gated 3D thoracic MRI. For five healthy human subjects, respiratory motion guidance during MR scans was provided using an AV biofeedback system utilizing real-time respiratory motion signals. To investigate the improvement of respiratory-gated 3D MR images between free breathing (FB) and AV biofeedback (AV), each subject underwent two imaging sessions. Respiratory-related motion artifacts and imaging time were qualitatively evaluated, in addition to the reproducibility of external (abdominal) motion. In the results, 3D MR images acquired with AV biofeedback showed more anatomic information, such as a clearer distinction of the diaphragm and lung lobes and sharper organ boundaries. The scan time was reduced from 401±215 s in FB to 334±94 s in AV (p-value 0.36). The root mean square variation of the displacement and period of the abdominal motion was reduced from 0.4±0.22 cm and 2.8±2.5 s in FB to 0.1±0.15 cm and 0.9±1.3 s in AV (p-value of displacement …). In conclusion, AV biofeedback improves image quality and reduces scan time for respiratory-gated 3D MRI. These results suggest that AV biofeedback has the potential to be a useful motion management tool in medical imaging and radiation therapy procedures.

  20. The Mathematical Foundations of 3D Compton Scatter Emission Imaging

    Directory of Open Access Journals (Sweden)

    T. T. Truong

    2007-01-01

    The mathematical principles of tomographic imaging using detected (unscattered) X- or gamma-rays are based on the two-dimensional Radon transform and many of its variants. In this paper, we show that two new generalizations, called conical Radon transforms, are related to three-dimensional imaging processes based on detected Compton-scattered radiation. The first class of conical Radon transform was introduced recently to support the imaging principles of collimated detector systems. The second class is new, is closely related to Compton camera imaging principles, and is invertible under special conditions. As they are poised to play a major role in future designs of biomedical imaging systems, we present an account of their most important properties, which may be relevant for active researchers in the field.
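
    For reference, the classical two-dimensional Radon transform that the conical transforms generalize can be written as follows (standard textbook definition, not the paper's own notation):

      % Classical 2D Radon transform: integral of f over the line x . theta = s,
      % with theta = (cos(phi), sin(phi)). The conical Radon transforms above
      % replace these lines by cone surfaces whose opening angle is fixed by the
      % Compton scattering angle.
      Rf(\varphi, s) = \int_{\mathbb{R}^2} f(\mathbf{x})\,
          \delta\bigl(s - \mathbf{x}\cdot\boldsymbol{\theta}\bigr)\, d\mathbf{x},
      \qquad \boldsymbol{\theta} = (\cos\varphi,\ \sin\varphi).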

  1. 3D and 4D atlas system of living human body structure.

    Science.gov (United States)

    Suzuki, N; Takatsu, A; Hattori, A; Ezumi, T; Oda, S; Yanai, T; Tominaga, H

    1998-01-01

    A reference system for accessing anatomical information from a complete 3D structure of the whole body "living human", including 4D cardiac dynamics, was reconstructed with 3D and 4D data sets obtained from normal volunteers. With this system, we were able to produce a human atlas in which sectional images can be accessed from any part of the human body interactively by real-time image generation.

  2. Optimized 3D Street Scene Reconstruction from Driving Recorder Images

    Directory of Open Access Journals (Sweden)

    Yongjun Zhang

    2015-07-01

    The paper presents an automatic region-detection-based method to reconstruct street scenes from driving recorder images. The driving recorder in this paper is a dashboard camera that collects images while the motor vehicle is moving. An enormous number of moving vehicles are included in the collected data, because typical recorders are mounted at the front of moving vehicles and face forward, which can make matching points on vehicles and guardrails unreliable. Because driving recorders are inexpensive, widely used, and offer extensive shooting coverage, utilizing these image data can reduce street scene reconstruction and updating costs; we therefore propose a new method, called the Mask automatic detecting method, to improve the structure-from-motion reconstruction results. Note that we define vehicle and guardrail regions as the “mask” in this paper, since features on them should be masked out to avoid poor matches. After the masked feature points are removed by our new method, the camera poses and sparse 3D points are reconstructed from the remaining matches. Our contrast experiments with typical structure from motion (SfM) reconstruction pipelines, such as Photosynth and VisualSFM, demonstrated that the Mask decreased the root-mean-square error (RMSE) of the pairwise matching results, which led to more accurate recovery of the relative camera poses. Removing features in the Mask also increased the accuracy of the point clouds by nearly 30%–40% and corrected the tendency of the typical methods to reconstruct several copies of a building when there was only one target building.

  3. 3D Image Sensor based on Parallax Motion

    Directory of Open Access Journals (Sweden)

    Barna Reskó

    2007-12-01

    For humans and visual animals, vision is the primary and most sophisticated perceptual modality for obtaining information about the surrounding world. Depth perception is a part of vision that allows the distance to an object to be determined accurately, which makes it an important visual task. Humans have two eyes with overlapping visual fields that enable stereo vision and thus space perception. Some birds, however, do not have overlapping visual fields and compensate for this lack by moving their heads, which in turn makes space perception possible using motion parallax as a visual cue. This paper presents a solution using an opto-mechanical filter that was inspired by the way birds observe their environment. The filtering is done using two different approaches: using motion blur during motion parallax, and using the optical flow algorithm. The two methods have different advantages and drawbacks, which are discussed in the paper. The proposed system can be used in robotics for 3D space perception.
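
    For a purely lateral camera (or head) translation, the motion-parallax cue reduces to the same geometry as stereo: nearby points produce large image motion and distant points small motion. The hedged sketch below (the optical-flow field and calibration values are assumed given) converts a horizontal flow map into approximate depth.

      import numpy as np

      def depth_from_parallax(flow_x, focal_px, translation_m, eps=1e-6):
          # Depth from motion parallax for a purely lateral camera translation:
          # Z = f * T / u, where u is the horizontal optical-flow magnitude (pixels),
          # f the focal length (pixels) and T the translation between frames (metres).
          return focal_px * translation_m / (np.abs(flow_x) + eps)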

  4. Contactless operating table control based on 3D image processing.

    Science.gov (United States)

    Schröder, Stephan; Loftfield, Nina; Langmann, Benjamin; Frank, Klaus; Reithmeier, Eduard

    2014-01-01

    Interaction with mobile consumer devices leads to a higher acceptance of, and affinity for, natural user interfaces and perceptual interaction possibilities. New interaction modalities become accessible and are capable of improving human-machine interaction even in complex and high-risk environments such as the operating room. Here, the many medical disciplines involved give rise to a great variety of procedures and thus of staff and equipment. One universal challenge is to meet the sterility requirements, for which common contact-based remote interfaces always pose a potential risk to the process. The proposed operating table control system overcomes this process risk and thus improves system usability significantly. The 3D sensor system, the Microsoft Kinect, captures the motion of the user, allowing touchless manipulation of an operating table. Three gestures enable the user to select, activate and manipulate all segments of the motorised system in a safe and intuitive way. The gesture dynamics are synchronised with the table movement. In a usability study, 15 participants evaluated the system, giving a System Usability Scale (Brooke) score of 79. This indicates a high potential for implementation and acceptance in interventional environments. In the near future, even processes with higher risks could be controlled with the proposed interface, as such interfaces become safer and more direct.

  5. Protein 3D Structure Image - PSCDB | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

    Data detail for the PSCDB entry. Data name: Protein 3D Structure Image; DOI 10.189... (LSDB Archive).

  6. Reliable planning and monitoring of dismantling using 3D tools, high-resolution photographic imagery and document management systems. Application of the MEDS system; Planificacion fiable y seguimiento del desmantelamiento mediante herramientas 3D, imagen fotografica de alta resolucion y sistemas de gestion documental. Aplicacion del sistema MEDS

    Energy Technology Data Exchange (ETDEWEB)

    Vela Morales, F.

    2010-07-01

    The MEDS system (Metric Environment Documentation System) is a method developed by CT3 for generating metric engineering documentation of a physical environment using the latest high-precision measurement tools, such as the laser scanner. With this equipment it is possible to obtain three-dimensional information about a physical environment through the 3D coordinates of millions of points. This information is processed by software and is a very useful tool for 3D modeling operations and simulations.

  7. A neural network based 3D/3D image registration quality evaluator for the head-and-neck patient setup in the absence of a ground truth

    Energy Technology Data Exchange (ETDEWEB)

    Wu Jian; Murphy, Martin J. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)

    2010-11-15

    Purpose: To develop a neural network based registration quality evaluator (RQE) that can identify unsuccessful 3D/3D image registrations for head-and-neck patient setup in radiotherapy. Methods: A two-layer feed-forward neural network was used as an RQE to classify 3D/3D rigid registration solutions as successful or unsuccessful based on features of the similarity surface near the point of solution. The supervised training and test data sets were generated by rigidly registering daily cone-beam CTs to the treatment planning fan-beam CTs of six patients with head-and-neck tumors. Two different similarity metrics (mutual information and mean-squared intensity difference) and two different types of image content (entire image versus bony landmarks) were used. The best solution for each registration pair was selected from 50 optimization attempts that differed only in their initial transformation parameters. The distance from each individual solution to the best solution in the normalized parametric space was compared to a user-defined error threshold to determine whether that solution was successful or not. The training data set was then used to train the RQE in a supervised manner. The performance of the RQE was evaluated using the test data set, which consisted of registration results that were not used in training. Results: The RQE constructed using mutual information had very good performance when tested on the test data sets, yielding sensitivity, specificity, positive predictive value, and negative predictive value in the ranges of 0.960-1.000, 0.993-1.000, 0.983-1.000, and 0.909-1.000, respectively. Adding an RQE to a conventional 3D/3D image registration system incurs only about a 10%-20% increase in the overall processing time. Conclusions: The authors' patient study has demonstrated very good performance of the proposed RQE when used with mutual information in identifying unsuccessful 3D/3D registrations for daily patient setup. The classifier
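
    A minimal sketch of such a two-layer feed-forward classifier is shown below, using scikit-learn's MLPClassifier as a stand-in for the authors' network; the feature vectors, labels, and hidden-layer size are placeholders rather than the similarity-surface features described in the abstract.

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      # Placeholder features/labels standing in for the similarity-surface features
      # and the success/failure labels derived from the distance-to-best criterion.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(500, 6))
      y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

      # One hidden layer plus the output layer corresponds to a two-layer
      # feed-forward network.
      rqe = MLPClassifier(hidden_layer_sizes=(10,), activation="logistic",
                          max_iter=2000, random_state=0)
      rqe.fit(X[:400], y[:400])
      print("held-out accuracy:", rqe.score(X[400:], y[400:]))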

  8. Study on portable optical 3D coordinate measuring system

    Science.gov (United States)

    Ren, Tongqun; Zhu, Jigui; Guo, Yinbiao

    2009-05-01

    A portable optical 3D coordinate measuring system based on digital close-range photogrammetry (CRP) technology and binocular stereo vision theory is investigated. Three highly stable infrared LEDs are set on a hand-held target to provide measurement features and to establish the target coordinate system. Field calibration based on ray intersection is performed for the intersecting binocular measurement system, composed of two cameras, using a reference ruler. The hand-held target, controlled via Bluetooth wireless communication, is moved freely to perform contact measurements. The position of the ceramic contact ball is pre-calibrated accurately. The coordinates of the target feature points are obtained with the binocular stereo vision model from the stereo image pairs taken by the cameras. Combining radius compensation for the contact ball with residual error correction, the object point can be resolved by a transfer of axes using the target coordinate system as an intermediary. This system is suitable for on-field large-scale measurement because of its excellent portability, high precision, wide measuring volume, great adaptability and satisfactory automation. Tests show that the measuring precision is close to ±0.1 mm/m.
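
    The core binocular step, recovering a 3D point from its two image projections given calibrated cameras, can be sketched with standard linear (DLT) triangulation as below. This is a generic building block under the stated calibration assumptions, not the authors' exact solver, which additionally applies contact-ball radius compensation and residual error correction.

      import numpy as np

      def triangulate(P1, P2, x1, x2):
          # Linear (DLT) triangulation of one feature from two views.
          # P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coordinates
          # of the same LED feature in the left and right images.
          A = np.vstack([
              x1[0] * P1[2] - P1[0],
              x1[1] * P1[2] - P1[1],
              x2[0] * P2[2] - P2[0],
              x2[1] * P2[2] - P2[1],
          ])
          _, _, vt = np.linalg.svd(A)
          X = vt[-1]
          return X[:3] / X[3]   # homogeneous -> Euclidean 3D point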

  9. 3D Modelling of Biological Systems for Biomimetics

    Institute of Scientific and Technical Information of China (English)

    Shujun Zhang; Kevin Hapeshi; Ashok K. Bhattacharya

    2004-01-01

    With the advanced development of computer-based enabling technologies, fields such as engineering, medicine, biology, chemistry, physics and food science have developed to unprecedented levels, which has led to many research and development interests in various multi-disciplinary areas. Among them, biomimetics is one of the most promising and attractive branches of study. Biomimetics is a branch of study that uses biological systems as models to develop synthetic systems. To learn from nature, one of the fundamental issues is to understand natural systems such as animals, insects, plants and human beings. The geometrical characterization and representation of natural systems is important fundamental work for biomimetics research. 3D modelling plays a key role in geometrical characterization and representation, especially in computer graphical visualization. This paper first presents the typical procedure of 3D modelling methods and then reviews previous work on 3D geometrical modelling techniques and systems developed for industrial, medical and animation applications. In particular, the paper discusses the problems associated with existing techniques and systems when they are applied to 3D modelling of biological systems. Based upon these discussions, the paper proposes some areas of research interest in 3D modelling of biological systems for biomimetics.

  10. Advanced 3D Sensing and Visualization System for Unattended Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.J.; Little, C.Q.; Nelson, C.L.

    1999-01-01

    The purpose of this project was to create a reliable 3D sensing and visualization system for unattended monitoring. The system provides benefits for several of Sandia's initiatives, including nonproliferation, treaty verification, national security and critical infrastructure surety. The robust qualities of the system make it suitable for both interior and exterior monitoring applications. The 3D sensing system combines two existing sensor technologies in a new way to continuously maintain accurate 3D models of both static and dynamic components of monitored areas (e.g., portions of buildings, roads, and secured perimeters, in addition to real-time estimates of the shape, location, and motion of humans and moving objects). A key strength of this system is the ability to monitor simultaneous activities on a continuous basis, such as several humans working independently within a controlled workspace, while also detecting unauthorized entry into the workspace. Data from the sensing system are used to identify activities or conditions that can signify potential surety (safety, security, and reliability) threats. The system could alert a security operator to potential threats or could be used to cue other detection, inspection or warning systems. An interactive, Web-based, 3D visualization capability was also developed using the Virtual Reality Modeling Language (VRML). The interface allows remote, interactive inspection of a monitored area (via the Internet or satellite links) using a 3D computer model of the area that is rendered from actual sensor data.

  11. 3D visualization of biomedical CT images based on OpenGL and VRML techniques

    Science.gov (United States)

    Yin, Meng; Luo, Qingming; Xia, Fuhua

    2002-04-01

    Current high-performance computers and advanced image processing capabilities have greatly facilitated the application of three-dimensional visualization of biomedical computed tomographic (CT) images to biomedical engineering research. To keep pace with Internet-based technology, in which 3D data are typically stored and processed on powerful servers accessible via TCP/IP, the isosurface results should be broadly applicable to medical visualization. Furthermore, this project is a future part of the PACS system our lab is working on. In this system we therefore use the 3D file format VRML 2.0, which is used through the Web interface for manipulating 3D models. In this program, triangular isosurface meshes are generated and modified with the marching cubes algorithm. OpenGL and MFC techniques are then used to render the isosurface and manipulate the voxel data. This software is well suited to the visualization of volumetric data. The drawbacks are that 3D image processing on personal computers is rather slow and that the set of tools for 3D visualization is limited. However, these limitations have not affected the applicability of this platform for the tasks needed in elementary laboratory experiments or for data preprocessing.
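
    A modern minimal equivalent of the isosurface-extraction step is sketched below with scikit-image's marching cubes implementation; the file path and iso-level are placeholders, and the original work used its own marching cubes code with OpenGL/MFC rendering and VRML 2.0 export.

      import numpy as np
      from skimage import measure

      # ct: 3D CT volume as a (z, y, x) array; the file path and the iso-level
      # (e.g. a bone-like Hounsfield value) are placeholders.
      ct = np.load("ct_volume.npy")
      verts, faces, normals, values = measure.marching_cubes(ct, level=300)

      # verts/faces define a triangular isosurface mesh that can be handed to any
      # renderer or exported for a VRML/WebGL viewer.
      print(verts.shape, faces.shape)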

  12. Multi-detector CT and 3D imaging in a multi-vendor PACS environment

    NARCIS (Netherlands)

    van Ooijen, PMA; Witkamp, R; Oudkerk, M; Lemke, HU; Inamura, K; Doi, K; Vannier, MW; Farman, AG; Reiber, JHC

    2003-01-01

    Introduction of new hardware and software techniques like Multi-Detector Computed Tomography (MDCT) and 3D imaging has put new demands on the Picture Archiving and Communications System (PACS) environment within the radiology department. The daily use of these new techniques requires a good integratio

  13. Building 3D aerial image in photoresist with reconstructed mask image acquired with optical microscope

    Science.gov (United States)

    Chou, C. S.; Tang, Y. P.; Chu, F. S.; Huang, W. C.; Liu, R. G.; Gau, T. S.

    2012-03-01

    Calibration of mask images on the wafer becomes more important as features shrink. Two major types of metrology have been commonly adopted. One is to measure the mask with a scanning electron microscope (SEM) to obtain the contours on the mask and then simulate the wafer image with an optical simulator. The other is to use an optical imaging tool, the Aerial Image Measurement System (AIMS™), to emulate the image on the wafer. However, the SEM method is indirect: it gathers only planar contours on a mask, with no consideration of optical characteristics such as 3D topography structures, so the image on the wafer is not predicted precisely. Though the AIMS™ method can directly measure the intensity at the near field of a mask, the image measured this way is not quite the same as that on the wafer, owing to reflections and refractions in the films on the wafer. Here, a new approach is proposed to emulate the image on the wafer more precisely. The behavior of plane waves at different oblique angles is well understood inside and between planar film stacks. In an optical microscope imaging system, plane waves can be extracted from the pupil plane with a coherent point source of illumination. Once the plane waves for a specific coherent illumination are analyzed, the partially coherent component of the waves can be reconstructed with a proper transfer function, which includes lens aberration, polarization, and reflection and refraction in the films. This new method transfers the near-field light of a mask into an image on the wafer without the disadvantages of indirect SEM measurement, such as neglecting the effects of mask topography and of reflections and refractions in the wafer film stacks. Furthermore, with this precise latent image, a separated resist model also becomes more achievable.

  14. Imaging articular cartilage defects with 3D fat-suppressed echo planar imaging: comparison with conventional 3D fat-suppressed gradient echo sequence and correlation with histology.

    Science.gov (United States)

    Trattnig, S; Huber, M; Breitenseher, M J; Trnka, H J; Rand, T; Kaider, A; Helbich, T; Imhof, H; Resnick, D

    1998-01-01

    Our goal was to shorten examination time in articular cartilage imaging by use of a recently developed 3D multishot echo planar imaging (EPI) sequence with fat suppression (FS). We performed comparisons with a 3D FS GE sequence using histology as the standard of reference. Twenty patients with severe gonarthrosis who were scheduled for total knee replacement underwent MRI prior to surgery. Hyaline cartilage was imaged with a 3D FS EPI and a 3D FS GE sequence. Signal intensities of articular structures were measured, and contrast-to-noise (C/N) ratios were calculated. Each knee was subdivided into 10 cartilage surfaces. From a total of 188 (3D EPI sequence) and 198 (3D GE sequence) cartilage surfaces, 73 and 79 histologic specimens, respectively, could be obtained and analyzed. MR grading of cartilage lesions on both sequences was based on a five-grade classification scheme and compared with histologic grading. The 3D FS EPI sequence provided a high C/N ratio between cartilage and subchondral bone, similar to that of the 3D FS GE sequence. The C/N ratio between cartilage and effusion was significantly lower on the 3D EPI sequence owing to the higher signal intensity of fluid. MR grading of cartilage abnormalities using the 3D FS EPI and 3D GE sequences correlated well with histologic grading. The 3D FS EPI sequence agreed within one grade in 69 of 73 (94.5%) histologically proven cartilage lesions, and the 3D FS GE sequence agreed within one grade in 76 of 79 (96.2%) lesions. The gradings were identical in 38 of 73 (52.1%) and in 46 of 79 (58.3%) cases, respectively. The difference between the sensitivities was not statistically significant. The 3D FS EPI sequence is comparable to the 3D FS GE sequence for the noninvasive evaluation of advanced cartilage abnormalities but reduces scan time by a factor of 4.

  15. 3D mapping from high resolution satellite images

    Science.gov (United States)

    Goulas, D.; Georgopoulos, A.; Sarakenos, A.; Paraschou, Ch.

    2013-08-01

    In recent years 3D information has become more easily available. Users' needs are constantly increasing, adapting to this reality, and 3D maps are in greater demand. 3D models of the terrain in CAD or other environments are already common practice; however, one is bound to the computer screen. This is why contemporary digital methods have been developed to produce portable and, hence, handier 3D maps of various forms. This paper deals with the implementation of the procedures necessary to produce holographic 3D maps and three-dimensionally printed maps. The main objective is the production of three-dimensional maps from high resolution aerial and/or satellite imagery with the use of holography as well as 3D printing methods. The island of Antiparos was chosen as the study area, as suitable data were readily available. These data were two stereo pairs of GeoEye-1 imagery and a high resolution DTM of the island. First, the theoretical bases of holography and 3D printing are described, and the two methods are analyzed and their implementation explained. In practice, an x-axis parallax holographic map and a full-parallax (x-axis and y-axis) holographic map of Antiparos are created and printed using the holographic method. Moreover, a three-dimensional printed map of the study area has been created using the 3D printing (3DP) method. The results are evaluated for their usefulness and efficiency.

  16. Automated 3D renal segmentation based on image partitioning

    Science.gov (United States)

    Yeghiazaryan, Varduhi; Voiculescu, Irina D.

    2016-03-01

    Despite several decades of research into segmentation techniques, automated medical image segmentation is barely usable in a clinical context and still comes at vast user time expense. This paper illustrates unsupervised organ segmentation through the use of a novel automated labelling approximation algorithm followed by a hypersurface front propagation method. The approximation stage relies on a pre-computed image partition forest obtained directly from CT scan data. We have implemented all procedures to operate directly on 3D volumes, rather than slice-by-slice, because our algorithms are dimensionality-independent. The resulting segmentations identify kidneys, but the approach can easily be extrapolated to other body parts. Quantitative analysis of our automated segmentation compared against hand-segmented gold standards indicates an average Dice similarity coefficient of 90%. Results were obtained over volumes of CT data containing 9 kidneys, computing both volume-based similarity measures (such as the Dice and Jaccard coefficients and the true positive volume fraction) and size-based measures (such as the relative volume difference). The analysis considered both healthy and diseased kidneys, although extreme pathological cases were excluded from the overall count. Such cases are difficult to segment both manually and automatically due to the large amplitude of the Hounsfield unit distribution in the scan and the wide spread of the tumorous tissue inside the abdomen. In the case of kidneys that have maintained their shape, the similarity range lies around the values obtained for inter-operator variability. Whilst the procedure is fully automated, our tools also provide a light level of manual editing.
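
    The volume-overlap measures reported above are computed from binary masks as in the following sketch (function and argument names are illustrative):

      import numpy as np

      def overlap_measures(auto_mask, gold_mask):
          # Dice and Jaccard coefficients for two binary segmentation masks.
          a = auto_mask.astype(bool)
          g = gold_mask.astype(bool)
          inter = np.logical_and(a, g).sum()
          dice = 2.0 * inter / (a.sum() + g.sum())
          jaccard = inter / np.logical_or(a, g).sum()
          return dice, jaccard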

  17. An annotation system for 3D fluid flow visualization

    Science.gov (United States)

    Loughlin, Maria M.; Hughes, John F.

    1995-01-01

    Annotation is a key activity of data analysis. However, current systems for data analysis focus almost exclusively on visualization. We propose a system which integrates annotations into a visualization system. Annotations are embedded in 3D data space, using the Post-it metaphor. This embedding allows contextual-based information storage and retrieval, and facilitates information sharing in collaborative environments. We provide a traditional database filter and a Magic Lens filter to create specialized views of the data. The system has been customized for fluid flow applications, with features which allow users to store parameters of visualization tools and sketch 3D volumes.

  18. AR based ornament design system for 3D printing

    Directory of Open Access Journals (Sweden)

    Hiroshi Aoki

    2015-01-01

    In recent years, 3D printers have become popular as a means of outputting geometries designed in CAD or 3D graphics systems. However, the complex user interfaces of standard 3D software can make it difficult for ordinary consumers to design their own objects. Furthermore, models designed in 3D graphics software often have geometrical problems that make them impossible to output on a 3D printer. We propose a novel AR (augmented reality) 3D modeling system with an air-spray-like interface. We also propose a new data structure (the octet voxel) for representing designed models in such a way that the model is guaranteed to be a complete solid. The target shape is based on a regular polyhedron, and the octet voxel representation is suitable for designing geometrical objects having the same symmetries as the base regular polyhedron. Finally, we conducted a user test and confirmed that users can intuitively design their own ornaments in a short time with a simple user interface.

  19. Markerless 3D Head Tracking for Motion Correction in High Resolution PET Brain Imaging

    DEFF Research Database (Denmark)

    Olesen, Oline Vinter

    This thesis concerns application-specific 3D head tracking. The purpose is to improve motion correction in positron emission tomography (PET) brain imaging through the development of markerless tracking. Currently, motion correction strategies are based on either the PET data itself or tracking devices... images. Incorrect motion correction can in the worst cases result in wrong diagnosis or treatment. The evolution of a markerless custom-made structured light 3D surface tracking system is presented. The system is targeted at state-of-the-art high resolution dedicated brain PET scanners with a resolution... of a few millimeters. State-of-the-art hardware and software solutions are integrated into an operational device. This novel system is tested against a commercial tracking system popular in PET brain imaging. Testing and demonstrations are carried out in clinical settings. A compact markerless tracking...

  20. Rainbow Particle Imaging Velocimetry for Dense 3D Fluid Velocity Imaging

    KAUST Repository

    Xiong, Jinhui

    2017-04-11

    Despite significant recent progress, dense, time-resolved imaging of complex, non-stationary 3D flow velocities remains an elusive goal. In this work we tackle this problem by extending an established 2D method, Particle Imaging Velocimetry, to three dimensions by encoding depth into color. The encoding is achieved by illuminating the flow volume with a continuum of light planes (a “rainbow”), such that each depth corresponds to a specific wavelength of light. A diffractive component in the camera optics ensures that all planes are in focus simultaneously. For reconstruction, we derive an image formation model for recovering stationary 3D particle positions. 3D velocity estimation is achieved with a variant of 3D optical flow that accounts for both physical constraints as well as the rainbow image formation model. We evaluate our method with both simulations and an experimental prototype setup.
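
    The essence of the depth-into-color encoding can be illustrated with a deliberately simplified hue-to-depth lookup, as below; this assumes a linear hue-to-depth calibration over the light-plane continuum and ignores the diffractive optics and the full image formation model that the actual reconstruction inverts.

      import numpy as np
      from matplotlib.colors import rgb_to_hsv

      def hue_to_depth(rgb_image, depth_min, depth_max, hue_min=0.0, hue_max=0.7):
          # Toy depth decoding for a rainbow-illuminated volume: map each pixel's hue
          # linearly onto the calibrated depth range of the light-plane continuum.
          # The paper's method instead inverts a full image formation model.
          hsv = rgb_to_hsv(rgb_image.astype(float) / 255.0)
          t = np.clip((hsv[..., 0] - hue_min) / (hue_max - hue_min), 0.0, 1.0)
          return depth_min + t * (depth_max - depth_min)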

  1. 3D Surface Realignment Tracking for Medical Imaging: A Phantom Study with PET Motion Correction

    DEFF Research Database (Denmark)

    Olesen, Oline Vinter; Paulsen, Rasmus Reinhold; Jensen, Rasmus Ramsbøl

    2011-01-01

    We present a complete system for motion correction in high resolution brain positron emission tomography (PET) imaging. It is based on a compact structured light scanner mounted above the patient tunnel of the Siemens High Resolution Research Tomograph PET brain scanner. The structured light system... is equipped with a near infrared diode and uses phase-shift interferometry to compute 3D representations of the forehead of the patient. These 3D point clouds are progressively aligned to a reference surface, thereby yielding the head pose changes. The estimated pose changes are used to reposition a sequence...

  3. High-Quality See-Through Surgical Guidance System Using Enhanced 3-D Autostereoscopic Augmented Reality.

    Science.gov (United States)

    Zhang, Xinran; Chen, Guowen; Liao, Hongen

    2017-08-01

    Precise minimally invasive surgery (MIS) has significant clinical advantages over traditional open surgery. Although pre-/intraoperative diagnostic images can provide the necessary guidance for therapy, hand-eye discoordination occurs when guidance information is displayed away from the surgical area. In this study, we introduce a real three-dimensional (3-D) see-through guidance system for precision surgery. To address the resolution and viewing angle limitations as well as the accuracy degradation problems of autostereoscopic 3-D display, we design a high quality and high accuracy 3-D integral videography (IV) medical image display method. Furthermore, a novel see-through microscopic device is proposed to assist surgeons by superimposing real 3-D guidance onto the surgical target, which is magnified by an optical visual magnifier module. Spatial resolutions of the 3-D IV image at different depths were increased by 50%∼70%, and viewing angles for different image sizes were increased by 9%∼19% compared with conventional IV display methods. The average accuracy of the real 3-D guidance superimposed on the surgical target was 0.93 mm ± 0.41 mm. Preclinical studies demonstrated that our system could provide real 3-D perception of anatomic structures inside the patient's body. The system showed potential clinical feasibility to provide intuitive and accurate in situ see-through guidance for microsurgery without restriction on the observer's viewing position. Our system can effectively improve the precision and reliability of surgical guidance. It will have wider applicability in surgical planning, microscopy, and other fields.

  4. Automatic masking for robust 3D-2D image registration in image-guided spine surgery

    Science.gov (United States)

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2016-03-01

    During spinal neurosurgery, patient-specific information, planning, and annotation such as vertebral labels can be mapped from preoperative 3D CT to intraoperative 2D radiographs via image-based 3D-2D registration. Such registration has been shown to provide a potentially valuable means of decision support in target localization as well as quality assurance of the surgical product. However, robust registration can be challenged by mismatch in image content between the preoperative CT and intraoperative radiographs, arising, for example, from anatomical deformation or the presence of surgical tools within the radiograph. In this work, we develop and evaluate methods for automatically mitigating the effect of content mismatch by leveraging the surgical planning data to assign greater weight to anatomical regions known to be reliable for registration and vital to the surgical task while removing problematic regions that are highly deformable or often occluded by surgical tools. We investigated two approaches to assigning variable weight (i.e., "masking") to image content and/or the similarity metric: (1) masking the preoperative 3D CT ("volumetric masking"); and (2) masking within the 2D similarity metric calculation ("projection masking"). The accuracy of registration was evaluated in terms of projection distance error (PDE) in 61 cases selected from an IRB-approved clinical study. The best performing of the masking techniques was found to reduce the rate of gross failure (PDE > 20 mm) from 11.48% to 5.57% in this challenging retrospective data set. These approaches provided robustness to content mismatch and eliminated distinct failure modes of registration. Such improvement was gained without additional workflow and has motivated incorporation of the masking methods within a system under development for prospective clinical studies.
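
    The "projection masking" idea, down-weighting unreliable pixels inside the 2D similarity computation, can be illustrated with a weighted normalized cross-correlation as below; the clinical system uses its own registration metric, so this is only a sketch with assumed array inputs.

      import numpy as np

      def masked_ncc(drr, radiograph, weight):
          # Similarity between a simulated projection (DRR) and the radiograph with a
          # per-pixel weight in [0, 1] ("projection masking") that suppresses regions
          # occluded by tools or prone to deformation.
          w = weight / (weight.sum() + 1e-12)
          d = drr - (w * drr).sum()
          r = radiograph - (w * radiograph).sum()
          num = (w * d * r).sum()
          den = np.sqrt((w * d * d).sum() * (w * r * r).sum()) + 1e-12
          return num / den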

  5. Clinical evaluation of 2D versus 3D whole-body PET image quality using a dedicated BGO PET scanner

    Energy Technology Data Exchange (ETDEWEB)

    Visvikis, D. [CHU Morvan, U650 INSERM, Laboratoire de Traitement de l' Information Medicale (LaTIM), Brest (France); Griffiths, D. [Lister Healthcare, London PET Centre, London (United Kingdom); Costa, D.C. [Middlesex Hospital, Institute of Nuclear Medicine, Royal Free and University College Medical School, London (United Kingdom); HPP Medicina Molecular, SA Porto (Portugal); Bomanji, J.; Ell, P.J. [Middlesex Hospital, Institute of Nuclear Medicine, Royal Free and University College Medical School, London (United Kingdom)

    2005-09-01

    Three-dimensional positron emission tomography (3D PET) results in higher system sensitivity, with an associated increase in the detection of scatter and random coincidences. The objective of this work was to compare, from a clinical perspective, 3D and two-dimensional (2D) acquisitions in terms of whole-body (WB) PET image quality with a dedicated BGO PET system. 2D and 3D WB emission acquisitions were carried out in 70 patients. Variable acquisition parameters in terms of time of emission acquisition per axial field of view (aFOV) and slice overlap between sequential aFOVs were used during the 3D acquisitions. 3D and 2D