WorldWideScience

Sample records for multi-camera realtime 3d

  1. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    Science.gov (United States)

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline that takes advantage of a multi-camera configuration and a multi-image matching technique which does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are then applied to images taken with the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant property, the whole 3D modeling pipeline can be performed completely automatically once the system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The objects imaged include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantities of antiques stored in museums.

  2. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2016-06-01

    For plant breeding and growth monitoring, accurate measurement of plant structure parameters is crucial. We have therefore developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R2 = 0.98) and 0.57 mm (R2 = 0.99), respectively. The accuracy of the 3D model reconstruction of leaf and stem by a 28-mm lens at the first and third camera positions was the highest, and these configurations also reconstructed the largest number of fine-scale surface details of leaf and stem. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and the accurate estimation of plant parameters. They also showed that our system is well suited to capturing high-resolution 3D images of nursery plants with high efficiency.
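
    The accuracy figures reported above (RMSE together with the coefficient of determination R2) can be reproduced with a few lines of NumPy. The sketch below is illustrative only; the sample leaf-width measurements in it are hypothetical, not data from the study.

```python
import numpy as np

def rmse_and_r2(measured, estimated):
    """Return RMSE and coefficient of determination R^2 between two series."""
    measured = np.asarray(measured, dtype=float)
    estimated = np.asarray(estimated, dtype=float)
    residuals = estimated - measured
    rmse = np.sqrt(np.mean(residuals ** 2))
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((measured - measured.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return rmse, r2

# Hypothetical leaf-width samples in millimetres (ruler vs. 3D model).
measured = [42.1, 37.5, 50.3, 45.8, 39.9]
estimated = [43.0, 36.8, 51.2, 44.9, 41.2]
print(rmse_and_r2(measured, estimated))
```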

  3. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    Science.gov (United States)

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

    For plant breeding and growth monitoring, accurate measurement of plant structure parameters is crucial. We have therefore developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between the estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R2 = 0.98) and 0.57 mm (R2 = 0.99), respectively. The accuracy of the 3D model reconstruction of leaf and stem by a 28-mm lens at the first and third camera positions was the highest, and these configurations also reconstructed the largest number of fine-scale surface details of leaf and stem. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and the accurate estimation of plant parameters. They also showed that our system is well suited to capturing high-resolution 3D images of nursery plants with high efficiency. PMID:27314348

  4. Real-time multi-camera video acquisition and processing platform for ADAS

    Science.gov (United States)

    Saponara, Sergio

    2016-04-01

    The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assistance Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for its correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.
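
    The paper implements fish-eye correction in FPGA pre-processing hardware; purely as an illustration of the same operation in software, the sketch below undistorts one frame with OpenCV's fisheye camera model. The intrinsic matrix K, the distortion coefficients D and the synthetic input frame are placeholder assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Hypothetical intrinsics K and fisheye distortion coefficients D obtained
# from a prior calibration (e.g. cv2.fisheye.calibrate on a checkerboard set).
K = np.array([[420.0, 0.0, 640.0],
              [0.0, 420.0, 360.0],
              [0.0, 0.0, 1.0]])
D = np.array([[-0.05], [0.01], [0.0], [0.0]])   # k1..k4 of the fisheye model

# Placeholder frame; in practice this would be a camera capture.
frame = np.full((720, 1280, 3), 128, np.uint8)
h, w = frame.shape[:2]

# Precompute the undistortion maps once, then remap every frame in the loop:
# this is what makes per-frame correction cheap enough for real time.
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
    K, D, (w, h), np.eye(3), balance=0.0)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
print(undistorted.shape)
```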

  5. A multi-camera system for real-time pose estimation

    Science.gov (United States)

    Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin

    2007-04-01

    This paper presents a multi-camera system that performs face detection and pose estimation in real-time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates about the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
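
    The sketch below is a simplified toy version of the kind of geometric yaw estimate described above (a spherical head rotating about the vertical axis, observed under weak perspective); it is not the authors' exact equations, and all landmark coordinates and the head-radius value are hypothetical.

```python
import numpy as np

def estimate_yaw(eye_left, eye_right, mouth, head_center_x, head_radius_px):
    """Rough yaw estimate (radians) under a spherical head model.

    Assumes weak perspective and that the eye midpoint and the mouth lie on
    the frontal symmetry plane of the head sphere, so their horizontal offset
    from the projected head centre grows roughly as r * sin(yaw).
    """
    eye_mid_x = 0.5 * (eye_left[0] + eye_right[0])
    face_axis_x = 0.5 * (eye_mid_x + mouth[0])     # symmetry-axis abscissa
    s = np.clip((face_axis_x - head_center_x) / head_radius_px, -1.0, 1.0)
    return np.arcsin(s)

# Hypothetical pixel coordinates from the eye and mouth feature detectors.
yaw = estimate_yaw(eye_left=(210, 180), eye_right=(270, 182),
                   mouth=(242, 260), head_center_x=250, head_radius_px=90)
print(np.degrees(yaw))
```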

  6. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels.

    Science.gov (United States)

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-04-14

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from vibrations in multiple degrees of freedom brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  7. Multi-Camera and Structured-Light Vision System (MSVS) for Dynamic High-Accuracy 3D Measurements of Railway Tunnels

    Directory of Open Access Journals (Sweden)

    Dong Zhan

    2015-04-01

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from vibrations in multiple degrees of freedom brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  8. Real-time vehicle matching for multi-camera tunnel surveillance

    Science.gov (United States)

    Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried

    2011-03-01

    Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras which observe dozens of vehicles each, computational efficiency is essential for real-time performance. In this paper, we propose a low-complexity, yet highly accurate method for vehicle matching using vehicle signatures composed of Radon-transform-like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm in the camera software itself and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
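
    As a rough illustration of such projection-profile signatures, the sketch below builds a signature from row and column sums of a grayscale vehicle patch and compares two signatures by normalised correlation. The exact profile definition and matching rule used in the paper may differ, and the patches here are synthetic.

```python
import numpy as np

def projection_signature(patch, n_bins=64):
    """Row- and column-sum projection profiles of a grayscale patch,
    resampled to a fixed length and normalised: a lightweight signature
    in the spirit of the Radon-like profiles described above."""
    patch = np.asarray(patch, dtype=float)
    profiles = []
    for axis in (0, 1):                       # column sums, then row sums
        p = patch.sum(axis=axis)
        # Resample to a fixed number of bins so signatures are comparable.
        x_old = np.linspace(0.0, 1.0, p.size)
        x_new = np.linspace(0.0, 1.0, n_bins)
        p = np.interp(x_new, x_old, p)
        p = (p - p.mean()) / (p.std() + 1e-9)
        profiles.append(p)
    return np.concatenate(profiles)

def match_score(sig_a, sig_b):
    """Normalised correlation between two signatures (1.0 = identical)."""
    return float(np.dot(sig_a, sig_b) / sig_a.size)

# Hypothetical vehicle patches from two cameras.
rng = np.random.default_rng(0)
patch_cam1 = rng.random((60, 120))
patch_cam2 = patch_cam1[2:, 3:]               # roughly the same vehicle
print(match_score(projection_signature(patch_cam1),
                  projection_signature(patch_cam2)))
```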

  9. Real-Time 3D Visualization

    Science.gov (United States)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  10. 3D for the people: multi-camera motion capture in the field with consumer-grade cameras and open source software

    Directory of Open Access Journals (Sweden)

    Brandon E. Jackson

    2016-09-01

    Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts.
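
    The core computation behind calibrated multi-camera 3D tracking is the triangulation of each tracked point from its 2D observations. The sketch below shows a standard linear (DLT) triangulation, assuming projection matrices are already available from calibration; it is generic textbook code, not the authors' software, and the two-camera setup in the example is made up.

```python
import numpy as np

def triangulate_point(proj_mats, pixels):
    """Linear (DLT) triangulation of one 3D point.

    proj_mats : list of 3x4 camera projection matrices from calibration.
    pixels    : list of (u, v) observations of the same point, one per camera.
    """
    A = []
    for P, (u, v) in zip(proj_mats, pixels):
        A.append(u * P[2] - P[0])
        A.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(A))
    X = vt[-1]
    return X[:3] / X[3]                      # dehomogenise

# Two hypothetical cameras: identity pose and a 1-unit baseline along x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

X_true = np.array([0.3, -0.2, 5.0, 1.0])
u1 = P1 @ X_true; u1 = u1[:2] / u1[2]
u2 = P2 @ X_true; u2 = u2[:2] / u2[2]
print(triangulate_point([P1, P2], [u1, u2]))   # ~ [0.3, -0.2, 5.0]
```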

  11. Novel methods for real-time 3D facial recognition

    OpenAIRE

    Rodrigues, Marcos; Robinson, Alan

    2010-01-01

    In this paper we discuss our approach to real-time 3D face recognition. We argue the need for real-time operation in a realistic scenario and highlight the required pre- and post-processing operations for effective 3D facial recognition. We focus attention on some operations including face and eye detection, and fast post-processing operations such as hole filling, mesh smoothing and noise removal. We consider strategies for hole filling such as bilinear and polynomial interpolation and Lapla...
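
    As an illustration of one of the post-processing steps mentioned (mesh smoothing), the sketch below applies a basic umbrella-operator Laplacian smoothing pass to a triangle mesh. It is a generic implementation, not the authors' method, and the tiny mesh at the end is a made-up example.

```python
import numpy as np

def laplacian_smooth(vertices, faces, lam=0.5, iterations=5):
    """Simple umbrella-operator Laplacian smoothing of a triangle mesh.

    Each vertex is moved a fraction `lam` of the way towards the average of
    its one-ring neighbours; repeated for a few iterations.
    """
    vertices = np.asarray(vertices, dtype=float).copy()
    n = len(vertices)
    neighbours = [set() for _ in range(n)]
    for a, b, c in faces:
        neighbours[a].update((b, c))
        neighbours[b].update((a, c))
        neighbours[c].update((a, b))
    for _ in range(iterations):
        new_v = vertices.copy()
        for i, nbrs in enumerate(neighbours):
            if nbrs:
                centroid = vertices[list(nbrs)].mean(axis=0)
                new_v[i] = vertices[i] + lam * (centroid - vertices[i])
        vertices = new_v
    return vertices

# Tiny hypothetical mesh: a noisy square split into two triangles.
verts = np.array([[0, 0, 0.1], [1, 0, -0.05], [1, 1, 0.08], [0, 1, 0.0]])
faces = [(0, 1, 2), (0, 2, 3)]
print(laplacian_smooth(verts, faces))
```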

  12. Real-Time 3D Profile Measurement Using Structured Light

    International Nuclear Information System (INIS)

    Xu, L; Zhang, Z J; Ma, H; Yu, Y J

    2006-01-01

    The paper builds a real-time system for 3D profile measurement using structured-light imaging. It allows a hand-held object to rotate freely in a space-time coded light field projected by the projector. The surface of the measured object under the projected coded light is imaged, and the system shows surface reconstruction results online. This feedback helps the user adjust the object's pose in the light field according to missing or erroneous data, so as to achieve complete data coverage for reconstruction. The method acquires denser point clouds and achieves higher reconstruction accuracy and efficiency. To meet the real-time requirements, the paper presents a non-restricted light-plane model suited to stripe structured-light systems, designs a three-frame space-time coded stripe pattern, and uses an advanced ICP algorithm to align the 3D data from multiple views

  13. Towards real-time 3D ultrasound planning and personalized 3D printing for breast HDR brachytherapy treatment

    International Nuclear Information System (INIS)

    Poulin, Eric; Gardi, Lori; Fenster, Aaron; Pouliot, Jean; Beaulieu, Luc

    2015-01-01

    Two different end-to-end procedures were tested for real-time planning in breast HDR brachytherapy treatment. Both methods use a 3D ultrasound (3DUS) system and a freehand catheter optimization algorithm, and both were found to be fast and efficient. We demonstrated a proof-of-concept approach for personalized real-time guidance and planning of breast HDR brachytherapy treatments

  14. Real-Time Camera Guidance for 3D Scene Reconstruction

    Directory of Open Access Journals (Sweden)

    F. Schindler

    2012-07-01

    We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and the 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after a few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots, as well as online flight planning of unmanned aerial vehicles.

  15. Integration of real-time 3D capture, reconstruction, and light-field display

    Science.gov (United States)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention to synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of our proposed integrated 3D visualization system.

  16. Real-time quasi-3D tomographic reconstruction

    Science.gov (United States)

    Buurlage, Jan-Willem; Kohr, Holger; Palenstijn, Willem Jan; Joost Batenburg, K.

    2018-06-01

    Developments in acquisition technology and a growing need for time-resolved experiments pose great computational challenges in tomography. In addition, access to reconstructions in real time is a highly demanded feature but has so far been out of reach. We show that by exploiting the mathematical properties of filtered backprojection-type methods, access to real-time reconstructions of arbitrarily oriented slices becomes feasible. Furthermore, we present software for visualization and on-demand reconstruction of slices. A user can interactively shift and rotate slices in a GUI, while the software updates the slice in real time. For certain use cases, the possibility to study arbitrarily oriented slices in real time directly from the measured data provides sufficient visual and quantitative insight. Two such applications are discussed in this article.
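
    To illustrate the filtered-backprojection-type computation that such on-demand slice reconstruction relies on, the sketch below reconstructs a single 2-D slice from a parallel-beam sinogram with a ramp filter and nearest-neighbour backprojection. It is a minimal textbook version, not the authors' software, and the disc phantom is synthetic.

```python
import numpy as np

def fbp_slice(sinogram, angles_deg):
    """Parallel-beam filtered backprojection of one 2-D slice.

    sinogram   : (n_angles, n_det) array of line integrals.
    angles_deg : projection angles in degrees.
    Returns an (n_det, n_det) reconstruction.
    """
    n_angles, n_det = sinogram.shape
    # Ramp filter applied per projection in the Fourier domain.
    freqs = np.fft.fftfreq(n_det)
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) *
                                   np.abs(freqs), axis=1))
    # Backproject onto a square pixel grid centred on the rotation axis.
    coords = np.arange(n_det) - n_det / 2.0
    x, y = np.meshgrid(coords, coords)
    recon = np.zeros((n_det, n_det))
    for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
        t = x * np.cos(theta) + y * np.sin(theta) + n_det / 2.0
        idx = np.clip(t.astype(int), 0, n_det - 1)
        recon += proj[idx]
    return recon * np.pi / (2 * n_angles)

# Hypothetical sinogram of a centred disc phantom.
angles = np.linspace(0.0, 180.0, 90, endpoint=False)
n_det = 128
s = np.arange(n_det) - n_det / 2.0
radius = 30.0
chord = 2.0 * np.sqrt(np.clip(radius**2 - s**2, 0.0, None))  # disc line integrals
sino = np.tile(chord, (len(angles), 1))
print(fbp_slice(sino, angles).shape)
```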

  17. PRIMAS: a real-time 3D motion-analysis system

    Science.gov (United States)

    Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans

    1994-03-01

    The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.

  18. Real-time 3D-surface-guided head refixation useful for fractionated stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Li Shidong; Liu Dezhi; Yin Gongjie; Zhuang Ping; Geng, Jason

    2006-01-01

    Accurate and precise head refixation in fractionated stereotactic radiotherapy has been achieved through alignment of real-time 3D-surface images with a reference surface image. The reference surface image is either a 3D optical surface image taken at simulation with the desired treatment position, or a CT/MRI-surface rendering in the treatment plan with corrections for patient motion during CT/MRI scans and partial volume effects. The real-time 3D surface images are rapidly captured by using a 3D video camera mounted on the ceiling of the treatment vault. Any facial expression such as mouth opening that affects surface shape and location can be avoided using a new facial monitoring technique. The image artifacts on the real-time surface can generally be removed by setting a threshold of jumps at the neighboring points while preserving detailed features of the surface of interest. Such a real-time surface image, registered in the treatment machine coordinate system, provides a reliable representation of the patient head position during the treatment. A fast automatic alignment between the real-time surface and the reference surface using a modified iterative-closest-point method leads to an efficient and robust surface-guided target refixation. Experimental and clinical results demonstrate the excellent efficacy of <2 min set-up time, the desired accuracy and precision of <1 mm in isocenter shifts, and <1 deg. in rotation
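
    The alignment step described above is a variant of the iterative-closest-point (ICP) algorithm. The sketch below shows a basic point-to-point ICP with an SVD (Kabsch) rigid-transform solver, assuming SciPy is available for nearest-neighbour queries; it illustrates the principle only and omits the paper's modifications, and the point clouds in the example are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

def icp(source, reference, iterations=20):
    """Basic point-to-point ICP aligning `source` onto `reference`."""
    src = np.asarray(source, dtype=float).copy()
    tree = cKDTree(reference)
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iterations):
        _, idx = tree.query(src)             # closest reference point per source point
        R, t = best_rigid_transform(src, reference[idx])
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Hypothetical surfaces: a random point cloud and a slightly rotated, shifted copy.
rng = np.random.default_rng(1)
ref = rng.random((500, 3))
angle = np.deg2rad(3.0)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle),  np.cos(angle), 0],
               [0, 0, 1]])
src = ref @ Rz.T + np.array([0.01, -0.02, 0.005])
R_est, t_est = icp(src, ref)
print(np.round(R_est, 3), np.round(t_est, 3))
```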

  19. A Spatial Reference Grid for Real-Time Autonomous Underwater Modeling using 3-D Sonar

    Energy Technology Data Exchange (ETDEWEB)

    Auran, P.G.

    1996-12-31

    The offshore industry has recognized the need for intelligent underwater robotic vehicles. This doctoral thesis deals with autonomous underwater vehicles (AUVs) and concentrates on a data representation for real-time image formation and analysis. Its main objective is to develop a 3-D image representation suitable for autonomous perception objectives underwater, assuming active sonar as the main sensor for perception. The main contributions are: (1) A dynamical image representation for 3-D range data, (2) A basic electronic circuit and software system for 3-D sonar sampling and amplitude thresholding, (3) A model for target reliability, (4) An efficient connected components algorithm for 3-D segmentation, (5) A method for extracting general 3-D geometrical representations from segmented echo clusters, (6) Experimental results of planar and curved target modeling. 142 refs., 120 figs., 10 tabs.
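
    The connected-components segmentation of thresholded 3-D echo data (contribution 4 above) can be illustrated with SciPy's n-dimensional labelling. The sketch below is a generic example on a synthetic amplitude volume, not the thesis' algorithm; the threshold and cluster-size filter are arbitrary assumptions.

```python
import numpy as np
from scipy import ndimage

# Hypothetical 3-D sonar amplitude volume (range x bearing x elevation).
rng = np.random.default_rng(2)
volume = rng.random((64, 64, 32))
volume[20:30, 15:25, 10:18] += 2.0          # a strong echo cluster

# Amplitude thresholding followed by 3-D connected-component labelling,
# analogous to the segmentation step described above.
binary = volume > 1.5
structure = np.ones((3, 3, 3), dtype=bool)   # 26-connectivity
labels, n_clusters = ndimage.label(binary, structure=structure)

# Voxel count per cluster, to keep only clusters large enough to be reliable.
sizes = ndimage.sum(binary, labels, index=np.arange(1, n_clusters + 1))
print(n_clusters, sizes)
```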

  20. Holovideo: Real-time 3D range video encoding and decoding on GPU

    Science.gov (United States)

    Karpinsky, Nikolaus; Zhang, Song

    2012-02-01

    We present a 3D video-encoding technique called Holovideo that is capable of encoding high-resolution 3D videos into standard 2D videos, and then decoding the 2D videos back into 3D rapidly without significant loss of quality. Due to the nature of the algorithm, 2D video compression such as JPEG encoding with QuickTime Run Length Encoding (QTRLE) can be applied with little quality loss, resulting in an effective way to store 3D video at very small file sizes. We found that at a compression ratio of 134:1 (Holovideo to OBJ file format), the loss in 3D geometry quality is negligible. Several sets of 3D videos were captured using a structured light scanner, compressed using the Holovideo codec, and then uncompressed and displayed to demonstrate the effectiveness of the codec. With the use of OpenGL Shaders (GLSL), the 3D video codec can encode and decode in real time. We demonstrated that for a video size of 512×512, the decoding speed is 28 frames per second (FPS) on a laptop computer using an embedded NVIDIA GeForce 9400M graphics processing unit (GPU). Encoding can be done with the same setup at 18 FPS, making this technology suitable for applications such as interactive 3D video games and 3D video conferencing.

  1. Strain measurement of abdominal aortic aneurysm with real-time 3D ultrasound speckle tracking.

    Science.gov (United States)

    Bihari, P; Shelke, A; Nwe, T H; Mularczyk, M; Nelson, K; Schmandra, T; Knez, P; Schmitz-Rixen, T

    2013-04-01

    Abdominal aortic aneurysm rupture is caused by mechanical vascular tissue failure. Although mechanical properties within the aneurysm vary, currently available ultrasound methods assess only one cross-sectional segment of the aorta. This study aims to establish real-time 3-dimensional (3D) speckle tracking ultrasound to explore local displacement and strain parameters of the whole abdominal aortic aneurysm. Validation was performed on a silicone aneurysm model, perfused in a pulsatile artificial circulatory system. Wall motion of the silicone model was measured simultaneously with a commercial real-time 3D speckle tracking ultrasound system and either with laser-scan micrometry or with video photogrammetry. After validation, 3D ultrasound data were collected from abdominal aortic aneurysms of five patients and displacement and strain parameters were analysed. Displacement parameters measured in vitro by 3D ultrasound and laser scan micrometer or video analysis were significantly correlated at pulse pressures between 40 and 80 mmHg. Strong local differences in displacement and strain were identified within the aortic aneurysms of patients. Local wall strain of the whole abdominal aortic aneurysm can be analysed in vivo with real-time 3D ultrasound speckle tracking imaging, offering the prospect of individual non-invasive rupture risk analysis of abdominal aortic aneurysms. Copyright © 2013 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  2. On the Feasibility of Real-Time 3D Hand Tracking using Edge GPGPU Acceleration

    DEFF Research Database (Denmark)

    Qammaz, A.; Kosta, S.; Kyriazis, N.

    2018-01-01

    This paper presents the case study of a non-intrusive porting of a monolithic C++ library for real-time 3D hand tracking, to the domain of edge-based computation. Towards a proof of concept, the case study considers a pair of workstations, a computationally powerful and a computationally weak one...

  3. A Comparison of Iterative 2D-3D Pose Estimation Methods for Real-Time Applications

    DEFF Research Database (Denmark)

    Grest, Daniel; Krüger, Volker; Petersen, Thomas

    2009-01-01

    This work compares iterative 2D-3D pose estimation methods for use in real-time applications. The compared methods are publicly available as C++ code. One method is part of the openCV library, namely POSIT. Because POSIT is not applicable to planar 3D point configurations, we include the planar P...
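
    POSIT itself ships only with the legacy OpenCV C API; as a hedged illustration of the same class of iterative 2D-3D pose estimation, the sketch below uses OpenCV's cv2.solvePnP with the iterative (Levenberg-Marquardt) solver on a synthetic correspondence set. It is not the code compared in the paper, and the object points, intrinsics and ground-truth pose are made up.

```python
import numpy as np
import cv2

# Known 3-D model points (corners of a unit cube), in object coordinates.
object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                          [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]],
                         dtype=np.float32)
camera_matrix = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]],
                         dtype=np.float64)
dist_coeffs = np.zeros(5)                    # assume no lens distortion

# Synthesise 2-D observations from a known ground-truth pose.
true_rvec = np.array([0.1, -0.2, 0.05])
true_tvec = np.array([0.0, 0.0, 5.0])
image_points, _ = cv2.projectPoints(object_points, true_rvec, true_tvec,
                                    camera_matrix, dist_coeffs)

# Iterative (Levenberg-Marquardt) 2D-3D pose estimation.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix,
                              dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)                   # rotation matrix from Rodrigues vector
print(ok, np.round(R, 3), tvec.ravel())      # should recover the true pose
```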

  4. Real-time 3-D space numerical shake prediction for earthquake early warning

    Science.gov (United States)

    Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang

    2017-12-01

    In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, the wave is assumed to propagate on the 2-D surface of the earth in these methods. In fact, since the seismic wave propagates in the 3-D sphere of the earth, 2-D space modeling of the wave direction results in inaccurate wave estimation. In this paper, we propose a 3-D space numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory, and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D space model and the 3-D space model are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model can estimate real-time ground motion precisely, and that overprediction is alleviated when the 3-D space model is used.

  5. An experiment of a 3D real-time robust visual odometry for intelligent vehicles

    OpenAIRE

    Rodriguez Florez , Sergio Alberto; Fremont , Vincent; Bonnifait , Philippe

    2009-01-01

    Vision systems are nowadays very promising for many on-board vehicle perception functionalities, like obstacle detection/recognition and ego-localization. In this paper, we present a 3D visual odometric method that uses a stereo-vision system to estimate the 3D ego-motion of a vehicle in outdoor road conditions. In order to run in real-time, the studied technique is sparse, meaning that it makes use of feature points that are tracked over several frames. A robust sc...

  6. Automatic multi-camera calibration for deployable positioning systems

    Science.gov (United States)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multicamera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
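
    The calibration step described above (relative pose from the essential matrix, estimated with the 5-point method inside RANSAC) can be sketched with OpenCV as below. The scene, intrinsics and relative pose are synthetic stand-ins; in a real deployment the pixel correspondences would come from feature matching between the cameras.

```python
import numpy as np
import cv2

rng = np.random.default_rng(3)
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

# Synthetic scene: random 3-D points viewed by two cameras with a known
# relative pose (rotation about the vertical axis plus a 1-unit baseline).
X = rng.uniform([-2, -2, 4], [2, 2, 8], size=(200, 3))
rvec_true = np.array([0.0, 0.15, 0.0])
tvec_true = np.array([1.0, 0.0, 0.0])
pts1, _ = cv2.projectPoints(X, np.zeros(3), np.zeros(3), K, np.zeros(5))
pts2, _ = cv2.projectPoints(X, rvec_true, tvec_true, K, np.zeros(5))

# 5-point algorithm within RANSAC, then decompose E into the relative pose.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                  prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print(np.round(R, 3), t.ravel())
```

    Note that the translation recovered from an essential matrix is only defined up to scale, so at least one known distance in the scene (or a stereo baseline) is needed to fix the metric scale of the positioning system.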

  7. Real-time 3D human capture system for mixed-reality art and entertainment.

    Science.gov (United States)

    Nguyen, Ta Huynh Duy; Qui, Tran Cong Thien; Xu, Ke; Cheok, Adrian David; Teo, Sze Lee; Zhou, ZhiYing; Mallawaarachchi, Asitha; Lee, Shang Ping; Liu, Wei; Teo, Hui Siang; Thang, Le Nam; Li, Yu; Kato, Hirokazu

    2005-01-01

    A real-time system for capturing humans in 3D and placing them into a mixed reality environment is presented in this paper. The subject is captured by nine cameras surrounding her. Looking through a head-mounted display with a camera in front pointing at a marker, the user can see the 3D image of this subject overlaid onto a mixed reality scene. The 3D images of the subject viewed from this viewpoint are constructed using a robust and fast shape-from-silhouette algorithm. The paper also presents several techniques to improve quality and speed up the whole system. The frame rate of our system is around 25 fps using only standard Intel processor-based personal computers. Besides a remote live 3D conferencing and collaboration system, we also describe an application of the system in art and entertainment, named Magic Land, which is a mixed reality environment where captured avatars of humans and 3D computer-generated virtual animations can form an interactive story and play with each other. This system demonstrates many technologies in human computer interaction: mixed reality, tangible interaction, and 3D communication. The results of the user study not only emphasize the benefits, but also address some issues of these technologies.
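
    The shape-from-silhouette step can be illustrated by naive visual-hull voxel carving: a voxel is kept only if it projects inside every camera's silhouette. The sketch below is a generic, unoptimised version (the paper's algorithm is a fast, robust variant); the silhouette mask, projection matrix and voxel grid in the example are synthetic assumptions.

```python
import numpy as np

def carve_visual_hull(silhouettes, proj_mats, grid_min, grid_max, resolution=64):
    """Naive shape-from-silhouette (visual hull) carving.

    silhouettes : list of boolean HxW masks, one per camera.
    proj_mats   : list of 3x4 projection matrices for the same cameras.
    Returns a boolean voxel grid: True where the voxel projects inside
    every silhouette.
    """
    axes = [np.linspace(grid_min[i], grid_max[i], resolution) for i in range(3)]
    xs, ys, zs = np.meshgrid(*axes, indexing="ij")
    voxels = np.stack([xs, ys, zs, np.ones_like(xs)], axis=-1).reshape(-1, 4)
    keep = np.ones(len(voxels), dtype=bool)
    for mask, P in zip(silhouettes, proj_mats):
        h, w = mask.shape
        proj = voxels @ P.T                      # homogeneous image coordinates
        u = proj[:, 0] / proj[:, 2]
        v = proj[:, 1] / proj[:, 2]
        inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        ui = np.clip(u.astype(int), 0, w - 1)
        vi = np.clip(v.astype(int), 0, h - 1)
        keep &= inside & mask[vi, ui]
    return keep.reshape(resolution, resolution, resolution)

# Minimal synthetic check: one camera looking down +z, a square silhouette.
K = np.array([[100.0, 0, 64], [0, 100.0, 64], [0, 0, 1]])
P = K @ np.hstack([np.eye(3), np.array([[0.0], [0.0], [5.0]])])
mask = np.zeros((128, 128), dtype=bool)
mask[32:96, 32:96] = True
hull = carve_visual_hull([mask], [P], grid_min=(-2, -2, -2), grid_max=(2, 2, 2))
print(hull.sum())
```

    In the system described above, the nine surrounding cameras would each supply a silhouette, and the paper's optimised algorithm performs the equivalent carving fast enough for the reported 25 fps operation.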

  8. 3D-Pathology: a real-time system for quantitative diagnostic pathology and visualisation in 3D

    Science.gov (United States)

    Gottrup, Christian; Beckett, Mark G.; Hager, Henrik; Locht, Peter

    2005-02-01

    This paper presents the results of the 3D-Pathology project conducted under the European EC Framework 5. The aim of the project was, through the application of 3D image reconstruction and visualization techniques, to improve the diagnostic and prognostic capabilities of medical personnel when analyzing pathological specimens using transmitted light microscopy. A fully automated, computer-controlled microscope system has been developed to capture 3D images of specimen content. 3D image reconstruction algorithms have been implemented and applied to the acquired volume data in order to facilitate the subsequent 3D visualization of the specimen. Three potential application fields, immunohistology, chromogenic in situ hybridization (CISH) and cytology, have been tested using the prototype system. For both immunohistology and CISH, use of the system furnished significant additional information to the pathologist.

  9. An active robot vision system for real-time 3-D structure recovery

    Energy Technology Data Exchange (ETDEWEB)

    Juvin, D. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. d'Electronique et d'Instrumentation Nucleaire; Boukir, S.; Chaumette, F.; Bouthemy, P. [Rennes-1 Univ., 35 (France)

    1993-10-01

    This paper presents an active approach to the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is considered by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders. Therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.

  10. An active robot vision system for real-time 3-D structure recovery

    International Nuclear Information System (INIS)

    Juvin, D.

    1993-01-01

    This paper presents an active approach to the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is considered by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders. Therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up

  11. 3D real-time monitoring system for LHD plasma heating experiment

    International Nuclear Information System (INIS)

    Emoto, M.; Narlo, J.; Kaneko, O.; Komori, A.; Iima, M.; Yamaguchi, S.; Sudo, S.

    2001-01-01

    The JAVA-based real-time monitoring system has been in use at the National Institute for Fusion Science, Japan, since the end of March 1998 to maintain stable operations. This system utilizes JAVA technology to realize its platform-independent nature. The main programs are written as JAVA applets and provide human-friendly interfaces. To make the system easier to grasp at a glance, a 3D feature has been added. Since most of the system is written in the JAVA language, we adopted JAVA3D technology, which was easy to incorporate into the currently running systems. With this 3D feature, the operator can more easily find the malfunctioning parts of complex instruments, such as LHD vacuum vessels. This feature is also helpful for recognizing physical phenomena. In this paper, we present an example in which the temperature increases of the vacuum vessel after NBI are visualized

  12. Handheld real-time volumetric 3-D gamma-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Haefner, Andrew, E-mail: ahaefner@lbl.gov [Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Barnowski, Ross [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720 (United States); Luke, Paul; Amman, Mark [Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Vetter, Kai [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720 (United States); Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States)

    2017-06-11

    This paper presents the concept of real-time fusion of gamma-ray imaging and visual scene data for a hand-held mobile Compton imaging system in 3-D. The ability to obtain and integrate both gamma-ray and scene data from a mobile platform enables improved capabilities in the localization and mapping of radioactive materials. This not only enhances the ability to localize these materials, but also provides important contextual information about the scene which, once acquired, can be reviewed and further analyzed subsequently. To demonstrate these concepts, the high-efficiency multimode imager (HEMI) is used in a hand-portable implementation in combination with a Microsoft Kinect sensor. This sensor, in conjunction with open-source software, provides the ability to create a 3-D model of the scene and to track the position and orientation of HEMI in real time. By combining the gamma-ray data and visual data, accurate 3-D maps of gamma-ray sources are produced in real time. This approach is extended to map the location of radioactive materials within objects with unknown geometry.

  13. Real-Time 3D Reconstruction from Images Taken from an UAV

    Science.gov (United States)

    Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.

    2015-08-01

    We designed a method for creating 3D models of objects and areas from two aerial images acquired from an UAV. The models are generated automatically and in real time, and consist of dense, true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing the accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. For its characteristics, the designed method is suitable for video surveillance, remote sensing and monitoring, especially in applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.

  14. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    Directory of Open Access Journals (Sweden)

    Richard Chiou

    2010-06-01

    This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-dimensional scheme for viewing the robotic laboratory has been introduced in addition to remote control of the robots. The uniqueness of the project lies in making this process Internet-based, with the robot operated remotely and visualized in 3D. This 3D system approach provides the students with a more realistic feel of the 3D robotic laboratory even though they are working remotely. As a result, the 3D visualization technology has been tested as part of a laboratory in the MET 205 Robotics and Mechatronics class and has received positive feedback from most of the students. This type of research has introduced a new level of realism and visual communication to online laboratory learning in a remote classroom.

  15. Real-time Stereoscopic 3D for E-Robotics Learning

    Directory of Open Access Journals (Sweden)

    Richard Y. Chiou

    2011-02-01

    Following the design and testing of a successful 3-dimensional surveillance system, this 3D scheme has been implemented into online robotics learning at Drexel University. A real-time application, utilizing robot controllers, programmable logic controllers and sensors, has been developed in the “MET 205 Robotics and Mechatronics” class to provide the students with a better robotic education. The integration of the 3D system allows the students to precisely program the robot and execute functions remotely. Upon the students’ recommendation, polarization has been chosen to be the main platform behind the 3D robotic system. Stereoscopic calculations are carried out for calibration purposes to display the images with the highest possible comfort level and 3D effect. The calculations are further validated by comparing the results with students’ evaluations. Because the system is Internet-based, multiple clients can perform automation development online. In the future, students at different universities will be able to cross-control robotic components of different types around the world. With the development of this 3D E-Robotics interface, automation resources and robotic learning can be shared and enriched regardless of location.

  16. 3D Hand Gesture Analysis through a Real-Time Gesture Search Engine

    Directory of Open Access Journals (Sweden)

    Shahrouz Yousefi

    2015-06-01

    3D gesture recognition and tracking are highly desired features of interaction design in future mobile and smart environments. Specifically, in virtual/augmented reality applications, intuitive interaction with the physical space seems unavoidable, and 3D gestural interaction might be the most effective alternative to current input facilities such as touchscreens. In this paper, we introduce a novel solution for real-time 3D gesture-based interaction by finding the best match from an extremely large gesture database. This database includes images of various articulated hand gestures with the annotated 3D position/orientation parameters of the hand joints. Our unique matching algorithm is based on the hierarchical scoring of the low-level edge-orientation features between the query frames and the database, from which the best match is retrieved. Once the best match is found in the database at each moment, the pre-recorded 3D motion parameters can instantly be used for natural interaction. The proposed bare-hand interaction technology performs in real time with high accuracy using an ordinary camera.

  17. Design of an Embedded Multi-Camera Vision System—A Case Study in Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Valter Costa

    2018-02-01

    The purpose of this work is to explore the design principles for a Real-Time Robotic Multi-Camera Vision System, in a case study involving a real-world autonomous driving competition. Design practices from the vision and real-time research areas are applied to a Real-Time Robotic Vision application, thus exemplifying good algorithm design practices, the advantages of employing the “zero copy one pass” methodology, and the associated trade-offs leading to the selection of a controller platform. The vision tasks under study are: (i) recognition of a “flat” signal; and (ii) track following, requiring 3D reconstruction. This research first improves the algorithms used for these tasks and then selects the controller hardware. Optimization of the presented algorithms yielded improvements of 1.5 to 190 times, always with acceptable quality for the target application, with algorithm optimization being more important on lower-computing-power platforms. Results also include 3-cm and five-degree accuracy for lane tracking and 100% accuracy for signalling panel recognition, which are better than most results found in the literature for this application. Clear results comparing different PC platforms for the mentioned Robotic Vision tasks are also shown, demonstrating trade-offs between accuracy and computing power and leading to the proper choice of control platform. The presented design principles are portable to other applications where real-time constraints exist.

  18. Real-time microscopic 3D shape measurement based on optimized pulse-width-modulation binary fringe projection

    Science.gov (United States)

    Hu, Yan; Chen, Qian; Feng, Shijie; Tao, Tianyang; Li, Hui; Zuo, Chao

    2017-07-01

    In recent years, tremendous progress has been made in 3D measurement techniques, contributing to faster and more accurate 3D measurement. As a representative of these techniques, fringe projection profilometry (FPP) has become a commonly used method for real-time 3D measurement, such as real-time quality control and online inspection. To date, most related research has been concerned with macroscopic 3D measurement; microscopic 3D measurement, and especially real-time microscopic 3D measurement, is rarely reported. However, microscopic 3D measurement plays an important role in 3D metrology and is indispensable in applications that measure micro-scale objects, such as the accurate metrology of MEMS components in final devices to ensure their proper performance. In this paper, we propose a method which effectively combines optimized binary structured patterns with a number-theoretical phase unwrapping algorithm to realize real-time microscopic 3D measurement. A slight defocusing of our optimized binary patterns considerably alleviates the measurement error of four-step phase-shifting FPP, giving the binary patterns a performance comparable to ideal sinusoidal patterns. The static measurement accuracy reaches 8 μm, and the experimental results on a vibrating earphone diaphragm show that our system can realize real-time 3D measurement at 120 frames per second (FPS) with a measurement range of 8 mm × 6 mm laterally and 8 mm in depth.
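
    The four-step phase-shifting computation at the heart of this approach recovers the wrapped phase as phi = atan2(I4 - I2, I1 - I3) for fringe images shifted by pi/2. The sketch below verifies that formula on synthetic fringes; it does not include the paper's PWM pattern optimisation or its number-theoretical phase unwrapping.

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Wrapped phase from four fringe images with pi/2 phase shifts.

    I_k = A + B*cos(phi + (k-1)*pi/2), so phi = atan2(I4 - I2, I1 - I3).
    In the system described above, the sinusoidal fringes are approximated
    by slightly defocused, PWM-optimised binary patterns.
    """
    return np.arctan2(np.asarray(I4, float) - I2, np.asarray(I1, float) - I3)

# Hypothetical synthetic fringes over a small 4 x 6 patch.
x = np.linspace(0, 4 * np.pi, 24).reshape(4, 6)
imgs = [128 + 100 * np.cos(x + k * np.pi / 2) for k in range(4)]
phi_wrapped = four_step_phase(*imgs)
print(np.allclose(np.mod(phi_wrapped, 2 * np.pi), np.mod(x, 2 * np.pi)))
```

    In the actual system, the wrapped phase would then be unwrapped with the number-theoretical algorithm mentioned above and converted to depth through the projector-camera calibration.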

  19. Feasibility of the integration of CRONOS, a 3-D neutronics code, into real-time simulators

    International Nuclear Information System (INIS)

    Ragusa, J.C.

    2001-01-01

    In its effort to contribute to nuclear power plant safety, CEA proposes the integration of an engineering grade 3-D neutronics code into a real-time plant analyser. This paper describes the capabilities of the neutronics code CRONOS to achieve a fast running performance. First, we will present current core models in simulators and explain their drawbacks. Secondly, the main features of CRONOS's spatial-kinetics methods will be reviewed. We will then present an optimum core representation with respect to mesh size, choice of finite elements (FE) basis and execution time, for accurate results as well as the multi 1-D thermal-hydraulics (T/H) model developed to take into account 3-D effects in updating the cross-sections. A Main Steam Line Break (MSLB) End-of-Life (EOL) Hot-Zero-Power (HZP) accident will be used as an example, before we conclude with the perspectives of integrating CRONOS's 3-D core model into real-time simulators. (author)

  20. A real-time 3D scanning system for pavement distortion inspection

    International Nuclear Information System (INIS)

    Li, Qingguang; Yao, Ming; Yao, Xun; Xu, Bugao

    2010-01-01

    Pavement distortions, such as rutting and shoving, are the common pavement distress problems that need to be inspected and repaired in a timely manner to ensure ride quality and traffic safety. This paper introduces a real-time, low-cost inspection system devoted to detecting these distress features using high-speed 3D transverse scanning techniques. The detection principle is the dynamic generation and characterization of the 3D pavement profile based on structured light triangulation. To improve the accuracy of the system, a multi-view coplanar scheme is employed in the calibration procedure so that more feature points can be used and distributed across the field of view of the camera. A sub-pixel line extraction method is applied for the laser stripe location, which includes filtering, edge detection and spline interpolation. The pavement transverse profile is then generated from the laser stripe curve and approximated by line segments. The second-order derivatives of the segment endpoints are used to identify the feature points of possible distortions. The system can output the real-time measurements and 3D visualization of rutting and shoving distress in a scanned pavement
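
    One common sub-pixel stripe locator consistent with the description above is a per-column grey-level centroid; the sketch below shows that step only (the full system also applies filtering, edge detection and spline interpolation), on a synthetic stripe image with assumed threshold and stripe parameters.

```python
import numpy as np

def stripe_subpixel_centers(image, threshold=50):
    """Sub-pixel laser-stripe row position for every image column.

    Uses a grey-level centroid (centre of mass) over pixels above `threshold`,
    one of several common sub-pixel extraction strategies.
    Returns an array of length W with NaN where no stripe pixel was found.
    """
    img = np.asarray(image, dtype=float)
    weights = np.where(img >= threshold, img, 0.0)
    rows = np.arange(img.shape[0])[:, None]
    col_sum = weights.sum(axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        centers = (weights * rows).sum(axis=0) / col_sum
    centers[col_sum == 0] = np.nan
    return centers

# Hypothetical frame: a bright Gaussian stripe centred around row 40.5.
H, W = 120, 200
r = np.arange(H)[:, None]
frame = 255.0 * np.exp(-0.5 * ((r - 40.5) / 2.0) ** 2) * np.ones((1, W))
print(stripe_subpixel_centers(frame)[:5])    # ~40.5 for every column
```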

  1. Feasibility of the integration of CRONOS, a 3-D neutronics code, into real-time simulators

    Energy Technology Data Exchange (ETDEWEB)

    Ragusa, J.C. [CEA Saclay, Dept. de Mecanique et de Technologie, 91 - Gif-sur-Yvette (France)

    2001-07-01

    In its effort to contribute to nuclear power plant safety, CEA proposes the integration of an engineering grade 3-D neutronics code into a real-time plant analyser. This paper describes the capabilities of the neutronics code CRONOS to achieve a fast running performance. First, we will present current core models in simulators and explain their drawbacks. Secondly, the main features of CRONOS's spatial-kinetics methods will be reviewed. We will then present an optimum core representation with respect to mesh size, choice of finite elements (FE) basis and execution time, for accurate results as well as the multi 1-D thermal-hydraulics (T/H) model developed to take into account 3-D effects in updating the cross-sections. A Main Steam Line Break (MSLB) End-of-Life (EOL) Hot-Zero-Power (HZP) accident will be used as an example, before we conclude with the perspectives of integrating CRONOS's 3-D core model into real-time simulators. (author)

  2. Real-time tracking with a 3D-Flow processor array

    International Nuclear Information System (INIS)

    Crosetto, D.

    1993-06-01

    Real-time track finding has to date been performed with CAMs (Content Addressable Memories) or with fast coincidence logic, because the computing approach was thought to have much slower performance. Advances in technology, together with a new architectural approach, make it feasible to also explore the computing technique for real-time track finding, which offers, relative to the CAM approach, the advantage of implementing algorithms that can determine more parameters, such as the sagitta, curvature, pt, etc. The report describes real-time track finding using a new computing technique based on the 3D-Flow array processor system. This system consists of a fixed interconnection architecture scheme, allowing flexible algorithm implementation on a scalable platform. The 3D-Flow parallel processing system for track finding is scalable in size and performance by increasing the number of processors, the processor speed, or the number of pipelined stages. The present article describes the conceptual idea and the design stage of the project

  3. Real-time tracking with a 3D-flow processor array

    International Nuclear Information System (INIS)

    Crosetto, D.

    1993-01-01

    Real-time track finding has to date been performed with CAMs (Content Addressable Memories) or with fast coincidence logic, because the computing approach was thought to have much slower performance. Advances in technology, together with a new architectural approach, make it feasible to also explore the computing technique for real-time track finding, which offers, relative to the CAM approach, the advantage of implementing algorithms that can determine more parameters, such as the sagitta, curvature, pt, etc. This report describes real-time track finding using a new computing technique based on the 3D-Flow array processor system. This system consists of a fixed interconnection architecture scheme, allowing flexible algorithm implementation on a scalable platform. The 3D-Flow parallel processing system for track finding is scalable in size and performance by increasing the number of processors, the processor speed, or the number of pipelined stages. The present article describes the conceptual idea and the design stage of the project

  4. ArtifactVis2: Managing real-time archaeological data in immersive 3D environments

    KAUST Repository

    Smith, Neil

    2013-10-01

    In this paper, we present a stereoscopic research and training environment for archaeologists called ArtifactVis2. This application enables the management and visualization of diverse types of cultural datasets within a collaborative virtual 3D system. The archaeologist is fully immersed in a large-scale visualization of on-going excavations. Massive 3D datasets are seamlessly rendered in real-time with field-recorded GIS data, 3D artifact scans and digital photography. Dynamic content can be visualized and cultural analytics can be performed on archaeological datasets collected through a rigorous digital archaeological methodology. The virtual collaborative environment provides a menu-driven query system and the ability to annotate, markup, measure, and manipulate any of the datasets. These features enable researchers to re-experience and analyze the minute details of an archaeological site's excavation. It enhances their visual capacity to recognize deep patterns and structures and perceive changes and reoccurrences. As a complement to and development of previous work in the field of 3D immersive archaeological environments, ArtifactVis2 provides a GIS-based immersive environment that taps directly into archaeological datasets to investigate cultural and historical issues of ancient societies and cultural heritage in ways not possible before. © 2013 IEEE.

  5. 2D array transducers for real-time 3D ultrasound guidance of interventional devices

    Science.gov (United States)

    Light, Edward D.; Smith, Stephen W.

    2009-02-01

    We describe catheter ring arrays for real-time 3D ultrasound guidance of devices such as vascular grafts, heart valves and vena cava filters. We have constructed several prototypes operating at 5 MHz and consisting of 54 elements using the W.L. Gore & Associates, Inc. micro-miniature ribbon cables. We have recently constructed a new transducer using a braided wiring technology from Precision Interconnect. This transducer consists of 54 elements at 4.8 MHz with pitch of 0.20 mm and typical -6 dB bandwidth of 22%. In all cases, the transducer and wiring assembly were integrated with an 11 French catheter of a Cook Medical deployment device for vena cava filters. Preliminary in vivo and in vitro testing is ongoing including simultaneous 3D ultrasound and x-ray fluoroscopy.

  6. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    Science.gov (United States)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

    In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ASDA) is proposed for real-time 3-dimensional (3D) processing. The proposed algorithm can reduce the processing time of disparity estimation by selecting an adaptive disparity search range, and it can also increase the quality of 3D imaging. That is, by adaptively predicting the mutual correlation between the stereo image pair using the proposed algorithm, the bandwidth of the stereo input image pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed using a reference image and disparity vectors. Experiments with the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of a reconstructed image by about 4.8 dB compared with conventional algorithms, and reduces the synthesizing time of a reconstructed image by about 7.02 s compared with conventional algorithms.
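
    The core idea of narrowing the disparity search range can be illustrated with a generic block-matching sketch in Python; the window size, search ranges, and SAD cost below are illustrative choices, not the parameters of the paper's algorithm.

```python
import numpy as np

def block_match_adaptive(left, right, block=8, full_range=64, local_range=8):
    """Row-wise block matching with an adaptively narrowed disparity search.

    Illustrative only: the search window for each block is centred on the
    disparity found for the previous block in the same row, falling back to
    a full search for the first block.
    """
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=np.int32)
    for by in range(h // block):
        prev_d = None
        for bx in range(w // block):
            y0, x0 = by * block, bx * block
            patch = left[y0:y0 + block, x0:x0 + block].astype(np.float32)
            if prev_d is None:
                candidates = range(0, full_range)          # full search
            else:
                lo = max(0, prev_d - local_range)          # narrowed search
                candidates = range(lo, prev_d + local_range + 1)
            best_d, best_cost = 0, np.inf
            for d in candidates:
                if x0 - d < 0:
                    continue
                ref = right[y0:y0 + block, x0 - d:x0 - d + block].astype(np.float32)
                cost = np.abs(patch - ref).sum()           # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[by, bx] = best_d
            prev_d = best_d
    return disp
```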

  7. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    OpenAIRE

    Richard Chiou; Yongjin (james) Kwon; Tzu-Liang (bill) Tseng; Robin Kizirian; Yueh-Ting Yang

    2010-01-01

    This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-Dimensional scheme of viewing the robotic laboratory has been introduced in addition to the remote c...

  8. Demo: Distributed Real-Time Generative 3D Hand Tracking using Edge GPGPU Acceleration

    DEFF Research Database (Denmark)

    Qammaz, Ammar; Kosta, Sokol; Kyriazis, Nikolaos

    2018-01-01

    This work demonstrates a real-time 3D hand tracking application that runs via computation offloading. The proposed framework enables the application to run on low-end mobile devices such as laptops and tablets, despite the fact that they lack the sufficient hardware to perform the required computations locally. The network connection takes the place of a GPGPU accelerator and sharing resources with a larger workstation becomes the acceleration mechanism. The unique properties of a generative optimizer are examined and constitute a challenging use-case, since the requirement for real...

  9. Real-time interactive 3D manipulation of particles viewed in two orthogonal observation planes

    DEFF Research Database (Denmark)

    Perch-Nielsen, I.; Rodrigo, P.J.; Glückstad, J.

    2005-01-01

    The generalized phase contrast (GPC) method has been applied to transform a single TEM00 beam into a manifold of counterpropagating-beam traps capable of real-time interactive manipulation of multiple microparticles in three dimensions (3D). This paper reports on the use of low numerical aperture ... for imaging through each of the two opposing objective lenses. As a consequence of the large working distance, simultaneous monitoring of the trapped particles in a second orthogonal observation plane is demonstrated. (C) 2005 Optical Society of America.

  10. 3D Assessment of Features Associated With Transvalvular Aortic Regurgitation After TAVR: A Real-Time 3D TEE Study.

    Science.gov (United States)

    Shibayama, Kentaro; Mihara, Hirotsugu; Jilaihawi, Hasan; Berdejo, Javier; Harada, Kenji; Itabashi, Yuji; Siegel, Robert; Makkar, Raj R; Shiota, Takahiro

    2016-02-01

    This study of 3-dimensional (3D) transesophageal echocardiography (TEE) aimed to demonstrate features associated with transvalvular aortic regurgitation (AR) after transcatheter aortic valve replacement (TAVR) and to confirm that a gap between the native aortic annulus and the prosthesis is associated with paravalvular AR. The mechanism of AR after TAVR, particularly that of transvalvular AR, has not been evaluated adequately. All patients with severe aortic stenosis who underwent TAVR with the Sapien device (Edwards Lifesciences, Irvine, California) had 3D TEE of the pre-procedural native aortic annulus and the post-procedural prosthetic valve. In the 201 patients studied, the total AR was mild in 67 patients (33%), moderate in 21 patients (10%), and severe in no patients. There were 20 patients with transvalvular AR and 82 patients with paravalvular AR. Fourteen patients had both transvalvular and paravalvular AR. Patients with transvalvular AR had larger prosthetic expansion (p ...), a more elliptical prosthetic shape at the prosthetic commissure level (p ...), and an anti-anatomical position of the prosthetic commissures in relation to the native commissures, compared with the patients without transvalvular AR. Age (odds ratio [OR]: 1.05; 95% confidence interval [CI]: 1.01 to 1.09; p ...) ... 3D TEE successfully demonstrated the features associated with transvalvular AR, such as large prosthetic expansion, elliptical prosthetic shape, and anti-anatomical position of the prosthesis. Additionally, effective area oversizing was associated with paravalvular AR. Copyright © 2016 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  11. GPU acceleration towards real-time image reconstruction in 3D tomographic diffractive microscopy

    Science.gov (United States)

    Bailleul, J.; Simon, B.; Debailleul, M.; Liu, H.; Haeberlé, O.

    2012-06-01

    Phase microscopy techniques have regained interest by allowing the observation of unprepared specimens with excellent temporal resolution. Tomographic diffractive microscopy is an extension of holographic microscopy which permits 3D observations with a finer resolution than incoherent light microscopes. Specimens are imaged by a series of 2D holograms: their accumulation progressively fills the range of frequencies of the specimen in Fourier space. A 3D inverse FFT eventually provides a spatial image of the specimen. Consequently, both acquisition and reconstruction must be completed before an image is produced, which precludes real-time control of the observed specimen. The MIPS Laboratory has built a tomographic diffractive microscope with an unsurpassed 130 nm resolution but a low imaging speed of no less than one minute. Afterwards, a high-end PC reconstructs the 3D image in 20 seconds. We now aim for an interactive system providing preview images during the acquisition for monitoring purposes. We first present a prototype implementing this solution on CPU: acquisition and reconstruction are tied in a producer-consumer scheme, sharing common data in CPU memory. Then we present a prototype dispatching some reconstruction tasks to the GPU in order to take advantage of SIMD parallelization for FFT and higher bandwidth for filtering operations. The CPU scheme takes 6 seconds for a 3D image update, while the GPU scheme can go down to 2 seconds or even less, depending on the GPU class. This opens opportunities for 4D imaging of living organisms or crystallization processes. We also consider the relevance of GPU for 3D image interaction in our specific conditions.
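
    The reconstruction principle described above (accumulating 2D hologram spectra in Fourier space, then applying a 3D inverse FFT) can be sketched in a few lines of NumPy. This is a deliberately simplified toy: a real tomographic diffractive microscope maps each spectrum onto a cap of the Ewald sphere rather than onto a single kz-plane, and the GPU pipeline of the paper is not reproduced here.

```python
import numpy as np

def reconstruct(holograms, n=128):
    """Toy reconstruction: accumulate 2D hologram spectra into a 3D
    Fourier volume, then take a 3D inverse FFT.

    Each hologram (an n-by-n array) simply fills one kz-plane here,
    which keeps the sketch short but is not physically exact.
    """
    volume_ft = np.zeros((n, n, n), dtype=np.complex64)
    counts = np.zeros((n, n, n), dtype=np.float32)
    for kz, holo in enumerate(holograms):             # one plane per hologram
        spec = np.fft.fftshift(np.fft.fft2(holo))
        volume_ft[:, :, kz % n] += spec
        counts[:, :, kz % n] += 1.0
    volume_ft /= np.maximum(counts, 1.0)              # average overlapping data
    return np.fft.ifftn(np.fft.ifftshift(volume_ft))

# usage: volume = reconstruct(list_of_128x128_numpy_holograms)
```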

  12. Real-time 3D radiation risk assessment supporting simulation of work in nuclear environments

    International Nuclear Information System (INIS)

    Szoke, I; Louka, M N; Bryntesen, T R; Bratteli, J; Edvardsen, S T; RøEitrheim, K K; Bodor, K

    2014-01-01

    This paper describes the latest developments at the Institute for Energy Technology (IFE) in Norway, in the field of real-time 3D (three-dimensional) radiation risk assessment for the support of work simulation in nuclear environments. 3D computer simulation can greatly facilitate efficient work planning, briefing, and training of workers. It can also support communication within and between work teams, and with advisors, regulators, the media and public, at all the stages of a nuclear installation’s lifecycle. Furthermore, it is also a beneficial tool for reviewing current work practices in order to identify possible gaps in procedures, as well as to support the updating of international recommendations, dissemination of experience, and education of the current and future generation of workers. IFE has been involved in research and development into the application of 3D computer simulation and virtual reality (VR) technology to support work in radiological environments in the nuclear sector since the mid 1990s. During this process, two significant software tools have been developed, the VRdose system and the Halden Planner, and a number of publications have been produced to contribute to improving the safety culture in the nuclear industry. This paper describes the radiation risk assessment techniques applied in earlier versions of the VRdose system and the Halden Planner, for visualising radiation fields and calculating dose, and presents new developments towards implementing a flexible and up-to-date dosimetric package in these 3D software tools, based on new developments in the field of radiation protection. The latest versions of these 3D tools are capable of more accurate risk estimation, permit more flexibility via a range of user choices, and are applicable to a wider range of irradiation situations than their predecessors. (paper)

  13. Real-time 3D radiation risk assessment supporting simulation of work in nuclear environments.

    Science.gov (United States)

    Szőke, I; Louka, M N; Bryntesen, T R; Bratteli, J; Edvardsen, S T; RøEitrheim, K K; Bodor, K

    2014-06-01

    This paper describes the latest developments at the Institute for Energy Technology (IFE) in Norway, in the field of real-time 3D (three-dimensional) radiation risk assessment for the support of work simulation in nuclear environments. 3D computer simulation can greatly facilitate efficient work planning, briefing, and training of workers. It can also support communication within and between work teams, and with advisors, regulators, the media and public, at all the stages of a nuclear installation's lifecycle. Furthermore, it is also a beneficial tool for reviewing current work practices in order to identify possible gaps in procedures, as well as to support the updating of international recommendations, dissemination of experience, and education of the current and future generation of workers. IFE has been involved in research and development into the application of 3D computer simulation and virtual reality (VR) technology to support work in radiological environments in the nuclear sector since the mid 1990s. During this process, two significant software tools have been developed, the VRdose system and the Halden Planner, and a number of publications have been produced to contribute to improving the safety culture in the nuclear industry. This paper describes the radiation risk assessment techniques applied in earlier versions of the VRdose system and the Halden Planner, for visualising radiation fields and calculating dose, and presents new developments towards implementing a flexible and up-to-date dosimetric package in these 3D software tools, based on new developments in the field of radiation protection. The latest versions of these 3D tools are capable of more accurate risk estimation, permit more flexibility via a range of user choices, and are applicable to a wider range of irradiation situations than their predecessors.

  14. 3D Printed "Earable" Smart Devices for Real-Time Detection of Core Body Temperature.

    Science.gov (United States)

    Ota, Hiroki; Chao, Minghan; Gao, Yuji; Wu, Eric; Tai, Li-Chia; Chen, Kevin; Matsuoka, Yasutomo; Iwai, Kosuke; Fahad, Hossain M; Gao, Wei; Nyein, Hnin Yin Yin; Lin, Liwei; Javey, Ali

    2017-07-28

    Real-time detection of basic physiological parameters such as blood pressure and heart rate is an important target in wearable smart devices for healthcare. Among these, the core body temperature is one of the most important basic medical indicators of fever, insomnia, fatigue, metabolic functionality, and depression. However, traditional wearable temperature sensors are based upon the measurement of skin temperature, which can vary dramatically from the true core body temperature. Here, we demonstrate a three-dimensional (3D) printed wearable "earable" smart device that is designed to be worn on the ear to track core body temperature from the tympanic membrane (i.e., ear drum) based on an infrared sensor. The device is fully integrated with data processing circuits and a wireless module for standalone functionality. Using this smart earable device, we demonstrate that the core body temperature can be accurately monitored regardless of the environment and activity of the user. In addition, a microphone and actuator are also integrated so that the device can also function as a bone conduction hearing aid. Using 3D printing as the fabrication method enables the device to be customized for the wearer for more personalized healthcare. This smart device provides an important advance in realizing personalized health care by enabling real-time monitoring of one of the most important medical parameters, core body temperature, employed in preliminary medical screening tests.

  15. Real-Time 3D Tracking and Reconstruction on Mobile Phones.

    Science.gov (United States)

    Prisacariu, Victor Adrian; Kähler, Olaf; Murray, David W; Reid, Ian D

    2015-05-01

    We present a novel framework for jointly tracking a camera in 3D and reconstructing the 3D model of an observed object. Due to the region based approach, our formulation can handle untextured objects, partial occlusions, motion blur, dynamic backgrounds and imperfect lighting. Our formulation also allows for a very efficient implementation which achieves real-time performance on a mobile phone, by running the pose estimation and the shape optimisation in parallel. We use a level set based pose estimation but completely avoid the, typically required, explicit computation of a global distance. This leads to tracking rates of more than 100 Hz on a desktop PC and 30 Hz on a mobile phone. Further, we incorporate additional orientation information from the phone's inertial sensor which helps us resolve the tracking ambiguities inherent to region based formulations. The reconstruction step first probabilistically integrates 2D image statistics from selected keyframes into a 3D volume, and then imposes coherency and compactness using a total variational regularisation term. The global optimum of the overall energy function is found using a continuous max-flow algorithm and we show that, similar to tracking, the integration of per voxel posteriors instead of likelihoods improves the precision and accuracy of the reconstruction.

  16. Realistic 3D Terrain Roaming and Real-Time Flight Simulation

    Science.gov (United States)

    Que, Xiang; Liu, Gang; He, Zhenwen; Qi, Guang

    2014-12-01

    This paper presents an integrated method that provides access to the current status and a dynamic visible-scanning topography, to enhance interactivity during terrain roaming and real-time flight simulation. An algorithm integrating digital elevation model and digital ortho-photo map data is proposed as the basis of our approach to building a realistic 3D terrain scene. A new technique using render-to-texture and a head-up display is used to generate the navigation pane. In the flight simulation, in order to eliminate flying "jumps", we employ a multidimensional linear interpolation method to adjust the camera parameters dynamically and smoothly. Meanwhile, based on the principle of scanning laser imaging, we draw pseudo-color figures by scanning the topography in different directions according to the real-time flight status. Simulation results demonstrate that the proposed algorithm is promising for applications and that the method can improve the rendering effect and enhance dynamic interaction during real-time flight.

  17. Autonomous and 3D real-time multi-beam manipulation in a microfluidic environment

    DEFF Research Database (Denmark)

    Perch-Nielsen, I.; Rodrigo, P.J.; Alonzo, C.A.

    2006-01-01

    The Generalized Phase Contrast (GPC) method of optical 3D manipulation has previously been used for controlled spatial manipulation of live biological specimens in real time. These biological experiments were carried out over a time-span of several hours while an operator intermittently optimized the optical system. Here we present GPC-based optical micromanipulation in a microfluidic system where trapping experiments are computer-automated and thereby capable of running with only limited supervision. The system is able to dynamically detect living yeast cells using a computer-interfaced CCD camera, and respond to this by instantly creating traps at the positions of the spotted cells streaming at flow velocities that would be difficult for a human operator to handle. With the added ability to control flow rates, experiments were also carried out to confirm the theoretically predicted axially dependent...

  18. IPS – A SYSTEM FOR REAL-TIME NAVIGATION AND 3D MODELING

    Directory of Open Access Journals (Sweden)

    D. Grießbach

    2012-07-01

    Full Text Available Reliable navigation and 3D modeling are a necessary requirement for any autonomous system in real-world scenarios. The German Aerospace Center (DLR) developed a system providing precise information about the local position and orientation of a mobile platform as well as three-dimensional information about its environment in real time. This system, called Integral Positioning System (IPS), can be applied in indoor and outdoor environments. To achieve high precision, reliability, integrity and availability, a multi-sensor approach was chosen. The important role of sensor data synchronization, system calibration and spatial referencing is emphasized because the data from several sensors has to be fused using a Kalman filter. A hardware operating system (HW-OS) is presented that facilitates the low-level integration of different interfaces. The benefit of this approach is an increased precision of synchronization at the expense of additional engineering costs. It will be shown that the additional effort is leveraged by the new design concept, since the HW-OS methodology allows a proven, flexible and fast design process, a high re-usability of common components and consequently a higher reliability within the low-level sensor fusion. Another main focus of the paper is the IPS software. DLR developed, implemented and tested a flexible and extensible software concept for data grabbing, efficient data handling and data preprocessing (e.g. image rectification), the latter being essential for thematic data processing. Standard outputs of IPS are a trajectory of the moving platform and a high-density 3D point cloud of the current environment. This information is provided in real time. Based on these results, information processing on more abstract levels can be executed.
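
    Since the record notes that data from several sensors is fused with a Kalman filter, a minimal linear Kalman predict/update step is sketched below for orientation. The state layout, matrices, and noise models of the actual IPS filter are not published here, so everything in the sketch is a generic assumption.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.

    x, P : state estimate and covariance
    z    : new measurement (e.g. an IMU- or camera-derived quantity)
    F, H : state-transition and measurement matrices
    Q, R : process and measurement noise covariances
    """
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```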

  19. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    Science.gov (United States)

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

    We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
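
    The velocity-driven tracking idea described above (torques computed from desired joint angular velocities rather than tuned PD position errors) can be condensed into a one-line control law per joint. The gain and torque limit below are purely illustrative placeholders, not the paper's values.

```python
import numpy as np

def velocity_driven_torques(omega_desired, omega_current, k_v=50.0, tau_max=200.0):
    """Compute joint torques that drive the current joint angular velocities
    toward the desired ones (illustrative gains, not the paper's values)."""
    tau = k_v * (np.asarray(omega_desired) - np.asarray(omega_current))
    return np.clip(tau, -tau_max, tau_max)   # saturate torques for safety

# example: three joints
print(velocity_driven_torques([1.0, 0.0, -0.5], [0.8, 0.1, -0.4]))
```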

  20. Spatiotemporal Segmentation and Modeling of the Mitral Valve in Real-Time 3D Echocardiographic Images.

    Science.gov (United States)

    Pouch, Alison M; Aly, Ahmed H; Lai, Eric K; Yushkevich, Natalie; Stoffers, Rutger H; Gorman, Joseph H; Cheung, Albert T; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2017-09-01

    Transesophageal echocardiography is the primary imaging modality for preoperative assessment of mitral valves with ischemic mitral regurgitation (IMR). While there are well known echocardiographic insights into the 3D morphology of mitral valves with IMR, such as annular dilation and leaflet tethering, less is understood about how quantification of valve dynamics can inform surgical treatment of IMR or predict short-term recurrence of the disease. As a step towards filling this knowledge gap, we present a novel framework for 4D segmentation and geometric modeling of the mitral valve in real-time 3D echocardiography (rt-3DE). The framework integrates multi-atlas label fusion and template-based medial modeling to generate quantitatively descriptive models of valve dynamics. The novelty of this work is that temporal consistency in the rt-3DE segmentations is enforced during both the segmentation and modeling stages with the use of groupwise label fusion and Kalman filtering. The algorithm is evaluated on rt-3DE data series from 10 patients: five with normal mitral valve morphology and five with severe IMR. In these 10 data series that total 207 individual 3DE images, each 3DE segmentation is validated against manual tracing and temporal consistency between segmentations is demonstrated. The ultimate goal is to generate accurate and consistent representations of valve dynamics that can both visually and quantitatively provide insight into normal and pathological valve function.

  1. An inexpensive underwater mine countermeasures simulator with real-time 3D after action review

    Directory of Open Access Journals (Sweden)

    Robert Stone

    2016-10-01

    Full Text Available This paper presents the results of a concept capability demonstration pilot study, the aim of which was to investigate how inexpensive gaming software and hardware technologies could be exploited in the development and evaluation of a simulator prototype for training Royal Navy mine clearance divers, specifically focusing on the detection and accurate reporting of the location and condition of underwater ordnance. The simulator was constructed using the Blender open source 3D modelling toolkit and game engine, and featured not only an interactive 3D editor for underwater scenario generation by instructors, but also a real-time, 3D After Action Review (AAR) system for formative assessment and feedback. The simulated scenarios and AAR architecture were based on early human factors observations and briefings conducted at the UK's Defence Diving School (DDS), an organisation that provides basic military diving training for all Royal Navy and Army (Royal Engineers) divers. An experimental pilot study was undertaken to determine whether or not basic navigational and mine detection components of diver performance could be improved as a result of exposing participants to the AAR system, delivered between simulated diving scenarios. The results suggest that the provision of AAR was accompanied by significant performance improvements in the positive identification of simulated underwater ordnance (in contrast to non-ordnance objects) and on participants' description of their location, their immediate in-water or seabed context and their structural condition. Only marginal improvements were found with participants' navigational performance in terms of their deviation accuracies from a pre-programmed expert search path. Overall, this project contributes to the growing corpus of evidence supporting the development of simulators that demonstrate the value of exploiting open source gaming software and the significance of adopting established games design

  2. Real-Time 3D Face Acquisition Using Reconfigurable Hybrid Architecture

    Directory of Open Access Journals (Sweden)

    Mitéran Johel

    2007-01-01

    Full Text Available Acquiring 3D data of the human face is a general problem with applications in face recognition, virtual reality, and many other areas. It can be solved using stereovision, a technique which consists in acquiring data in three dimensions from two cameras. The aim is to implement an algorithmic chain which makes it possible to obtain a three-dimensional space from two two-dimensional spaces: two images coming from the two cameras. Several implementations have already been considered. We propose a new, simple, real-time implementation based on a hybrid architecture (FPGA-DSP), allowing for embedded and reconfigurable processing. We then show our method, which provides a dense and reliable depth map of the face and can be implemented on an embedded architecture. A study of various architectures led us to a judicious choice that yields the desired result. The real-time data processing is implemented in an embedded architecture. We obtain a dense face disparity map, precise enough for the considered applications (multimedia, virtual worlds, biometrics), using a reliable method.

  3. Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion.

    Science.gov (United States)

    Leimkuhler, Thomas; Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter

    2018-06-01

    We propose a system to infer binocular disparity from a monocular video stream in real-time. Different from classic reconstruction of physical depth in computer vision, we compute perceptually plausible disparity that is numerically inaccurate but results in a very similar overall depth impression, with a plausible overall layout, sharp edges, fine details, and agreement between luminance and disparity. We use several simple monocular cues to estimate disparity maps and confidence maps of low spatial and temporal resolution in real-time. These are complemented by spatially-varying, appearance-dependent and class-specific disparity prior maps, learned from example stereo images. Scene classification selects this prior at runtime. Fusion of prior and cues is done by means of robust MAP inference on a dense spatio-temporal conditional random field with high spatial and temporal resolution. Using normal distributions allows this in constant-time, parallel per-pixel work. We compare our approach to previous 2D-to-3D conversion systems in terms of different metrics, as well as a user study and validate our notion of perceptually plausible disparity.
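
    The constant-time, per-pixel fusion enabled by modelling each cue as a normal distribution can be sketched as an inverse-variance weighted product of Gaussians. The spatio-temporal CRF regularisation and the learned priors of the paper are omitted, so treat this as a simplified illustration.

```python
import numpy as np

def fuse_gaussian_cues(means, variances):
    """Per-pixel fusion of several disparity cues, each modelled as a
    Gaussian (mean, variance). The product of Gaussians gives a fused
    mean weighted by inverse variances, a constant-time per-pixel step.

    means, variances: arrays of shape (num_cues, H, W).
    The CRF smoothing used in the paper is not shown here.
    """
    w = 1.0 / np.asarray(variances)                # confidence weights
    fused_var = 1.0 / w.sum(axis=0)
    fused_mean = (w * np.asarray(means)).sum(axis=0) * fused_var
    return fused_mean, fused_var
```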

  4. Monitoring the effects of doxorubicin on 3D-spheroid tumor cells in real-time

    Directory of Open Access Journals (Sweden)

    Baek N

    2016-11-01

    Full Text Available Recently, increasing numbers of cell culture experiments with 3D spheroids have presented better-correlating results in vivo than traditional 2D cell culture systems. 3D spheroids could offer a simple and highly reproducible model that exhibits many characteristics of natural tissue, such as the production of extracellular matrix. In this paper numerous cell lines were screened and selected depending on their ability to form and maintain a spherical shape. The effects of increasing concentrations of doxorubicin (DXR) on the integrity and viability of the selected spheroids were then measured at regular intervals and in real time. In total 12 cell lines, adenocarcinomic alveolar basal epithelial (A549), muscle (C2C12), prostate (DU145), testis (F9), pituitary epithelial-like (GH3), cervical cancer (HeLa), HeLa contaminant (HEp2), embryo (NIH3T3), embryo (PA317), neuroblastoma (SH-SY5Y), osteosarcoma (U2OS), and embryonic kidney cells (293T), were screened. Out of the 12, 8 cell lines, NIH3T3, C2C12, 293T, SH-SY5Y, A549, HeLa, PA317, and U2OS, formed regular spheroids, and the effects of DXR on these structures were measured at regular intervals. Finally, 5 cell lines, A549, HeLa, SH-SY5Y, U2OS, and 293T, were selected for real-time monitoring and the effects of DXR treatment on their behavior were continuously recorded for 5 days. A potential correlation regarding the effects of DXR on spheroid viability and ATP production was measured on days 1, 3, and 5. Cytotoxicity of DXR seemed to occur after endocytosis, since cellular activities and ATP production were still viable after 1 day of treatment in all spheroids except SH-SY5Y. Both cellular activity and ATP production were

  5. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    Science.gov (United States)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

    This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real time using 3D digital image correlation (DIC), with two examples of medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the computations of the integer-pixel search. Experiments were carried out and the results indicated that the new method improved the computational efficiency by about 4-10 times in comparison with the traditional DIC method. The system is aimed at orthognathic surgery navigation, in order to track the maxilla segment after a LeFort I osteotomy. Experiments showed that the noise for a static point was at the level of 10^-3 mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin-surface shape evaluation of a hand during finger stretching exercises, which indicates great potential for tracking muscle and skin movements.
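
    One standard way to obtain the 6-degree-of-freedom pose from the 3D marker positions that DIC provides is a least-squares (Kabsch/SVD) fit between reference and current marker coordinates, sketched below. The paper does not state that this exact estimator is used, so it is shown only as a plausible stand-in.

```python
import numpy as np

def rigid_pose(ref_pts, cur_pts):
    """Least-squares rotation R and translation t mapping reference marker
    positions onto their current positions (Kabsch algorithm).

    ref_pts, cur_pts: (N, 3) arrays of corresponding 3D marker coordinates,
    so that cur ~ R @ ref + t.
    """
    ref_c = ref_pts - ref_pts.mean(axis=0)
    cur_c = cur_pts - cur_pts.mean(axis=0)
    U, _, Vt = np.linalg.svd(ref_c.T @ cur_c)
    # correct for a possible reflection so R is a proper rotation
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cur_pts.mean(axis=0) - R @ ref_pts.mean(axis=0)
    return R, t
```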

  6. SU-E-J-237: Real-Time 3D Anatomy Estimation From Undersampled MR Acquisitions

    Energy Technology Data Exchange (ETDEWEB)

    Glitzner, M; Lagendijk, J; Raaymakers, B; Crijns, S [University Medical Center Utrecht, Utrecht (Netherlands); Senneville, B Denis de [University Medical Center Utrecht, Utrecht (Netherlands); Mathematical Institute of Bordeaux, University of Bordeaux, Talence Cedex (France)

    2015-06-15

    Recent developments made MRI-guided radiotherapy feasible. Performing simultaneous imaging during fractions can provide information about changing anatomy by means of deformable image registration, for either immediate plan adaptations or accurate dose accumulation on the changing anatomy. In 3D MRI, however, acquisition time is considerable and scales with resolution. Furthermore, intra-scan motion degrades image quality. In this work, we investigate the sensitivity of registration quality to image resolution: potentially, by employing spatial undersampling, the acquisition time of MR images for the purpose of deformable image registration can be reduced significantly. On a volunteer, 3D-MR imaging data was sampled in a navigator-gated manner, acquiring one axial volume (360×260×100 mm³) per 3 s during the exhale phase. A T1-weighted FFE sequence was used with an acquired voxel size of 2.5 mm³ for a duration of 17 min. Deformation vector fields were evaluated for 100 imaging cycles with respect to the initial anatomy using deformable image registration based on optical flow. Subsequently, the imaging data was downsampled by a factor of 2, simulating a fourfold acquisition speed. Displacements of the downsampled volumes were then calculated by the same process. In kidney-liver boundaries and the region around the stomach/duodenum, prominent organ drifts could be observed in both the original and the downsampled imaging data. An increasing displacement of approximately 2 mm was observed for the kidney, while an area around the stomach showed sudden displacements of 4 mm. Comparison of the motile points over time showed high reproducibility between the displacements of high-resolution and downsampled volumes: over a 17 min acquisition, the componentwise RMS error was not more than 0.38 mm. Based on the synthetic experiments, 3D nonrigid image registration shows little sensitivity to image resolution and the displacement information is preserved even when halving the

  7. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    International Nuclear Information System (INIS)

    Kanphet, J; Suriyapee, S; Sanghangthum, T; Kumkhwao, J; Wisetrintong, M; Dumrongkijudom, N

    2016-01-01

    The purpose of this study was to determine patient setup uncertainties in deep-inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. The patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motion. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motion were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motion were 1.18 mm and 0.53 mm, respectively. The interfraction motion was more pronounced than the intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, and has less impact due to the stability of organ movement under DIBH. The systematic reproducibility is also half of the random error, because the high efficiency of a modern linac can reduce the systematic uncertainty effectively, while the random errors are uncontrollable. (paper)
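
    The abstract does not spell out how the systematic and random components were computed; a common convention (the van Herk formulation, assumed here) takes the population systematic error as the standard deviation of per-patient mean errors and the random error as the root mean square of per-patient standard deviations, as in the sketch below.

```python
import numpy as np

def setup_error_components(errors_by_patient):
    """Population systematic (Sigma) and random (sigma) setup errors.

    errors_by_patient: list of 1D arrays, one per patient, holding that
    patient's setup errors (mm) along one axis over all fractions.
    The van Herk convention is an assumption, not stated in the abstract.
    """
    means = np.array([np.mean(e) for e in errors_by_patient])
    sds = np.array([np.std(e, ddof=1) for e in errors_by_patient])
    systematic = np.std(means, ddof=1)          # SD of per-patient means
    random_err = np.sqrt(np.mean(sds ** 2))     # RMS of per-patient SDs
    return systematic, random_err
```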

  8. An inkjet-printed buoyant 3-D lagrangian sensor for real-time flood monitoring

    KAUST Repository

    Farooqui, Muhammad Fahad

    2014-06-01

    A 3-D (cube-shaped) Lagrangian sensor, inkjet-printed on a paper substrate, is presented for the first time. The sensor comprises a transmitter chip with a microcontroller completely embedded in the cube, along with a 1.5 λ0 dipole that is uniquely implemented on all the faces of the cube to achieve a near-isotropic radiation pattern. The sensor has been designed to operate both in the air and in water (half immersed) for real-time flood monitoring. The sensor weighs 1.8 g and measures 13 mm × 13 mm × 13 mm, and each side of the cube corresponds to only 0.1 λ0 (at 2.4 GHz). The printed circuit board is also inkjet-printed on a paper substrate to make the sensor lightweight and buoyant. Issues related to the bending of inkjet-printed tracks and integration of the transmitter chip in the cube are discussed. The Lagrangian sensor is designed to operate in a wireless sensor network, and field tests have confirmed that it can communicate up to a distance of 100 m while in the air and up to 50 m while half immersed in water. © 1963-2012 IEEE.

  9. Real-time 3D visualization of cellular rearrangements during cardiac valve formation.

    Science.gov (United States)

    Pestel, Jenny; Ramadass, Radhan; Gauvrit, Sebastien; Helker, Christian; Herzog, Wiebke; Stainier, Didier Y R

    2016-06-15

    During cardiac valve development, the single-layered endocardial sheet at the atrioventricular canal (AVC) is remodeled into multilayered immature valve leaflets. Most of our knowledge about this process comes from examining fixed samples that do not allow a real-time appreciation of the intricacies of valve formation. Here, we exploit non-invasive in vivo imaging techniques to identify the dynamic cell behaviors that lead to the formation of the immature valve leaflets. We find that in zebrafish, the valve leaflets consist of two sets of endocardial cells at the luminal and abluminal side, which we refer to as luminal cells (LCs) and abluminal cells (ALCs), respectively. By analyzing cellular rearrangements during valve formation, we observed that the LCs and ALCs originate from the atrium and ventricle, respectively. Furthermore, we utilized Wnt/β-catenin and Notch signaling reporter lines to distinguish between the LCs and ALCs, and also found that cardiac contractility and/or blood flow is necessary for the endocardial expression of these signaling reporters. Thus, our 3D analyses of cardiac valve formation in zebrafish provide fundamental insights into the cellular rearrangements underlying this process. © 2016. Published by The Company of Biologists Ltd.

  10. A multi-frequency electrical impedance tomography system for real-time 2D and 3D imaging

    Science.gov (United States)

    Yang, Yunjie; Jia, Jiabin

    2017-08-01

    This paper presents the design and evaluation of a configurable, fast multi-frequency Electrical Impedance Tomography (mfEIT) system for real-time 2D and 3D imaging, particularly for biomedical imaging. The system integrates 32 electrode interfaces and the current frequency ranges from 10 kHz to 1 MHz. The system incorporates the following novel features. First, a fully adjustable multi-frequency current source with a current monitoring function is designed. Second, a flexible switching scheme is developed for arbitrary sensing configurations and a semi-parallel data acquisition architecture is implemented for high-frame-rate data acquisition. Furthermore, multi-frequency digital quadrature demodulation is accomplished in a high-capacity Field Programmable Gate Array. Finally, 3D imaging software, visual tomography, is developed for real-time 2D and 3D image reconstruction, data analysis, and visualization. The mfEIT system is systematically tested and evaluated in terms of signal-to-noise ratio (SNR), frame rate, and 2D and 3D multi-frequency phantom imaging. The highest SNR is 82.82 dB on a 16-electrode sensor. The frame rate is up to 546 fps in serial mode and 1014 fps in semi-parallel mode. The evaluation results indicate that the presented mfEIT system is a powerful tool for real-time 2D and 3D imaging.
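
    The multi-frequency digital quadrature demodulation performed in the FPGA amounts to multiplying each measured voltage by in-phase and quadrature references at the excitation frequency and averaging. The offline Python sketch below shows the same kind of operation, with the sampling rate and test values chosen purely for illustration.

```python
import numpy as np

def quadrature_demodulate(signal, freq_hz, fs_hz):
    """Recover amplitude and phase of `signal` at one excitation frequency
    by multiplying with quadrature references and averaging (an offline
    analogue of the FPGA demodulation described in the abstract)."""
    n = np.arange(len(signal))
    i = 2.0 * np.mean(signal * np.cos(2 * np.pi * freq_hz * n / fs_hz))
    q = -2.0 * np.mean(signal * np.sin(2 * np.pi * freq_hz * n / fs_hz))
    return np.hypot(i, q), np.arctan2(q, i)   # amplitude, phase (rad)

# example: a 100 kHz tone sampled at 10 MHz for an integer number of cycles
fs = 10e6
t = np.arange(1000) / fs
print(quadrature_demodulate(0.5 * np.cos(2 * np.pi * 100e3 * t + 0.3), 100e3, fs))
```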

  11. A real-time 3D end-to-end augmented reality system (and its representation transformations)

    Science.gov (United States)

    Tytgat, Donny; Aerts, Maarten; De Busser, Jeroen; Lievens, Sammy; Rondao Alface, Patrice; Macq, Jean-Francois

    2016-09-01

    The new generation of HMDs coming to the market is expected to enable many new applications that allow free viewpoint experiences with captured video objects. Current applications usually rely on 3D content that is manually created or captured in an offline manner. In contrast, this paper focuses on augmented reality applications that use live captured 3D objects while maintaining free viewpoint interaction. We present a system that allows live dynamic 3D objects (e.g. a person who is talking) to be captured in real-time. Real-time performance is achieved by traversing a number of representation formats and exploiting their specific benefits. For instance, depth images are maintained for fast neighborhood retrieval and occlusion determination, while implicit surfaces are used to facilitate multi-source aggregation for both geometry and texture. The result is a 3D reconstruction system that outputs multi-textured triangle meshes at real-time rates. An end-to-end system is presented that captures and reconstructs live 3D data and allows for this data to be used on a networked (AR) device. For allocating the different functional blocks onto the available physical devices, a number of alternatives are proposed considering the available computational power and bandwidth for each of the components. As we will show, the representation format can play an important role in this functional allocation and allows for a flexible system that can support a highly heterogeneous infrastructure.

  12. Synthetic biology's tall order: Reconstruction of 3D, super resolution images of single molecules in real-time

    CSIR Research Space (South Africa)

    Henriques, R

    2010-08-31

    Full Text Available ...-to-use reconstruction software coupled with image acquisition. Here, we present QuickPALM, an ImageJ plugin, enabling real-time reconstruction of 3D super-resolution images during acquisition and drift correction. We illustrate its application by reconstructing Cy5...

  13. Real-Time 3D Motion capture by monocular vision and virtual rendering

    OpenAIRE

    Gomez Jauregui , David Antonio; Horain , Patrick

    2012-01-01

    Avatars in networked 3D virtual environments allow users to interact over the Internet and to get some feeling of virtual telepresence. However, avatar control may be tedious. Motion capture systems based on 3D sensors have recently reached the consumer market, but webcams and camera-phones are more widespread and cheaper. The proposed demonstration aims at animating a user's avatar from real time 3D motion capture by monoscopic computer vision, thus allowing virtual t...

  14. Real-Time Climate Simulations in the Interactive 3D Game Universe Sandbox ²

    Science.gov (United States)

    Goldenson, N. L.

    2014-12-01

    Exploration in an open-ended computer game is an engaging way to explore climate and climate change. Everyone can explore physical models with real-time visualization in the educational simulator Universe Sandbox ² (universesandbox.com/2), which includes basic climate simulations on planets. I have implemented a time-dependent, one-dimensional meridional heat transport energy balance model to run and be adjustable in real time in the midst of a larger simulated system. Universe Sandbox ² is based on the original game - at its core a gravity simulator - with other new physically-based content for stellar evolution, and handling collisions between bodies. Existing users are mostly science enthusiasts in informal settings. We believe that this is the first climate simulation to be implemented in a professionally developed computer game with modern 3D graphical output in real time. The type of simple climate model we've adopted helps us depict the seasonal cycle and the more drastic changes that come from changing the orbit or other external forcings. Users can alter the climate as the simulation is running by altering the star(s) in the simulation, dragging to change orbits and obliquity, adjusting the climate simulation parameters directly or changing other properties like CO2 concentration that affect the model parameters in representative ways. Ongoing visuals of the expansion and contraction of sea ice and snow-cover respond to the temperature calculations, and make it accessible to explore a variety of scenarios and intuitive to understand the output. Variables like temperature can also be graphed in real time. We balance computational constraints with the ability to capture the physical phenomena we wish to visualize, giving everyone access to a simple open-ended meridional energy balance climate simulation to explore and experiment with. The software lends itself to labs at a variety of levels about climate concepts including seasons, the Greenhouse effect
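
    For reference, the class of model described above (a time-dependent, one-dimensional meridional energy-balance model with diffusive heat transport) can be written as a short explicit time-stepping routine; the coefficients below are textbook-style placeholders and are not the values used in Universe Sandbox ².

```python
import numpy as np

def ebm_step(T, S, albedo=0.3, A=203.3, B=2.09, D=0.55, C=9.8, dt_years=0.005):
    """One explicit time step of a 1D meridional energy balance model.

    T : zonal-mean temperatures (deg C) on an equally spaced sine-of-latitude grid
    S : absorbed-solar weighting per band (annual-mean insolation, W/m^2)
    A + B*T approximates outgoing longwave radiation (W/m^2); D is the
    diffusive transport coefficient; C is a heat capacity (W yr m^-2 K^-1).
    All coefficient values are illustrative placeholders; the explicit
    scheme needs a small time step for stability.
    """
    x = np.linspace(-1 + 1e-3, 1 - 1e-3, len(T))            # x = sin(latitude)
    dx = x[1] - x[0]
    # diffusive meridional transport: d/dx [ (1 - x^2) dT/dx ]
    flux = (1 - (0.5 * (x[1:] + x[:-1])) ** 2) * np.diff(T) / dx
    transport = D * np.diff(np.concatenate(([0.0], flux, [0.0]))) / dx
    dTdt = (S * (1 - albedo) - (A + B * T) + transport) / C
    return T + dt_years * dTdt
```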

  15. Multithreaded real-time 3D image processing software architecture and implementation

    Science.gov (United States)

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Towards this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions, and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. The player provides user-interactive 3D video playback. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located on the right image through the use of block matching. The differences in position between the corresponding regions in the left and right images are then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread which calls CUDA functions. Image upsampling and shifting are performed in response to user zoom and pan. The player also includes a CPU display thread, which uses OpenGL rendering (quad buffers). This thread also gathers user input for digital zoom and pan and sends it to the processing thread.
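
    The convergence-point selection described in this processing chain (keypoint disparities, histogram extrema, then a horizontal shift) can be sketched as follows; the bin count, clamping, and wrap-around shift are simplifications, not the player's actual implementation.

```python
import numpy as np

def convergence_shift(keypoint_disparities, max_shift=40):
    """Pick a horizontal shift that places the scene's mid-range disparity
    at the convergence plane, using the extrema of the disparity histogram.
    Bin count and clamp value are illustrative, not the player's settings."""
    hist, edges = np.histogram(keypoint_disparities, bins=64)
    occupied = np.nonzero(hist)[0]
    d_min = edges[occupied[0]]            # low end of the scene disparity range
    d_max = edges[occupied[-1] + 1]       # high end of the range
    shift = np.round((d_min + d_max) / 2.0)      # converge on the mid-range
    return int(np.clip(shift, -max_shift, max_shift))

def apply_shift(left, right, shift):
    """Shift each view by half the amount in opposite directions.
    np.roll wraps pixels around; a real player would crop or pad instead."""
    half = shift // 2
    return np.roll(left, -half, axis=1), np.roll(right, shift - half, axis=1)
```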

  16. A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.

    Science.gov (United States)

    Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad

    2012-01-01

    The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable in spite of the considerably large deformations occurred. There was a good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be effectively used in real-time applications.

  17. Portable high-intensity focused ultrasound system with 3D electronic steering, real-time cavitation monitoring, and 3D image reconstruction algorithms: a preclinical study in pigs

    International Nuclear Information System (INIS)

    Choi, Jin Woo; Lee, Jae Young; Hwang, Eui Jin; Hwang, In Pyeong; Woo, Sung Min; Lee, Chang Joo; Park, Eun Joo; Choi, Byung Ihn

    2014-01-01

    The aim of this study was to evaluate the safety and accuracy of a new portable ultrasonography-guided high-intensity focused ultrasound (USg-HIFU) system with a 3-dimensional (3D) electronic steering transducer, a simultaneous ablation and imaging module, real-time cavitation monitoring, and 3D image reconstruction algorithms. To address the accuracy of the transducer, hydrophones in a water chamber were used to assess the generation of sonic fields. An animal study was also performed in five pigs by ablating in vivo thighs by single-point sonication (n=10) or volume sonication (n=10) and ex vivo kidneys by single-point sonication (n=10). Histological and statistical analyses were performed. In the hydrophone study, peak voltages were detected within 1.0 mm from the targets on the y- and z-axes and within 2.0-mm intervals along the x-axis (z-axis, direction of ultrasound propagation; y- and x-axes, perpendicular to the direction of ultrasound propagation). Twenty-nine of 30 HIFU sessions successfully created ablations at the target. The in vivo porcine thigh study showed only a small discrepancy (width, 0.5-1.1 mm; length, 3.0 mm) between the planning ultrasonograms and the pathological specimens. Inordinate thermal damage was not observed in the adjacent tissues or sonic pathways in the in vivo thigh and ex vivo kidney studies. Our study suggests that this new USg-HIFU system may be a safe and accurate technique for ablating soft tissues and encapsulated organs.

  18. Portable high-intensity focused ultrasound system with 3D electronic steering, real-time cavitation monitoring, and 3D image reconstruction algorithms: a preclinical study in pigs

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jin Woo; Lee, Jae Young; Hwang, Eui Jin; Hwang, In Pyeong; Woo, Sung Min; Lee, Chang Joo; Park, Eun Joo; Choi, Byung Ihn [Dept. of Radiology and Institute of Radiation Medicine, Seoul National University Hospital, Seoul (Korea, Republic of)

    2014-10-15

    The aim of this study was to evaluate the safety and accuracy of a new portable ultrasonography-guided high-intensity focused ultrasound (USg-HIFU) system with a 3-dimensional (3D) electronic steering transducer, a simultaneous ablation and imaging module, real-time cavitation monitoring, and 3D image reconstruction algorithms. To address the accuracy of the transducer, hydrophones in a water chamber were used to assess the generation of sonic fields. An animal study was also performed in five pigs by ablating in vivo thighs by single-point sonication (n=10) or volume sonication (n=10) and ex vivo kidneys by single-point sonication (n=10). Histological and statistical analyses were performed. In the hydrophone study, peak voltages were detected within 1.0 mm from the targets on the y- and z-axes and within 2.0-mm intervals along the x-axis (z-axis, direction of ultrasound propagation; y- and x-axes, perpendicular to the direction of ultrasound propagation). Twenty-nine of 30 HIFU sessions successfully created ablations at the target. The in vivo porcine thigh study showed only a small discrepancy (width, 0.5-1.1 mm; length, 3.0 mm) between the planning ultrasonograms and the pathological specimens. Inordinate thermal damage was not observed in the adjacent tissues or sonic pathways in the in vivo thigh and ex vivo kidney studies. Our study suggests that this new USg-HIFU system may be a safe and accurate technique for ablating soft tissues and encapsulated organs.

  19. ArtifactVis2: Managing real-time archaeological data in immersive 3D environments

    KAUST Repository

    Smith, Neil; Knabb, Kyle; Defanti, Connor; Weber, Philip P.; Schulze, Jü rgen P.; Prudhomme, Andrew; Kuester, Falko; Levy, Thomas E.; Defanti, Thomas A.

    2013-01-01

    In this paper, we present a stereoscopic research and training environment for archaeologists called ArtifactVis2. This application enables the management and visualization of diverse types of cultural datasets within a collaborative virtual 3D

  20. Esophagogastric Junction pressure morphology: comparison between a station pull-through and real-time 3D-HRM representation.

    Science.gov (United States)

    Nicodème, F; Lin, Z; Pandolfino, J E; Kahrilas, P J

    2013-09-01

    Esophagogastric junction (EGJ) competence is the fundamental defense against reflux making it of great clinical significance. However, characterizing EGJ competence with conventional manometric methodologies has been confounded by its anatomic and physiological complexity. Recent technological advances in miniaturization and electronics have led to the development of a novel device that may overcome these challenges. Nine volunteer subjects were studied with a novel 3D-HRM device providing 7.5 mm axial and 45° radial pressure resolution within the EGJ. Real-time measurements were made at rest and compared to simulations of a conventional pull-through made with the same device. Moreover, 3D-HRM recordings were analyzed to differentiate contributing pressure signals within the EGJ attributable to lower esophageal sphincter (LES), diaphragm, and vasculature. 3D-HRM recordings suggested that sphincter length assessed by a pull-through method greatly exaggerated the estimate of LES length by failing to discriminate among circumferential contractile pressure and asymmetric extrinsic pressure signals attributable to diaphragmatic and vascular structures. Real-time 3D EGJ recordings found that the dominant constituents of EGJ pressure at rest were attributable to the diaphragm. 3D-HRM permits real-time recording of EGJ pressure morphology facilitating analysis of the EGJ constituents responsible for its function as a reflux barrier making it a promising tool in the study of GERD pathophysiology. The enhanced axial and radial recording resolution of the device should facilitate further studies to explore perturbations in the physiological constituents of EGJ pressure in health and disease. © 2013 John Wiley & Sons Ltd.

  1. A Bayesian approach to real-time 3D tumor localization via monoscopic x-ray imaging during treatment delivery

    International Nuclear Information System (INIS)

    Li, Ruijiang; Fahimian, Benjamin P.; Xing, Lei

    2011-01-01

    Purpose: Monoscopic x-ray imaging with on-board kV devices is an attractive approach for real-time image guidance in modern radiation therapy such as VMAT or IMRT, but it falls short in providing reliable information along the direction of the imaging x-ray. By effectively taking into consideration projection data at prior times and/or angles through a Bayesian formalism, the authors develop an algorithm for real-time and full 3D tumor localization with a single x-ray imager during treatment delivery. Methods: First, a prior probability density function is constructed using the 2D tumor locations on the projection images acquired during patient setup. Whenever an x-ray image is acquired during the treatment delivery, the corresponding 2D tumor location on the imager is used to update the likelihood function. The unresolved third dimension is obtained by maximizing the posterior probability distribution. The algorithm can also be used in a retrospective fashion when all the projection images during the treatment delivery are used for 3D localization purposes. The algorithm does not involve complex optimization of any model parameter and therefore can be used in a "plug-and-play" fashion. The authors validated the algorithm using (1) simulated 3D linear and elliptic motion and (2) 3D tumor motion trajectories of a lung and a pancreas patient reproduced by a physical phantom. Continuous kV images were acquired over a full gantry rotation with the Varian TrueBeam on-board imaging system. Three scenarios were considered: fluoroscopic setup, cone beam CT setup, and retrospective analysis. Results: For the simulation study, the RMS 3D localization error is 1.2 and 2.4 mm for the linear and elliptic motions, respectively. For the phantom experiments, the 3D localization error is < 1 mm on average and < 1.5 mm at the 95th percentile in the lung and pancreas cases for all three scenarios. The difference in 3D localization error for different scenarios is small and is not
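
    A much-simplified sketch of the Bayesian idea is given below: a 3D Gaussian prior built from setup data is conditioned on the two coordinates resolved by the imager, and the unresolved coordinate along the imaging ray is taken as the conditional mean (which is also the MAP estimate for a Gaussian). The geometry, frame conventions, and Gaussian assumptions are mine, not the authors' exact formulation.

```python
import numpy as np

def map_3d_position(mu, Sigma, u, theta_deg):
    """MAP 3D tumour position from one kV projection, under simplifying
    assumptions (illustrative, not the paper's formulation):
    - the 2D measurement `u` gives the two coordinates perpendicular to
      the imaging ray (magnification ignored);
    - the prior is a 3D Gaussian N(mu, Sigma) built from setup images;
    - the coordinate along the ray is the conditional Gaussian mean.
    """
    th = np.deg2rad(theta_deg)
    # rows: two directions resolved by the imager, then the ray direction
    R = np.array([[np.cos(th), np.sin(th), 0.0],     # in-plane, perp. to ray
                  [0.0,        0.0,        1.0],     # cranio-caudal axis
                  [-np.sin(th), np.cos(th), 0.0]])   # along the imaging ray
    mu_r, Sig_r = R @ mu, R @ Sigma @ R.T
    resolved = np.asarray(u) - mu_r[:2]
    # condition the Gaussian on the two resolved coordinates
    depth = mu_r[2] + Sig_r[2, :2] @ np.linalg.solve(Sig_r[:2, :2], resolved)
    return R.T @ np.concatenate([np.asarray(u, dtype=float), [depth]])
```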

  2. Optimal transcostal high-intensity focused ultrasound with combined real-time 3D movement tracking and correction

    International Nuclear Information System (INIS)

    Marquet, F; Aubry, J F; Pernot, M; Fink, M; Tanter, M

    2011-01-01

    Recent studies have demonstrated the feasibility of transcostal high intensity focused ultrasound (HIFU) treatment in liver. However, two factors limit thermal necrosis of the liver through the ribs: the energy deposition at focus is decreased by the respiratory movement of the liver and the energy deposition on the skin is increased by the presence of highly absorbing bone structures. Ex vivo ablations were conducted to validate the feasibility of a transcostal real-time 3D movement tracking and correction mode. Experiments were conducted through a chest phantom made of three human ribs immersed in water and were placed in front of a 300 element array working at 1 MHz. A binarized apodization law introduced recently in order to spare the rib cage during treatment has been extended here with real-time electronic steering of the beam. Thermal simulations have been conducted to determine the steering limits. In vivo 3D-movement detection was performed on pigs using an ultrasonic sequence. The maximum error on the transcostal motion detection was measured to be 0.09 ± 0.097 mm on the anterior–posterior axis. Finally, a complete sequence was developed combining real-time 3D transcostal movement correction and spiral trajectory of the HIFU beam, allowing the system to treat larger areas with optimized efficiency. Lesions as large as 1 cm in diameter have been produced at focus in excised liver, whereas no necroses could be obtained with the same emitted power without correcting the movement of the tissue sample.

  3. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    Directory of Open Access Journals (Sweden)

    Jin Qi

    Full Text Available Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area is performed on traditional two-dimensional (2D) videos and both global and local methods have been used. Since 2D videos are sensitive to changes of lighting condition, view angle, and scale, researchers have begun to explore applications of 3D information in human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually don't perform well on 3D videos and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected to the dictionaries and a set of sparse histograms of the projection coefficients are constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.

  4. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    Science.gov (United States)

    Qi, Jin; Yang, Zhiyong

    2014-01-01

    Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area is performed on traditional two-dimensional (2D) videos and both global and local methods have been used. Since 2D videos are sensitive to changes of lighting condition, view angle, and scale, researchers have begun to explore applications of 3D information in human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually don't perform well on 3D videos and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected to the dictionaries and a set of sparse histograms of the projection coefficients are constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.
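    A schematic sketch of the three-step pipeline on synthetic joint-movement vectors is shown below; the volume dimensionality, number of dictionary atoms, histogram binning, and SVM settings are placeholders rather than the published configuration.

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.svm import SVC

        n_act, n_vol, dim, n_atoms = 3, 60, 90, 16    # placeholder sizes

        def make_volumes(activity, seed, n=n_vol):
            """Synthetic stand-in for flattened space-time volumes of 3D joint movements."""
            basis = np.random.default_rng(activity).normal(size=(5, dim))
            r = np.random.default_rng(seed)
            return r.normal(size=(n, 5)) @ basis + 0.1 * r.normal(size=(n, dim))

        # Step 1: learn a dictionary of sparse codes per activity via ICA.
        dicts = [FastICA(n_components=n_atoms, random_state=0).fit(make_volumes(a, seed=a))
                 for a in range(n_act)]

        def sparse_histogram(volumes):
            """Step 2: project onto every dictionary and histogram the coefficients."""
            feats = []
            for ica in dicts:
                coeff = ica.transform(volumes)
                hist, _ = np.histogram(coeff, bins=10, range=(-3, 3), density=True)
                feats.append(hist)
            return np.concatenate(feats)

        X = np.array([sparse_histogram(make_volumes(a, seed=1000 + 10 * a + i, n=20))
                      for a in range(n_act) for i in range(10)])
        y = np.repeat(np.arange(n_act), 10)

        # Step 3: an SVM classifies the sparse histograms into activities.
        clf = SVC(kernel="rbf").fit(X, y)
        print("training accuracy:", clf.score(X, y))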

  5. Real-Time Large Scale 3d Reconstruction by Fusing Kinect and Imu Data

    Science.gov (United States)

    Huai, J.; Zhang, Y.; Yilmaz, A.

    2015-08-01

    Kinect-style RGB-D cameras have been used to build large scale dense 3D maps for indoor environments. These maps can serve many purposes such as robot navigation and augmented reality. However, generating dense 3D maps of large scale environments is still very challenging. In this paper, we present a mapping system for 3D reconstruction that fuses measurements from a Kinect and an inertial measurement unit (IMU) to estimate motion. Our major achievements include: (i) Large scale consistent 3D reconstruction is realized by volume shifting and loop closure; (ii) The coarse-to-fine iterative closest point (ICP) algorithm, the SIFT odometry, and IMU odometry are combined to robustly and precisely estimate pose. In particular, ICP runs routinely to track the Kinect motion. If ICP fails in planar areas, the SIFT odometry provides an incremental motion estimate. If both ICP and the SIFT odometry fail, e.g., upon abrupt motion or inadequate features, the incremental motion is estimated by the IMU. Additionally, the IMU also observes the roll and pitch angles, which can reduce the long-term drift of the sensor assembly. In experiments on a consumer laptop, our system estimates motion at 8 Hz on average while integrating color images to the local map and saving volumes of meshes concurrently. Moreover, it is immune to tracking failures, and has smaller drift than the state-of-the-art systems in large scale reconstruction.
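    The cascade of motion estimators described above can be captured by a simple fallback pattern; the sketch below uses stand-in estimator callables (the real system would wrap ICP, SIFT-based odometry, and IMU integration), so only the control flow is faithful.

        from typing import Callable, Optional, Sequence, Tuple
        import numpy as np

        Pose = np.ndarray   # 4x4 homogeneous transform

        def estimate_motion(estimators: Sequence[Tuple[str, Callable[[], Optional[Pose]]]]):
            """Return the incremental pose from the first estimator that succeeds:
            ICP runs routinely; SIFT odometry covers ICP failures in planar scenes;
            the IMU covers abrupt motion or feature-poor frames."""
            for name, run in estimators:
                pose = run()
                if pose is not None:
                    return name, pose
            raise RuntimeError("all motion estimators failed")

        # Stand-ins for illustration only.
        icp = lambda: None                    # pretend ICP diverged on this frame
        sift_odometry = lambda: None          # pretend too few features matched
        imu_odometry = lambda: np.eye(4)      # IMU always yields a (noisier) estimate

        source, delta = estimate_motion([("icp", icp), ("sift", sift_odometry), ("imu", imu_odometry)])
        print("frame-to-frame motion taken from:", source)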

  6. Touring Mars Online, Real-time, in 3D for Math and Science Educators and Students

    Science.gov (United States)

    Jones, Greg; Kalinowski, Kevin

    2007-01-01

    This article discusses a project that placed over 97% of Mars' topography, made available by NASA, into an interactive 3D multi-user online learning environment beginning in 2003. In 2005, curriculum materials were developed to support middle school math and science education. Research conducted at the University of North Texas…

  7. An inkjet-printed buoyant 3-D lagrangian sensor for real-time flood monitoring

    KAUST Repository

    Farooqui, Muhammad Fahad; Claudel, Christian G.; Shamim, Atif

    2014-01-01

    A 3-D (cube-shaped) Lagrangian sensor, inkjet printed on a paper substrate, is presented for the first time. The sensor comprises a transmitter chip with a microcontroller completely embedded in the cube, along with a 1.5 λ₀ dipole

  8. Embedded, real-time UAV control for improved, image-based 3D scene reconstruction

    Science.gov (United States)

    Jean Liénard; Andre Vogs; Demetrios Gatziolis; Nikolay Strigul

    2016-01-01

    Unmanned Aerial Vehicles (UAVs) are already broadly employed for 3D modeling of large objects such as trees and monuments via photogrammetry. The usual workflow includes two distinct steps: image acquisition with the UAV and computationally demanding post-flight image processing. Insufficient feature overlap across images is a common shortcoming in post-flight image...

  9. QuickPALM: 3D real-time photoactivation nanoscopy image processing in ImageJ

    CSIR Research Space (South Africa)

    Henriques, R

    2010-05-01

    Full Text Available QuickPALM in conjunction with the acquisition of control features provides a complete solution for the acquisition, reconstruction and visualization of 3D PALM or STORM images, achieving resolutions of ~40 nm in real time. This software package...

  10. Global Calibration of Multi-Cameras Based on Refractive Projection and Ray Tracing

    Directory of Open Access Journals (Sweden)

    Mingchi Feng

    2017-10-01

    Full Text Available Multi-camera systems are widely applied in three-dimensional (3D) computer vision, especially when multiple cameras are distributed on both sides of the measured object. The calibration methods of multi-camera systems are critical to the accuracy of vision measurement and the key is to find an appropriate calibration target. In this paper, a high-precision camera calibration method for multi-camera systems based on transparent glass checkerboards and ray tracing is described, and is used to calibrate multiple cameras distributed on both sides of the glass checkerboard. Firstly, the intrinsic parameters of each camera are obtained by Zhang’s calibration method. Then, multiple cameras capture several images from the front and back of the glass checkerboard with different orientations, and all images contain distinct grid corners. As the cameras on one side are not affected by the refraction of the glass checkerboard, extrinsic parameters can be directly calculated. However, the cameras on the other side are influenced by the refraction of the glass checkerboard, and the direct use of the projection model will produce a calibration error. A multi-camera calibration method using a refractive projection model and ray tracing is developed to eliminate this error. Furthermore, both synthetic and real data are employed to validate the proposed approach. The experimental results of refractive calibration show that the error of the 3D reconstruction is smaller than 0.2 mm, the relative errors of both rotation and translation are less than 0.014%, and the mean and standard deviation of the reprojection error of the four-camera system are 0.00007 and 0.4543 pixels, respectively. The proposed method is flexible, highly accurate, and simple to carry out.
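    A minimal ray-tracing building block for such a refractive projection model is the vector form of Snell's law; the sketch below bends a camera ray at the two faces of a flat glass plate, with the refractive index assumed for illustration.

        import numpy as np

        def refract(d, n, n1, n2):
            """Refract unit direction d at a surface with unit normal n (pointing
            toward the incoming ray), going from index n1 to n2 (vector Snell's law).
            Returns None on total internal reflection."""
            d, n = d / np.linalg.norm(d), n / np.linalg.norm(n)
            eta = n1 / n2
            cos_i = -np.dot(n, d)
            sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
            if sin2_t > 1.0:
                return None
            return eta * d + (eta * cos_i - np.sqrt(1.0 - sin2_t)) * n

        # A viewing ray from a camera behind the checkerboard is bent twice
        # (air -> glass -> air); indices below are assumptions.
        n_air, n_glass = 1.0, 1.52
        normal = np.array([0.0, 0.0, 1.0])
        ray = np.array([0.2, 0.0, -1.0]) / np.linalg.norm([0.2, 0.0, -1.0])

        inside = refract(ray, normal, n_air, n_glass)    # entering the glass plate
        outgoing = refract(inside, normal, n_glass, n_air)
        print("inside glass :", np.round(inside, 4))
        print("after exit   :", np.round(outgoing, 4))   # parallel to the original ray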

  11. Real-time geometric scene estimation for RGBD images using a 3D box shape grammar

    Science.gov (United States)

    Willis, Andrew R.; Brink, Kevin M.

    2016-06-01

    This article describes a novel real-time algorithm for the purpose of extracting box-like structures from RGBD image data. In contrast to conventional approaches, the proposed algorithm includes two novel attributes: (1) it divides the geometric estimation procedure into subroutines having atomic incremental computational costs, and (2) it uses a generative "Block World" perceptual model that infers both concave and convex box elements from detection of primitive box substructures. The end result is an efficient geometry processing engine suitable for use in real-time embedded systems such as those on UAVs, where it is intended to be an integral component for robotic navigation and mapping applications.

  12. 3D-SURFER 2.0: web platform for real-time search and characterization of protein surfaces.

    Science.gov (United States)

    Xiong, Yi; Esquivel-Rodriguez, Juan; Sael, Lee; Kihara, Daisuke

    2014-01-01

    The increasing number of uncharacterized protein structures necessitates the development of computational approaches for function annotation using the protein tertiary structures. Protein structure database search is the basis of any structure-based functional elucidation of proteins. 3D-SURFER is a web platform for real-time protein surface comparison of a given protein structure against the entire PDB using 3D Zernike descriptors. It can smoothly navigate the protein structure space in real-time from one query structure to another. A major new feature of Release 2.0 is the ability to compare the protein surface of a single chain, a single domain, or a single complex against databases of protein chains, domains, complexes, or a combination of all three in the latest PDB. Additionally, two types of protein structures can now be compared: all-atom-surface and backbone-atom-surface. The server can also accept a batch job for a large number of database searches. Pockets in protein surfaces can be identified by VisGrid and LIGSITE (csc) . The server is available at http://kiharalab.org/3d-surfer/.

  13. A Smartphone Interface for a Wireless EEG Headset with Real-Time 3D Reconstruction

    DEFF Research Database (Denmark)

    Stopczynski, Arkadiusz; Larsen, Jakob Eg; Stahlhut, Carsten

    2011-01-01

    We demonstrate a fully functional handheld brain scanner consisting of a low-cost 14-channel EEG headset with a wireless connection to a smartphone, enabling minimally invasive EEG monitoring in naturalistic settings. The smartphone provides a touch-based interface with real-time brain state

  14. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    International Nuclear Information System (INIS)

    Ravindran, V. R.; Sreelakshmi, C.; Vibin

    2008-01-01

    The application of Digital Radiography in the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A Real-time DR system based on an amorphous silicon Flat Panel Detector was developed for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC a few years back. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The Real-time DR system is capable of generating sufficient digital X-ray image data with object rotation for CT image reconstruction. In this paper, the indigenous development of CT imaging based on the Real-time DR system for solid rocket motors is presented. Studies are also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.
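    The CT step itself can be sketched with a standard filtered back-projection on simulated projections; the snippet below uses a software phantom from scikit-image in place of the flat-panel DR frames, so it only illustrates the reconstruction principle.

        import numpy as np
        from skimage.data import shepp_logan_phantom
        from skimage.transform import radon, iradon

        image = shepp_logan_phantom()                     # stand-in for one slice of the motor
        angles = np.linspace(0.0, 180.0, 180, endpoint=False)

        sinogram = radon(image, theta=angles)             # projections vs. object rotation
        reconstruction = iradon(sinogram, theta=angles)   # filtered back-projection

        rms = np.sqrt(np.mean((reconstruction - image) ** 2))
        print("sinogram shape:", sinogram.shape, "RMS reconstruction error:", round(float(rms), 4))
        # Repeating this for successive detector rows and stacking the slices
        # gives the 3D-CT volume described above.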

  15. Real-time viability and apoptosis kinetic detection method of 3D multicellular tumor spheroids using the Celigo Image Cytometer.

    Science.gov (United States)

    Kessel, Sarah; Cribbes, Scott; Bonasu, Surekha; Rice, William; Qiu, Jean; Chan, Leo Li-Ying

    2017-09-01

    The development of three-dimensional (3D) multicellular tumor spheroid models for cancer drug discovery research has increased in the recent years. The use of 3D tumor spheroid models may be more representative of the complex in vivo tumor microenvironments in comparison to two-dimensional (2D) assays. Currently, viability of 3D multicellular tumor spheroids has been commonly measured on standard plate-readers using metabolic reagents such as CellTiter-Glo® for end point analysis. Alternatively, high content image cytometers have been used to measure drug effects on spheroid size and viability. Previously, we have demonstrated a novel end point drug screening method for 3D multicellular tumor spheroids using the Celigo Image Cytometer. To better characterize the cancer drug effects, it is important to also measure the kinetic cytotoxic and apoptotic effects on 3D multicellular tumor spheroids. In this work, we demonstrate the use of PI and caspase 3/7 stains to measure viability and apoptosis for 3D multicellular tumor spheroids in real-time. The method was first validated by staining different types of tumor spheroids with PI and caspase 3/7 and monitoring the fluorescent intensities for 16 and 21 days. Next, PI-stained and nonstained control tumor spheroids were digested into single cell suspension to directly measure viability in a 2D assay to determine the potential toxicity of PI. Finally, extensive data analysis was performed on correlating the time-dependent PI and caspase 3/7 fluorescent intensities to the spheroid size and necrotic core formation to determine an optimal starting time point for cancer drug testing. The ability to measure real-time viability and apoptosis is highly important for developing a proper 3D model for screening tumor spheroids, which can allow researchers to determine time-dependent drug effects that usually are not captured by end point assays. This would improve the current tumor spheroid analysis method to potentially better

  16. Poster: A Software-Defined Multi-Camera Network

    OpenAIRE

    Chen, Po-Yen; Chen, Chien; Selvaraj, Parthiban; Claesen, Luc

    2016-01-01

    The widespread popularity of OpenFlow leads to a significant increase in the number of applications developed in Software-Defined Networking (SDN). In this work, we propose the architecture of a Software-Defined Multi-Camera Network consisting of small, flexible, economic, and programmable cameras which combine the functions of the processor, switch, and camera. A Software-Defined Multi-Camera Network can effectively reduce the overall network bandwidth and reduce a large amount of the Capex a...

  17. Real-Time and High-Resolution 3D Face Measurement via a Smart Active Optical Sensor.

    Science.gov (United States)

    You, Yong; Shen, Yang; Zhang, Guocai; Xing, Xiuwen

    2017-03-31

    The 3D measuring range and accuracy in traditional active optical sensing, such as Fourier transform profilometry, are influenced by the zero frequency of the captured patterns. The phase-shifting technique is commonly applied to remove the zero component. However, this phase-shifting method must capture several fringe patterns with phase difference, thereby influencing the real-time performance. This study introduces a smart active optical sensor, in which a composite pattern is utilized. The composite pattern efficiently combines several phase-shifting fringes and carrier frequencies. The method can remove zero frequency by using only one pattern. Model face reconstruction and human face measurement were employed to study the validity and feasibility of this method. Results show no distinct decrease in the precision of the novel method unlike the traditional phase-shifting method. The texture mapping technique was utilized to reconstruct a nature-appearance 3D digital face.
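    The single-pattern idea can be illustrated with a basic Fourier-transform fringe analysis: isolate one carrier lobe in the spectrum, demodulate it, and take the angle to get the wrapped phase. The synthetic fringe, carrier frequency, and filter width below are placeholders, not the composite pattern of the paper.

        import numpy as np

        H, W, f0 = 256, 256, 20                       # image size and carrier frequency (cycles)
        yy, xx = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
        phase_true = 3.0 * np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2) / (2 * 40.0 ** 2))
        fringe = 128 + 100 * np.cos(2 * np.pi * f0 * xx / W + phase_true)

        spec = np.fft.fft(fringe, axis=1)             # row-wise spectrum
        freqs = np.fft.fftfreq(W) * W
        keep = (freqs > f0 - 10) & (freqs < f0 + 10)  # keep only the +f0 lobe (drops zero order)
        analytic = np.fft.ifft(np.where(keep, spec, 0.0), axis=1)

        wrapped = np.angle(analytic * np.exp(-2j * np.pi * f0 * xx / W))   # remove the carrier
        print("recovered phase at centre:", round(wrapped[128, 128], 2),
              "true:", round(phase_true[128, 128], 2))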

  18. 3D printing and milling a real-time PCR device for infectious disease diagnostics.

    Science.gov (United States)

    Mulberry, Geoffrey; White, Kevin A; Vaidya, Manjusha; Sugaya, Kiminobu; Kim, Brian N

    2017-01-01

    Diagnosing infectious diseases using quantitative polymerase chain reaction (qPCR) offers a conclusive result in determining the infection, the strain or type of pathogen, and the level of infection. However, due to the high-cost instrumentation involved and the complexity in maintenance, it is rarely used in the field to make a quick turnaround diagnosis. In order to provide a higher level of accessibility than current qPCR devices, a set of 3D manufacturing methods is explored as a possible option to fabricate a low-cost and portable qPCR device. The key advantage of this approach is the ability to upload the digital format of the design files on the internet for wide distribution so that people at any location can simply download and feed into their 3D printers for quick manufacturing. The material and design are carefully selected to minimize the number of custom parts that depend on advanced manufacturing processes which lower accessibility. The presented 3D manufactured qPCR device is tested with 20-μL samples that contain various concentrations of lentivirus, the same type as HIV. A reverse-transcription step is a part of the device's operation, which takes place prior to the qPCR step to reverse transcribe the target RNA from the lentivirus into complementary DNA (cDNA). This is immediately followed by qPCR which quantifies the target sequence molecules in the sample during the PCR amplification process. The entire process of thermal control and time-coordinated fluorescence reading is automated by closed-loop feedback and a microcontroller. The resulting device is portable and battery-operated, with a size of 12 × 7 × 6 cm3 and mass of only 214 g. By uploading and sharing the design files online, the presented low-cost qPCR device may provide easier access to a robust diagnosis protocol for various infectious diseases, such as HIV and malaria.
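    The closed-loop thermal cycling can be pictured with a toy simulation in which a controller switches a heater to walk the block through reverse-transcription and PCR setpoints and triggers a fluorescence read each cycle; the temperatures, hold times, and thermal rates below are illustrative and are not taken from the device firmware.

        steps = [("RT", 50.0, 120), ("denature", 95.0, 15), ("anneal", 55.0, 30), ("extend", 72.0, 30)]
        protocol = steps[:1] + steps[1:] * 3            # RT once, then 3 shortened PCR cycles

        T, dt = 25.0, 1.0                               # block temperature (C), time step (s)
        heat_rate, cool_rate = 2.0, 1.5                 # assumed heating / passive cooling (C/s)
        elapsed = 0.0

        for name, setpoint, hold_s in protocol:
            while abs(T - setpoint) > 0.5:              # bang-bang drive toward the setpoint
                T += heat_rate * dt if T < setpoint else -cool_rate * dt
                elapsed += dt
            for _ in range(int(hold_s / dt)):           # hold phase with a small correction
                T += 0.1 * (setpoint - T)
                elapsed += dt
            if name == "extend":
                print(f"t={elapsed:5.0f} s  cycle done, read fluorescence at {T:.1f} C")

        print(f"total simulated run time: {elapsed:.0f} s")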

  19. The Value of 3D Printing Models of Left Atrial Appendage Using Real-Time 3D Transesophageal Echocardiographic Data in Left Atrial Appendage Occlusion: Applications toward an Era of Truly Personalized Medicine.

    Science.gov (United States)

    Liu, Peng; Liu, Rijing; Zhang, Yan; Liu, Yingfeng; Tang, Xiaoming; Cheng, Yanzhen

    The objective of this study was to assess the clinical feasibility of generating 3D printing models of left atrial appendage (LAA) using real-time 3D transesophageal echocardiogram (TEE) data for preoperative reference of LAA occlusion. Percutaneous LAA occlusion can effectively prevent patients with atrial fibrillation from stroke. However, the anatomical structure of LAA is so complicated that adequate information of its structure is essential for successful LAA occlusion. Emerging 3D printing technology has the demonstrated potential to structure more accurately than conventional imaging modalities by creating tangible patient-specific models. Typically, 3D printing data sets are acquired from CT and MRI, which may involve intravenous contrast, sedation, and ionizing radiation. It has been reported that 3D models of LAA were successfully created by the data acquired from CT. However, 3D printing of the LAA using real-time 3D TEE data has not yet been explored. Acquisition of 3D transesophageal echocardiographic data from 8 patients with atrial fibrillation was performed using the Philips EPIQ7 ultrasound system. Raw echocardiographic image data were opened in Philips QLAB and converted to 'Cartesian DICOM' format and imported into Mimics® software to create 3D models of LAA, which were printed using a rubber-like material. The printed 3D models were then used for preoperative reference and procedural simulation in LAA occlusion. We successfully printed LAAs of 8 patients. Each LAA costs approximately CNY 800-1,000 and the total process takes 16-17 h. Seven of the 8 Watchman devices predicted by preprocedural 2D TEE images were of the same sizes as those placed in the real operation. Interestingly, 3D printing models were highly reflective of the shape and size of LAAs, and all device sizes predicted by the 3D printing model were fully consistent with those placed in the real operation. Also, the 3D printed model could predict operating difficulty and the

  20. 3D printing and milling a real-time PCR device for infectious disease diagnostics

    Science.gov (United States)

    Mulberry, Geoffrey; White, Kevin A.; Vaidya, Manjusha; Sugaya, Kiminobu

    2017-01-01

    Diagnosing infectious diseases using quantitative polymerase chain reaction (qPCR) offers a conclusive result in determining the infection, the strain or type of pathogen, and the level of infection. However, due to the high-cost instrumentation involved and the complexity in maintenance, it is rarely used in the field to make a quick turnaround diagnosis. In order to provide a higher level of accessibility than current qPCR devices, a set of 3D manufacturing methods is explored as a possible option to fabricate a low-cost and portable qPCR device. The key advantage of this approach is the ability to upload the digital format of the design files on the internet for wide distribution so that people at any location can simply download and feed into their 3D printers for quick manufacturing. The material and design are carefully selected to minimize the number of custom parts that depend on advanced manufacturing processes which lower accessibility. The presented 3D manufactured qPCR device is tested with 20-μL samples that contain various concentrations of lentivirus, the same type as HIV. A reverse-transcription step is a part of the device’s operation, which takes place prior to the qPCR step to reverse transcribe the target RNA from the lentivirus into complementary DNA (cDNA). This is immediately followed by qPCR which quantifies the target sequence molecules in the sample during the PCR amplification process. The entire process of thermal control and time-coordinated fluorescence reading is automated by closed-loop feedback and a microcontroller. The resulting device is portable and battery-operated, with a size of 12 × 7 × 6 cm3 and mass of only 214 g. By uploading and sharing the design files online, the presented low-cost qPCR device may provide easier access to a robust diagnosis protocol for various infectious diseases, such as HIV and malaria. PMID:28586401

  1. Real-time 3D echo in patient selection for cardiac resynchronization therapy.

    Science.gov (United States)

    Kapetanakis, Stamatis; Bhan, Amit; Murgatroyd, Francis; Kearney, Mark T; Gall, Nicholas; Zhang, Qing; Yu, Cheuk-Man; Monaghan, Mark J

    2011-01-01

    This study investigated the use of 3-dimensional (3D) echo in quantifying left ventricular mechanical dyssynchrony (LVMD), its interhospital agreement, and potential impact on patient selection. Assessment of LVMD has been proposed as an improvement on conventional criteria in selecting patients for cardiac resynchronization therapy (CRT). Three-dimensional echo offers a reproducible assessment of left ventricular (LV) structure, function, and LVMD and may be useful in selecting patients for this intervention. We studied 187 patients at 2 institutions. Three-dimensional data from baseline and longest follow-up were quantified for volume, left ventricular ejection fraction (LVEF), and systolic dyssynchrony index (SDI). New York Heart Association (NYHA) functional class was assessed independently. Several outcomes from CRT were considered: 1) reduction in NYHA functional class; 2) 20% relative increase in LVEF; and 3) 15% reduction in LV end-systolic volume. Sixty-two cases were shared between institutions to analyze interhospital agreement. There was excellent interhospital agreement for 3D-derived LV end-diastolic and end-systolic volumes, EF, and SDI (variability: 2.9%, 1%, 7.1%, and 7.6%, respectively). Reduction in NYHA functional class was found in 78.9% of patients. Relative improvement in LVEF of 20% was found in 68% of patients, but significant reduction in LV end-systolic volume was found in only 41.5%. The QRS duration was not predictive of any of the measures of outcome (area under the curve [AUC]: 0.52, 0.58, and 0.57 for NYHA functional class, LVEF, and LV end-systolic volume), whereas SDI was highly predictive of improvement in these parameters (AUC: 0.79, 0.86, and 0.66, respectively). For patients not fulfilling traditional selection criteria (atrial fibrillation, QRS duration <120 ms, or undergoing device upgrade), SDI had similar predictive value. A cutoff of 10.4% for SDI was found to have the highest accuracy for predicting improvement following

  2. Real-time 3D vectorcardiography: an application for didactic use

    International Nuclear Information System (INIS)

    Daniel, G; Lissa, G; Redondo, D Medina; Vasquez, L; Zapata, D

    2007-01-01

    The traditional approach to teach the physiological basis of electrocardiography, based only on textbooks, turns out to be insufficient or confusing for students of biomedical sciences. The addition of laboratory practice to the curriculum enables students to approach theoretical aspects from a hands-on experience, resulting in a more efficient and deeper knowledge of the phenomena of interest. Here, we present the development of a PC-based application meant to facilitate the understanding of cardiac bioelectrical phenomena by visualizing in real time the instantaneous 3D cardiac vector. The system uses 8 standard leads from a 12-channel electrocardiograph. The application interface has pedagogic objectives, and facilitates the observation of cardiac depolarization and repolarization and its temporal relationship with the ECG, making it simpler to interpret

  3. Novel real-time 3D radiological mapping solution for ALARA maximization, D and D assessments and radiological management

    Energy Technology Data Exchange (ETDEWEB)

    Dubart, Philippe; Hautot, Felix [AREVA Group, 1 route de la Noue, Gif sur Yvette (France); Morichi, Massimo; Abou-Khalil, Roger [AREVA Group, Tour AREVA-1, place Jean Millier, Paris (France)

    2015-07-01

    Good management of dismantling and decontamination (D and D) operations and activities requires safety, time savings and thorough radiological knowledge of the contaminated environment, as well as optimization of personnel dose and minimization of waste volume. At the same time, the Fukushima accident has stretched the nuclear measurement operational approach, requiring in such emergency situations fast deployment and intervention, quick analysis and fast scenario definition. AREVA, building on the experience gained from its activities carried out at Fukushima and at D and D sites, has developed a novel multi-sensor solution as part of its D and D research, approach and method: a system providing real-time 3D photo-realistic spatial radiation distribution cartography of contaminated premises. The system may be hand-held or mounted on a mobile device (e.g., robot or drone). In this paper, we present our current development based on SLAM (Simultaneous Localization And Mapping) technology and integrated sensors and detectors allowing simultaneous topographic and radiological (dose rate and/or spectroscopy) data acquisitions. This enabling technology permits 3D gamma activity cartography in real time. (authors)

  4. Novel real-time 3D radiological mapping solution for ALARA maximization, D and D assessments and radiological management

    International Nuclear Information System (INIS)

    Dubart, Philippe; Hautot, Felix; Morichi, Massimo; Abou-Khalil, Roger

    2015-01-01

    Good management of dismantling and decontamination (D and D) operations and activities requires safety, time savings and thorough radiological knowledge of the contaminated environment, as well as optimization of personnel dose and minimization of waste volume. At the same time, the Fukushima accident has stretched the nuclear measurement operational approach, requiring in such emergency situations fast deployment and intervention, quick analysis and fast scenario definition. AREVA, building on the experience gained from its activities carried out at Fukushima and at D and D sites, has developed a novel multi-sensor solution as part of its D and D research, approach and method: a system providing real-time 3D photo-realistic spatial radiation distribution cartography of contaminated premises. The system may be hand-held or mounted on a mobile device (e.g., robot or drone). In this paper, we present our current development based on SLAM (Simultaneous Localization And Mapping) technology and integrated sensors and detectors allowing simultaneous topographic and radiological (dose rate and/or spectroscopy) data acquisitions. This enabling technology permits 3D gamma activity cartography in real time. (authors)

  5. 3D Markov Process for Traffic Flow Prediction in Real-Time

    Directory of Open Access Journals (Sweden)

    Eunjeong Ko

    2016-01-01

    Full Text Available Recently, the correct estimation of traffic flow has begun to be considered an essential component in intelligent transportation systems. In this paper, a new statistical method to predict traffic flows using time series analyses and geometric correlations is proposed. The novelty of the proposed method is two-fold: (1) a 3D heat map is designed to describe the traffic conditions between roads, which can effectively represent the correlations between spatially- and temporally-adjacent traffic states; and (2) the relationship between the adjacent roads on the spatiotemporal domain is represented by cliques in MRF and the clique parameters are obtained by example-based learning. In order to assess the validity of the proposed method, it is tested using data from expressway traffic that are provided by the Korean Expressway Corporation, and the performance of the proposed method is compared with existing approaches. The results demonstrate that the proposed method can predict traffic conditions with an accuracy of 85%, and this accuracy can be improved further.
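    The Markov ingredient on its own can be shown with a small example that estimates a transition matrix over discretized traffic states and predicts the next state; the spatial MRF coupling between neighbouring roads, which the paper learns from examples, is deliberately left out of this sketch.

        import numpy as np

        # States: 0 = free flow, 1 = slow, 2 = congested (stand-in observations).
        rng = np.random.default_rng(1)
        history = rng.choice(3, size=500, p=[0.6, 0.3, 0.1])

        counts = np.zeros((3, 3))
        for a, b in zip(history[:-1], history[1:]):
            counts[a, b] += 1
        P = counts / counts.sum(axis=1, keepdims=True)     # row-stochastic transition matrix

        current = int(history[-1])
        next_dist = P[current]
        print("P(next state | current = %d) =" % current, np.round(next_dist, 3))
        print("most likely next state:", int(np.argmax(next_dist)))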

  6. A nonlinear 3D real-time model for simulation of BWR nuclear power plants

    International Nuclear Information System (INIS)

    Ercan, Y.

    1982-02-01

    A nonlinear transient model for BWR nuclear power plants which consists of a 3D-core (subdivided into a number of superboxes, and with parallel flow and subcooled boiling), a top plenum, steam removal and feed water systems and main coolant recirculation pumps is given. The model describes the local core and global plant transient situation as dependent on both the inherent core dynamics and external control actions, i.e., disturbances such as motions of control rod banks, changes of mass flow rates of coolant, feed water and steam outlet. The case of a pressure-controlled reactor operation is also considered. The model which forms the basis for the digital code GARLIC-B (Er et al. 82) is aimed to be used on an on-site process computer in parallel to the actual reactor process (or even in predictive mode). Thus, special measures had to be taken into account in order to increase the computational speed and reduce the necessary computer storage. This could be achieved by:
    - separating the neutron and power kinetics from the xenon-iodine dynamics,
    - treating the neutron kinetics and most of the thermodynamics and hydrodynamics in a pseudostationary way,
    - developing a special coupling coefficient concept to describe the neutron diffusion, calculating the coupling coefficients from a basic neutron kinetics code,
    - combining coarse mesh elements into superboxes, taking advantage of the symmetry properties of the core, and
    - applying a sparse matrix technique for solving the resulting algebraic power equation system. (orig.)

  7. High-precision real-time 3D shape measurement based on a quad-camera system

    Science.gov (United States)

    Tao, Tianyang; Chen, Qian; Feng, Shijie; Hu, Yan; Zhang, Minliang; Zuo, Chao

    2018-01-01

    Phase-shifting profilometry (PSP) based 3D shape measurement is well established in various applications due to its high accuracy, simple implementation, and robustness to environmental illumination and surface texture. In PSP, higher depth resolution generally requires higher fringe density of projected patterns which, in turn, lead to severe phase ambiguities that must be solved with additional information from phase coding and/or geometric constraints. However, in order to guarantee the reliability of phase unwrapping, available techniques are usually accompanied by increased number of patterns, reduced amplitude of fringe, and complicated post-processing algorithms. In this work, we demonstrate that by using a quad-camera multi-view fringe projection system and carefully arranging the relative spatial positions between the cameras and the projector, it becomes possible to completely eliminate the phase ambiguities in conventional three-step PSP patterns with high-fringe-density without projecting any additional patterns or embedding any auxiliary signals. Benefiting from the position-optimized quad-camera system, stereo phase unwrapping can be efficiently and reliably performed by flexible phase consistency checks. Besides, redundant information of multiple phase consistency checks is fully used through a weighted phase difference scheme to further enhance the reliability of phase unwrapping. This paper explains the 3D measurement principle and the basic design of quad-camera system, and finally demonstrates that in a large measurement volume of 200 mm × 200 mm × 400 mm, the resultant dynamic 3D sensing system can realize real-time 3D reconstruction at 60 frames per second with a depth precision of 50 μm.
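    The starting point of the method, wrapped-phase retrieval from three phase-shifted fringes, can be sketched as below with phase shifts of -2π/3, 0 and +2π/3 on a synthetic scene; resolving the resulting 2π ambiguities is precisely what the quad-camera phase-consistency check is designed to do, and that part is not reproduced here.

        import numpy as np

        H, W, f = 128, 128, 24                         # synthetic scene and fringe frequency
        xx = np.arange(W) / W
        phase = 2 * np.pi * f * xx + 1.2 * np.sin(2 * np.pi * np.arange(H)[:, None] / H)

        shifts = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)
        I1, I2, I3 = (100 + 80 * np.cos(phase + d) for d in shifts)

        # Standard three-step formula for the wrapped phase.
        wrapped = np.arctan2(np.sqrt(3) * (I1 - I3), 2 * I2 - I1 - I3)

        print("wrapped phase range:", round(wrapped.min(), 2), "to", round(wrapped.max(), 2))
        print("true phase range   :", round(phase.min(), 2), "to", round(phase.max(), 2))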

  8. Real-time 3-D SAFT-UT system evaluation and validation

    International Nuclear Information System (INIS)

    Doctor, S.R.; Schuster, G.J.; Reid, L.D.; Hall, T.E.

    1996-09-01

    SAFT-UT technology is shown to provide significant enhancements to the inspection of materials used in US nuclear power plants. This report provides guidelines for the implementation of SAFT-UT technology and shows the results from its application. An overview of the development of SAFT-UT is provided so that the reader may become familiar with the technology. Then the basic fundamentals are presented with an extensive list of references. A comprehensive operating procedure, which is used in conjunction with the SAFT-UT field system developed by Pacific Northwest Laboratory (PNL), provides the recipe for both SAFT data acquisition and analysis. The specification for the hardware implementation is provided for the SAFT-UT system along with a description of the subsequent developments and improvements. One development of technical interest is the SAFT real time processor. Performance of the real-time processor is impressive and comparison is made of this dedicated parallel processor to a conventional computer and to the newer high-speed computer architectures designed for image processing. Descriptions of other improvements, including a robotic scanner, are provided. Laboratory parametric and application studies, performed by PNL and not previously reported, are discussed followed by a section on field application work in which SAFT was used during inservice inspections of operating reactors

  9. Real-time 3-D SAFT-UT system evaluation and validation

    Energy Technology Data Exchange (ETDEWEB)

    Doctor, S.R.; Schuster, G.J.; Reid, L.D.; Hall, T.E. [Pacific Northwest National Lab., Richland, WA (United States)

    1996-09-01

    SAFT-UT technology is shown to provide significant enhancements to the inspection of materials used in US nuclear power plants. This report provides guidelines for the implementation of SAFT-UT technology and shows the results from its application. An overview of the development of SAFT-UT is provided so that the reader may become familiar with the technology. Then the basic fundamentals are presented with an extensive list of references. A comprehensive operating procedure, which is used in conjunction with the SAFT-UT field system developed by Pacific Northwest Laboratory (PNL), provides the recipe for both SAFT data acquisition and analysis. The specification for the hardware implementation is provided for the SAFT-UT system along with a description of the subsequent developments and improvements. One development of technical interest is the SAFT real time processor. Performance of the real-time processor is impressive and comparison is made of this dedicated parallel processor to a conventional computer and to the newer high-speed computer architectures designed for image processing. Descriptions of other improvements, including a robotic scanner, are provided. Laboratory parametric and application studies, performed by PNL and not previously reported, are discussed followed by a section on field application work in which SAFT was used during inservice inspections of operating reactors.

  10. 3D tumor localization through real-time volumetric x-ray imaging for lung cancer radiotherapy.

    Science.gov (United States)

    Li, Ruijiang; Lewis, John H; Jia, Xun; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Song, William Y; Jiang, Steve B

    2011-05-01

    To evaluate an algorithm for real-time 3D tumor localization from a single x-ray projection image for lung cancer radiotherapy. Recently, we have developed an algorithm for reconstructing volumetric images and extracting 3D tumor motion information from a single x-ray projection [Li et al., Med. Phys. 37, 2822-2826 (2010)]. We have demonstrated its feasibility using a digital respiratory phantom with regular breathing patterns. In this work, we present a detailed description and a comprehensive evaluation of the improved algorithm. The algorithm was improved by incorporating respiratory motion prediction. The accuracy and efficiency of using this algorithm for 3D tumor localization were then evaluated on (1) a digital respiratory phantom, (2) a physical respiratory phantom, and (3) five lung cancer patients. These evaluation cases include both regular and irregular breathing patterns that are different from the training dataset. For the digital respiratory phantom with regular and irregular breathing, the average 3D tumor localization error is less than 1 mm which does not seem to be affected by amplitude change, period change, or baseline shift. On an NVIDIA Tesla C1060 graphic processing unit (GPU) card, the average computation time for 3D tumor localization from each projection ranges between 0.19 and 0.26 s, for both regular and irregular breathing, which is about a 10% improvement over previously reported results. For the physical respiratory phantom, an average tumor localization error below 1 mm was achieved with an average computation time of 0.13 and 0.16 s on the same graphic processing unit (GPU) card, for regular and irregular breathing, respectively. For the five lung cancer patients, the average tumor localization error is below 2 mm in both the axial and tangential directions. The average computation time on the same GPU card ranges between 0.26 and 0.34 s. Through a comprehensive evaluation of our algorithm, we have established its accuracy in 3D

  11. Pulsed cavitational ultrasound for non-invasive chordal cutting guided by real-time 3D echocardiography.

    Science.gov (United States)

    Villemain, Olivier; Kwiecinski, Wojciech; Bel, Alain; Robin, Justine; Bruneval, Patrick; Arnal, Bastien; Tanter, Mickael; Pernot, Mathieu; Messas, Emmanuel

    2016-10-01

    Basal chordae surgical section has been shown to be effective in reducing ischaemic mitral regurgitation (IMR). Achieving this section by non-invasive means can considerably decrease the morbidity of this intervention on already infarcted myocardium. We investigated in vitro and in vivo the feasibility and safety of pulsed cavitational focused ultrasound (histotripsy) for non-invasive chordal cutting guided by real-time 3D echocardiography. Experiments were performed on 12 sheep hearts, 5 in vitro on explanted sheep hearts and 7 in vivo on beating sheep hearts. In vitro, the mitral valve (MV) apparatus including basal and marginal chordae was removed and fixed on a holder in a water tank. High-intensity ultrasound pulses were emitted from the therapeutic device (1-MHz focused transducer, pulses of 8 µs duration, peak negative pressure of 17 MPa, repetition frequency of 100 Hz), placed at a distance of 64 mm under 3D echocardiography guidance. In vivo, after sternotomy, the same therapeutic device was applied on the beating heart. We analysed MV coaptation and chordae by real-time 3D echocardiography before and after basal chordal cutting. After sacrifice, the MV apparatus were harvested for anatomical and histological post-mortem explorations to confirm the section of the chordae. In vitro, all chordae were completely cut after a mean procedure duration of 5.5 ± 2.5 min. The procedure duration was found to increase linearly with the chordae diameter. In vivo, the central basal chordae of the anterior leaflet were completely cut. The mean procedure duration was 20 ± 9 min (min = 14, max = 26). The sectioned chordae were visible on echocardiography, and MV coaptation remained normal with no significant mitral regurgitation. Anatomical and histological post-mortem explorations of the hearts confirmed the section of the chordae. Histotripsy guided by 3D echo successfully cut MV chordae in vitro and in vivo in the beating heart. We hope that this technique will

  12. SU-C-201-04: Noise and Temporal Resolution in a Near Real-Time 3D Dosimeter

    Energy Technology Data Exchange (ETDEWEB)

    Rilling, M [Department of physics, engineering physics and optics, Universite Laval, Quebec City, QC (Canada); Centre de recherche sur le cancer, Universite Laval, Quebec City, QC (Canada); Radiation oncology department, CHU de Quebec, Quebec City, QC (Canada); Center for optics, photonics and lasers, Universite Laval, Quebec City, Quebec (Canada); Goulet, M [Radiation oncology department, CHU de Quebec, Quebec City, QC (Canada); Beaulieu, L; Archambault, L [Department of physics, engineering physics and optics, Universite Laval, Quebec City, QC (Canada); Centre de recherche sur le cancer, Universite Laval, Quebec City, QC (Canada); Radiation oncology department, CHU de Quebec, Quebec City, QC (Canada); Thibault, S [Center for optics, photonics and lasers, Universite Laval, Quebec City, Quebec (Canada)

    2016-06-15

    Purpose: To characterize the performance of a real-time three-dimensional scintillation dosimeter in terms of signal-to-noise ratio (SNR) and temporal resolution of 3D dose measurements. This study quantifies its efficiency in measuring low dose levels characteristic of EBRT dynamic treatments, and in reproducing field profiles for varying multileaf collimator (MLC) speeds. Methods: The dosimeter prototype uses a plenoptic camera to acquire continuous images of the light field emitted by a 10×10×10 cm³ plastic scintillator. Using EPID acquisitions, ray tracing-based iterative tomographic algorithms allow millimeter-sized reconstruction of relative 3D dose distributions. Measurements were taken at 6 MV, 400 MU/min with the scintillator centered at the isocenter, first receiving doses from 1.4 to 30.6 cGy. Dynamic measurements were then performed by closing half of the MLCs at speeds of 0.67 to 2.5 cm/s, at 0° and 90° collimator angles. A reference static half-field was obtained for measured profile comparison. Results: The SNR steadily increases as a function of dose and reaches a clinically adequate plateau of 80 at 10 cGy. Below this, the decrease in light collected and increase in pixel noise diminishes the SNR; nonetheless, the EPID acquisitions and the voxel correlation employed in the reconstruction algorithms result in suitable SNR values (>75) even at low doses. For dynamic measurements at varying MLC speeds, central relative dose profiles are characterized by gradients at %D₅₀ of 8.48 to 22.7 %/mm. These values converge towards the 32.8 %/mm-gradient measured for the static reference field profile, but are limited by the dosimeter’s current acquisition rate of 1 Hz. Conclusion: This study emphasizes the efficiency of the 3D dose distribution reconstructions, while identifying limits of the current prototype’s temporal resolution in terms of dynamic EBRT parameters. This work paves the way for providing an optimized, second

  13. SU-C-201-04: Noise and Temporal Resolution in a Near Real-Time 3D Dosimeter

    International Nuclear Information System (INIS)

    Rilling, M; Goulet, M; Beaulieu, L; Archambault, L; Thibault, S

    2016-01-01

    Purpose: To characterize the performance of a real-time three-dimensional scintillation dosimeter in terms of signal-to-noise ratio (SNR) and temporal resolution of 3D dose measurements. This study quantifies its efficiency in measuring low dose levels characteristic of EBRT dynamic treatments, and in reproducing field profiles for varying multileaf collimator (MLC) speeds. Methods: The dosimeter prototype uses a plenoptic camera to acquire continuous images of the light field emitted by a 10×10×10 cm³ plastic scintillator. Using EPID acquisitions, ray tracing-based iterative tomographic algorithms allow millimeter-sized reconstruction of relative 3D dose distributions. Measurements were taken at 6 MV, 400 MU/min with the scintillator centered at the isocenter, first receiving doses from 1.4 to 30.6 cGy. Dynamic measurements were then performed by closing half of the MLCs at speeds of 0.67 to 2.5 cm/s, at 0° and 90° collimator angles. A reference static half-field was obtained for measured profile comparison. Results: The SNR steadily increases as a function of dose and reaches a clinically adequate plateau of 80 at 10 cGy. Below this, the decrease in light collected and increase in pixel noise diminishes the SNR; nonetheless, the EPID acquisitions and the voxel correlation employed in the reconstruction algorithms result in suitable SNR values (>75) even at low doses. For dynamic measurements at varying MLC speeds, central relative dose profiles are characterized by gradients at %D₅₀ of 8.48 to 22.7 %/mm. These values converge towards the 32.8 %/mm-gradient measured for the static reference field profile, but are limited by the dosimeter’s current acquisition rate of 1 Hz. Conclusion: This study emphasizes the efficiency of the 3D dose distribution reconstructions, while identifying limits of the current prototype’s temporal resolution in terms of dynamic EBRT parameters. This work paves the way for providing an optimized, second-generational real-time 3D

  14. Ice crystallization in porous building materials: assessing damage using real-time 3D monitoring

    Science.gov (United States)

    Deprez, Maxim; De Kock, Tim; De Schutter, Geert; Cnudde, Veerle

    2017-04-01

    Frost action is one of the main causes of deterioration of porous building materials in regions at middle to high latitudes. Damage will occur when the internal stresses due to ice formation become larger than the strength of the material. Hence, the sensitivity of the material to frost damage is partly defined by the structure of the solid body. On the other hand, the size, shape and interconnection of pores manage the water distribution in the building material and, therefore, the characteristics of the pore space control the potential to form ice crystals (Ruedrich et al., 2011). In order to assess the damage to building materials by ice crystallization, a lot of effort was put into identifying the mechanisms behind the stress build-up. First of all, the volumetric expansion of 9% (Hirschwald, 1908) during the transition of water to ice should be mentioned. Under natural circumstances, however, water saturation degrees within natural rocks or concrete cannot reach a damaging value. Therefore, linear growth pressure (Scherer, 1999), as well as several mechanisms triggered by water redistribution during freezing (Powers and Helmuth, 1953; Everett, 1961), are more likely responsible for damage due to freezing. Nevertheless, these theories are based on indirect observations and models and, thus, direct evidence that reveals the exact damage mechanism under certain conditions is still lacking. To obtain this proof, in-situ information needs to be acquired while a freezing process is performed. X-ray computed tomography has proven to be of great value in material research. Recent advances at the Ghent University Centre for Tomography (UGCT) have already allowed dynamic 3D imaging of crack growth in natural rock during freeze-thaw cycles (De Kock et al., 2015). This imaging technique consequently holds great potential for evaluating the different stress build-up mechanisms. It is required to cover a range of materials with different petrophysical properties to achieve

  15. Helicopter Flight Test of a Compact, Real-Time 3-D Flash Lidar for Imaging Hazardous Terrain During Planetary Landing

    Science.gov (United States)

    Roback, VIncent E.; Amzajerdian, Farzin; Brewster, Paul F.; Barnes, Bruce W.; Kempton, Kevin S.; Reisse, Robert A.; Bulyshev, Alexander E.

    2013-01-01

    A second generation, compact, real-time, air-cooled 3-D imaging Flash Lidar sensor system, developed from a number of cutting-edge components from industry and NASA, is lab characterized and helicopter flight tested under the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project. The ALHAT project is seeking to develop a guidance, navigation, and control (GN&C) and sensing system based on lidar technology capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The Flash Lidar incorporates a 3-D imaging video camera based on Indium-Gallium-Arsenide Avalanche Photo Diode and novel micro-electronic technology for a 128 x 128 pixel array operating at a video rate of 20 Hz, a high pulse-energy 1.06 µm Neodymium-doped: Yttrium Aluminum Garnet (Nd:YAG) laser, a remote laser safety termination system, high performance transmitter and receiver optics with one and five degrees field-of-view (FOV), enhanced onboard thermal control, as well as a compact and self-contained suite of support electronics housed in a single box and built around a PC-104 architecture to enable autonomous operations. The Flash Lidar was developed and then characterized at two NASA-Langley Research Center (LaRC) outdoor laser test range facilities both statically and dynamically, integrated with other ALHAT GN&C subsystems from partner organizations, and installed onto a Bell UH-1H Iroquois "Huey" helicopter at LaRC. The integrated system was flight tested at the NASA-Kennedy Space Center (KSC) on simulated lunar approach to a custom hazard field consisting of rocks, craters, hazardous slopes, and safe-sites near the Shuttle Landing Facility runway starting at slant ranges of 750 m. In order to evaluate different methods of achieving hazard detection, the lidar, in conjunction with the ALHAT hazard detection and GN&C system, operates in both a narrow 1deg FOV raster

  16. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    Science.gov (United States)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

    The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical in the performance of the overall concrete structure. For this research a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron resolution surface profiling. Optimizations in the control and sensory system allow for data points to be collected at up to an approximate 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is through a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time
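    A toy version of the downhill-simplex template fit is sketched below, matching a circular template to noisy 2D profile points with Nelder-Mead; the template, data, and tolerance value are invented for illustration, whereas the real system applies this on large 3D point clouds.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        theta = rng.uniform(0, 2 * np.pi, 400)
        pts = np.column_stack([1.2 + 2.5 * np.cos(theta),      # points on a circular feature
                               -0.4 + 2.5 * np.sin(theta)])
        pts += 0.02 * rng.normal(size=pts.shape)               # measurement noise

        def template_misfit(params):
            """Mean squared radial residual between the points and a circle template."""
            cx, cy, r = params
            return np.mean((np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r) ** 2)

        res = minimize(template_misfit, x0=[0.0, 0.0, 1.0], method="Nelder-Mead")
        cx, cy, r = res.x
        print("fitted template (cx, cy, r):", np.round(res.x, 3))
        print("within radius tolerance:", abs(r - 2.5) < 0.05)  # hypothetical tolerance check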

  17. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    Science.gov (United States)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

    To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. The authors have
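    The SR step can be mimicked on synthetic data with an off-the-shelf L1-regularized regression: the flattened target cloud is approximated as a sparse combination of flattened training clouds. Cloud sizes and the regularization weight below are placeholders, and the MSR extension with a Laplacian error term is not reproduced.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(7)
        n_train, n_points = 40, 500
        training = rng.normal(size=(n_train, n_points * 3))     # flattened training clouds

        w_true = np.zeros(n_train)
        w_true[[3, 11, 27]] = [0.5, 0.3, 0.2]                   # target built from three clouds
        target = w_true @ training + 0.01 * rng.normal(size=n_points * 3)

        sr = Lasso(alpha=0.01, max_iter=10000).fit(training.T, target)
        reconstruction = training.T @ sr.coef_ + sr.intercept_

        rmse = np.sqrt(np.mean((reconstruction - target) ** 2))
        print("selected training clouds:", np.flatnonzero(np.abs(sr.coef_) > 1e-3))
        print("reconstruction RMSE:", round(float(rmse), 4))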

  18. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    International Nuclear Information System (INIS)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) is reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced

  19. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 (United States); Cheung, Yam [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas, 75390 and Department of Radiation Oncology, University of Maryland, College Park, Maryland 20742 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)

    2016-05-15

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) is reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared-error, by comparing its reconstruction results against that from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced

  20. Computational hologram synthesis and representation on spatial light modulators for real-time 3D holographic imaging

    International Nuclear Information System (INIS)

    Reichelt, Stephan; Leister, Norbert

    2013-01-01

    In dynamic computer-generated holography that utilizes spatial light modulators, both hologram synthesis and hologram representation are essential in terms of fast computation and high reconstruction quality. For hologram synthesis, i.e. the computation step, Fresnel-transform-based or point-source-based ray-tracing methods can be applied. In the encoding step, the complex wave-field has to be optimally represented by the SLM with its given modulation capability. Proper hologram reconstruction implies simultaneous and independent amplitude and phase modulation of the input wave-field by the SLM. In this paper, we discuss full complex hologram representation methods on SLMs by considering the effect of inherent SLM parameters, such as modulation type and bit depth, on reconstruction performance measures such as diffraction efficiency and SNR. We review the three implementation schemes of Burckhardt amplitude-only representation, phase-only macro-pixel representation, and two-phase interference representation. Besides the optical performance, we address their hardware complexity and required computational load. Finally, we experimentally demonstrate holographic reconstructions of different representation schemes as obtained by functional prototypes utilizing SeeReal's viewing-window holographic display technology. The proposed hardware implementations enable fast encoding of complex-valued hologram data and thus will pave the way for commercial real-time holographic 3D imaging in the near future.
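
    For readers unfamiliar with the synthesis step, the sketch below implements a generic single-FFT Fresnel transform that propagates a toy object field to the hologram plane; the wavelength, pixel pitch, and propagation distance are arbitrary example values, and this is a textbook formulation rather than SeeReal's actual pipeline.

```python
# Hedged sketch of the synthesis step only: a single-FFT Fresnel transform propagating
# a toy object field to the hologram plane. Parameters are illustrative; overall scale
# constants are not normalized.
import numpy as np

def fresnel_hologram(field, wavelength, pitch, distance):
    """Propagate a complex object field (n x n, sample spacing 'pitch') by 'distance'."""
    n = field.shape[0]
    k = 2.0 * np.pi / wavelength
    x_in = (np.arange(n) - n // 2) * pitch
    xi, yi = np.meshgrid(x_in, x_in)
    # Input-plane quadratic phase, centred FFT, then output-plane quadratic phase.
    pre_chirp = np.exp(1j * k * (xi ** 2 + yi ** 2) / (2.0 * distance))
    spectrum = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field * pre_chirp)))
    out_pitch = wavelength * distance / (n * pitch)
    x_out = (np.arange(n) - n // 2) * out_pitch
    xo, yo = np.meshgrid(x_out, x_out)
    post_chirp = np.exp(1j * k * (xo ** 2 + yo ** 2) / (2.0 * distance))
    return post_chirp * spectrum * np.exp(1j * k * distance) / (1j * wavelength * distance)

# Toy object: a bright square patch on a dark background.
obj = np.zeros((512, 512), dtype=complex)
obj[240:272, 240:272] = 1.0
holo = fresnel_hologram(obj, wavelength=532e-9, pitch=8e-6, distance=0.2)

# The complex values in 'holo' would then be encoded on the SLM according to the chosen
# representation scheme (amplitude-only, phase-only macro-pixel, or two-phase).
print(holo.shape, float(np.abs(holo).max()))
```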

  1. A Real-Time Magnetoencephalography Brain-Computer Interface Using Interactive 3D Visualization and the Hadoop Ecosystem

    Directory of Open Access Journals (Sweden)

    Wilbert A. McClay

    2015-09-01

    Ecumenically, the fastest growing segment of Big Data is human biology-related data and the annual data creation is on the order of zettabytes. The implications are global across industries, of which the treatment of brain-related illnesses and trauma could see the most significant and immediate effects. The next generation of health care IT and sensory devices is acquiring and storing massive amounts of patient-related data. An innovative Brain-Computer Interface (BCI) for interactive 3D visualization is presented utilizing the Hadoop Ecosystem for data analysis and storage. The BCI is an implementation of Bayesian factor analysis algorithms that can distinguish distinct thought actions using magnetoencephalographic (MEG) brain signals. We have collected data on five subjects, yielding 90% positive performance in MEG mid- and post-movement activity. We describe a driver that substitutes the actions of the BCI as mouse button presses for real-time use in visual simulations. This process has been added into a flight visualization demonstration. By thinking left or right, the user experiences the aircraft turning in the chosen direction. The driver components of the BCI can be compiled into any software and substitute a user’s intent for specific keyboard strikes or mouse button presses. The BCI’s data analytics of a subject’s MEG brainwaves and flight visualization performance are stored and analyzed using the Hadoop Ecosystem as a quick-retrieval data warehouse.

  2. Real-time registration of 3D to 2D ultrasound images for image-guided prostate biopsy.

    Science.gov (United States)

    Gillies, Derek J; Gardi, Lori; De Silva, Tharindu; Zhao, Shuang-Ren; Fenster, Aaron

    2017-09-01

    During image-guided prostate biopsy, needles are targeted at tissues that are suspicious of cancer to obtain specimens for histological examination. Unfortunately, patient motion causes targeting errors when using an MR-transrectal ultrasound (TRUS) fusion approach to augment the conventional biopsy procedure. This study aims to develop an automatic motion correction algorithm approaching the frame rate of an ultrasound system to be used in fusion-based prostate biopsy systems. Two modes of operation have been investigated for the clinical implementation of the algorithm: motion compensation using a single user-initiated correction performed prior to biopsy, and real-time continuous motion compensation performed automatically as a background process. Retrospective 2D and 3D TRUS patient images acquired prior to biopsy gun firing were registered using an intensity-based algorithm utilizing normalized cross-correlation and Powell's method for optimization. 2D and 3D images were downsampled and cropped to estimate the optimal amount of image information that would perform registrations quickly and accurately. The optimal search order during optimization was also analyzed to avoid local optima in the search space. Error in the algorithm was computed using target registration errors (TREs) from manually identified homologous fiducials in a clinical patient dataset. The algorithm was evaluated for real-time performance using the two different modes of clinical implementation by way of user-initiated and continuous motion compensation methods on a tissue-mimicking prostate phantom. After implementation in a TRUS-guided system with an image downsampling factor of 4, the proposed approach resulted in a mean ± std TRE and computation time of 1.6 ± 0.6 mm and 57 ± 20 ms, respectively. The user-initiated mode performed registrations for in-plane, out-of-plane, and roll motions with computation times of 108 ± 38 ms, 60 ± 23 ms, and 89 ± 27 ms, respectively, and corresponding
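
    A toy illustration of the intensity-based idea (not the clinical implementation): optimize an in-plane translation that maximizes normalized cross-correlation between a fixed and a moving image using Powell's method. The synthetic images, the translation-only motion model, and the parameter values are assumptions for demonstration.

```python
# Toy illustration: recover an in-plane shift by maximising normalized cross-correlation
# (NCC) with Powell's method. Images, motion model and settings are stand-ins.
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift
from scipy.optimize import minimize

def ncc(a, b):
    """Normalized cross-correlation of two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def cost(params, fixed, moving):
    """Negative NCC after translating the moving image by (dy, dx) pixels."""
    moved = nd_shift(moving, params, order=1, mode="nearest")
    return -ncc(fixed, moved)

# Synthetic 'fixed' image and a translated copy standing in for patient motion.
rng = np.random.default_rng(1)
fixed = gaussian_filter(rng.normal(size=(128, 128)), sigma=4.0)
moving = nd_shift(fixed, (3.5, -2.0), order=1, mode="nearest")

# In the paper, images are downsampled and cropped first; that step is omitted here.
result = minimize(cost, x0=[0.0, 0.0], args=(fixed, moving), method="Powell")
print("recovered shift (dy, dx):", np.round(result.x, 2))   # expect about (-3.5, 2.0)
```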

  3. Oblique Multi-Camera Systems - Orientation and Dense Matching Issues

    Science.gov (United States)

    Rupnik, E.; Nex, F.; Remondino, F.

    2014-03-01

    The use of oblique imagery has become a standard for many civil and mapping applications, thanks to the development of airborne digital multi-camera systems, as proposed by many companies (Blomoblique, IGI, Leica, Midas, Pictometry, Vexcel/Microsoft, VisionMap, etc.). The indisputable virtue of oblique photography lies in its simplicity of interpretation and understanding for inexperienced users, allowing oblique images to be used in very different applications, such as building detection and reconstruction, building structural damage classification, road land updating, administration services, etc. The paper gives an overview of current commercial oblique systems and presents a workflow for the automated orientation and dense matching of large image blocks. Perspectives, potentialities, pitfalls and suggestions for achieving satisfactory results are given. Tests performed on two datasets acquired with two multi-camera systems over urban areas are also reported.

  4. The multi-camera optical surveillance system (MOS)

    International Nuclear Information System (INIS)

    Otto, P.; Wagner, H.; Richter, B.; Gaertner, K.J.; Laszlo, G.; Neumann, G.

    1991-01-01

    The transition from film camera to video surveillance systems, in particular the implementation of high capacity multi-camera video systems, results in a large increase in the amount of recorded scenes. Consequently, there is a substantial increase in the manpower requirements for review. Moreover, modern microprocessor controlled equipment facilitates the collection of additional data associated with each scene. Both the scene and the annotated information have to be evaluated by the inspector. The design of video surveillance systems for safeguards necessarily has to account for both appropriate recording and reviewing techniques. An aspect of principal importance is that the video information is stored on tape. Under the German Support Programme to the Agency a technical concept has been developed which aims at optimizing the capabilities of a multi-camera optical surveillance (MOS) system including the reviewing technique. This concept is presented in the following paper including a discussion of reviewing and reliability

  5. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    OpenAIRE

    Orts-Escolano, Sergio; Garcia-Rodriguez, Jose; Morell, Vicente; Cazorla, Miguel; Azorin-Lopez, Jorge; García-Chamizo, Juan Manuel

    2014-01-01

    In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphic processor units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, analysis and monitoring of the movement. These features allow the construction of a robust representation of the environment and interpret the behavior of mob...

  6. WE-AB-BRB-00: Session in Memory of Robert J. Shalek: High Resolution Dosimetry from 2D to 3D to Real-Time 3D

    International Nuclear Information System (INIS)

    2016-01-01

    Despite widespread IMRT treatments at modern radiation therapy clinics, precise dosimetric commissioning of an IMRT system remains a challenge. In the most recent report from the Radiological Physics Center (RPC), nearly 20% of institutions failed an end-to-end test with an anthropomorphic head and neck phantom, a test that has rather lenient dose difference and distance-to-agreement criteria of 7% and 4 mm. The RPC report provides strong evidence that IMRT implementation is prone to error and that improved quality assurance tools are required. At the heart of radiation therapy dosimetry is the multidimensional dosimeter. However, due to the limited availability of water-equivalent dosimetry materials, research and development in this important field is challenging. In this session, we will review a few dosimeter developments that are either in the laboratory phase or in the pre-commercialization phase. 1) Radiochromic plastic. Novel formulations exhibit light absorbing optical contrast with very little scatter, enabling faster, broad beam optical CT design. 2) Storage phosphor. After irradiation, the dosimetry panels will be read out using a dedicated 2D scanning apparatus in a non-invasive, electro-optic manner and immediately restored for further use. 3) Liquid scintillator. Scintillators convert the energy from x-rays and proton beams into visible light, which can be recorded with a scientific camera (CCD or CMOS) from multiple angles. The 3D shape of the dose distribution can then be reconstructed. 4) Cherenkov emission imaging. Gated intensified imaging allows video-rate passive detection of Cherenkov emission during radiation therapy with the room lights on. Learning Objectives: To understand the physics of a variety of dosimetry techniques based upon optical imaging To investigate the strategies to overcome respective challenges and limitations To explore novel ideas of dosimeter design Supported in part by NIH Grants R01CA148853, R01CA182450, R01CA109558

  7. WE-AB-BRB-00: Session in Memory of Robert J. Shalek: High Resolution Dosimetry from 2D to 3D to Real-Time 3D

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2016-06-15

    Despite widespread IMRT treatments at modern radiation therapy clinics, precise dosimetric commissioning of an IMRT system remains a challenge. In the most recent report from the Radiological Physics Center (RPC), nearly 20% of institutions failed an end-to-end test with an anthropomorphic head and neck phantom, a test that has rather lenient dose difference and distance-to-agreement criteria of 7% and 4 mm. The RPC report provides strong evidence that IMRT implementation is prone to error and that improved quality assurance tools are required. At the heart of radiation therapy dosimetry is the multidimensional dosimeter. However, due to the limited availability of water-equivalent dosimetry materials, research and development in this important field is challenging. In this session, we will review a few dosimeter developments that are either in the laboratory phase or in the pre-commercialization phase. 1) Radiochromic plastic. Novel formulations exhibit light absorbing optical contrast with very little scatter, enabling faster, broad beam optical CT design. 2) Storage phosphor. After irradiation, the dosimetry panels will be read out using a dedicated 2D scanning apparatus in a non-invasive, electro-optic manner and immediately restored for further use. 3) Liquid scintillator. Scintillators convert the energy from x-rays and proton beams into visible light, which can be recorded with a scientific camera (CCD or CMOS) from multiple angles. The 3D shape of the dose distribution can then be reconstructed. 4) Cherenkov emission imaging. Gated intensified imaging allows video-rate passive detection of Cherenkov emission during radiation therapy with the room lights on. Learning Objectives: To understand the physics of a variety of dosimetry techniques based upon optical imaging To investigate the strategies to overcome respective challenges and limitations To explore novel ideas of dosimeter design Supported in part by NIH Grants R01CA148853, R01CA182450, R01CA109558

  8. Open 3D Projects

    Directory of Open Access Journals (Sweden)

    Felician ALECU

    2010-01-01

    Many professionals and 3D artists consider Blender as being the best open source solution for 3D computer graphics. The main features are related to modeling, rendering, shading, imaging, compositing, animation, physics and particles and realtime 3D/game creation.

  9. Augmented reality during robot-assisted laparoscopic partial nephrectomy: toward real-time 3D-CT to stereoscopic video registration.

    Science.gov (United States)

    Su, Li-Ming; Vagvolgyi, Balazs P; Agarwal, Rahul; Reiley, Carol E; Taylor, Russell H; Hager, Gregory D

    2009-04-01

    To investigate a markerless tracking system for real-time stereo-endoscopic visualization of preoperative computed tomographic imaging as an augmented display during robot-assisted laparoscopic partial nephrectomy. Stereoscopic video segments of a patient undergoing robot-assisted laparoscopic partial nephrectomy for tumor and another for a partial staghorn renal calculus were processed to evaluate the performance of a three-dimensional (3D)-to-3D registration algorithm. After both cases, we registered a segment of the video recording to the corresponding preoperative 3D-computed tomography image. After calibrating the camera and overlay, 3D-to-3D registration was created between the model and the surgical recording using a modified iterative closest point technique. Image-based tracking technology tracked selected fixed points on the kidney surface to augment the image-to-model registration. Our investigation has demonstrated that we can identify and track the kidney surface in real time when applied to intraoperative video recordings and overlay the 3D models of the kidney, tumor (or stone), and collecting system semitransparently. Using a basic computer research platform, we achieved an update rate of 10 Hz and an overlay latency of 4 frames. The accuracy of the 3D registration was 1 mm. Augmented reality overlay of reconstructed 3D-computed tomography images onto real-time stereo video footage is possible using iterative closest point and image-based surface tracking technology that does not use external navigation tracking systems or preplaced surface markers. Additional studies are needed to assess the precision and to achieve fully automated registration and display for intraoperative use.
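
    The registration step named above relies on the iterative closest point idea; a minimal point-to-point ICP loop (closest-point correspondences followed by a least-squares rigid transform) might look like the sketch below. The toy point clouds and iteration count are illustrative, and the authors' modified ICP and image-based surface-tracking details are not reproduced here.

```python
# Minimal point-to-point ICP sketch (illustrative only; the paper uses a modified ICP on
# stereo-reconstructed kidney surfaces registered to the preoperative CT model).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dst_c - R @ src_c

def icp(source, target, iterations=30):
    """Align 'source' (N x 3) to 'target' (M x 3) with point-to-point ICP."""
    tree = cKDTree(target)
    current = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)         # closest-point correspondences
        R, t = best_rigid_transform(current, target[idx])
        current = current @ R.T + t
    return current

# Toy data: a point cloud and a slightly rotated/translated copy of it.
rng = np.random.default_rng(2)
source = rng.normal(size=(2000, 3))
angle = np.deg2rad(5.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
target = source @ R_true.T + np.array([0.10, -0.05, 0.02])

aligned = icp(source, target)
print("mean residual after ICP:", np.linalg.norm(aligned - target, axis=1).mean())
```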

  10. The value of applying nitroglycerin in 3D coronary MR angiography with real-time navigation technique

    International Nuclear Information System (INIS)

    Hackenbroch, M.; Meyer, C.; Schmiedel, A.; Hofer, U.; Flacke, S.; Kovacs, A.; Schild, H.; Sommer, T.; Tiemann, K.; Skowasch, D.

    2004-01-01

    Purpose: Nitroglycerin administration results in dilation of epicardial coronary vessels and in an increase in coronary blood flow, and has been suggested to improve MR coronary angiography. This study evaluates systematically whether administration of nitroglycerin improves the visualization of coronary arteries and, as a result, the detection of coronary artery stenosis during free breathing 3D coronary MR angiography. Materials and Methods: Coronary MR angiography was performed in 44 patients with suspected coronary artery disease at a 1.5 Tesla system (Intera, Philips Medical Systems) (a) with and (b) without continuous administration of intravenous nitroglycerin at a dose rate of 2.5 mg/h, using an ECG-gated gradient echo sequence with real-time navigator correction (turbo field echo, in-plane resolution 0.70 × 0.79 mm², acquisition window 80 ms). Equivalent segments of the coronary arteries in the sequences with and without nitroglycerin were evaluated for visualized vessel length and diameter, qualitative assessment of visualization using a four-point grading scale, and detection of stenoses >50%. Catheter coronary angiography was used as the gold standard. Results: No significant differences were found between scans with and without nitroglycerin as to the average contiguously visualized vessel length (p > 0.05) and diameter (p > 0.05). There was also no significant difference in the coronary MR angiography with and without nitroglycerin in the average qualitative assessment score of the visualization of LM, proximal LAD, proximal CX, and proximal and distal RCA (2.1±0.8 and 2.2±0.7; p > 0.05). Sensitivity (77% [17/22] vs. 82% [18/22]; p > 0.05) and specificity (72% [13/18] vs. 72% [13/18]; p > 0.05) for the detection of coronary artery stenosis also did not differ significantly between scans with and without intravenous administration of nitroglycerin. Conclusion: Administration of nitroglycerin does not improve visualization of the coronary arteries and

  11. Use of real-time three-dimensional transesophageal echocardiography in type A aortic dissections: Advantages of 3D TEE illustrated in three cases

    Directory of Open Access Journals (Sweden)

    Cindy J Wang

    2015-01-01

    Stanford type A aortic dissections often present to the hospital requiring emergent surgical intervention. Initial diagnosis is usually made by computed tomography; however transesophageal echocardiography (TEE) can further characterize aortic dissections with specific advantages: It may be performed on an unstable patient, it can be used intra-operatively, and it has the ability to provide continuous real-time information. Three-dimensional (3D) TEE has become more accessible over recent years allowing it to serve as an additional tool in the operating room. We present a case series of three patients presenting with type A aortic dissections and the advantages of intra-operative 3D TEE to diagnose the extent of dissection in each case. Prior case reports have demonstrated the use of 3D TEE in type A aortic dissections to characterize the extent of dissection and involvement of neighboring structures. In our three cases described, 3D TEE provided additional understanding of spatial relationships between the dissection flap and neighboring structures such as the aortic valve and coronary orifices that were not fully appreciated with two-dimensional TEE, which affected surgical decisions in the operating room. This case series demonstrates the utility and benefit of real-time 3D TEE during intra-operative management of a type A aortic dissection.

  12. Three-dimensional (3D) real-time conformal brachytherapy - a novel solution for prostate cancer treatment Part I. Rationale and method

    International Nuclear Information System (INIS)

    Fijalkowski, M.; Bialas, B.; Maciejewski, B.; Bystrzycka, J.; Slosarek, K.

    2005-01-01

    Recently, the system for conformal real-time high-dose-rate brachytherapy has been developed and dedicated in general for the treatment of prostate cancer. The aim of this paper is to present the 3D-conformal real-time brachytherapy technique introduced to clinical practice at the Institute of Oncology in Gliwice. Equipment and technique of 3D-conformal real time brachytherapy (3D-CBRT) is presented in detail and compared with conventional high-dose-rate brachytherapy. Step-by-step procedures of treatment planning are described, including own modifications. The 3D-CBRT offers the following advantages: (1) on-line continuous visualization of the prostate and acquisition of the series of NS images during the entire procedure of planning and treatment; (2) high precision of definition and contouring the target volume and the healthy organs at risk (urethra, rectum, bladder) based on 3D transrectal continuous ultrasound images; (3) interactive on-line dose optimization with real-time corrections of the dose-volume histograms (DVHs) till optimal dose distribution is achieved; (4) possibility to overcome internal prostate motion and set-up inaccuracies by stable positioning of the prostate with needles fixed to the template; (5) significant shortening of overall treatment time; (6) cost reduction - the treatment can be provided as an outpatient procedure. The 3D- real time CBRT can be advertised as an ideal conformal boost dose technique integrated or interdigitated with pelvic conformal external beam radiotherapy or as a monotherapy for prostate cancer. (author)

  13. A NEW AUTOMATIC SYSTEM CALIBRATION OF MULTI-CAMERAS AND LIDAR SENSORS

    Directory of Open Access Journals (Sweden)

    M. Hassanein

    2016-06-01

    In the last few years, multi-camera and LIDAR systems have drawn the attention of the mapping community. They have been deployed on different mobile mapping platforms. The different uses of these platforms, especially UAVs, have offered new applications and developments which require fast and accurate results. The successful calibration of such systems is a key factor in achieving accurate results and in the successful processing of the system measurements, especially given the different types of measurements provided by the LIDAR and the cameras. The system calibration aims to estimate the geometric relationships between the different system components. A number of applications require the systems to be ready for operation in a short time, especially disaster monitoring applications. Also, many of the present system calibration techniques are constrained by the need for special lab arrangements for the calibration procedures. In this paper, a new technique for calibration of integrated LIDAR and multi-camera systems is presented. The proposed technique offers a calibration solution that overcomes the need for special labs for standard calibration procedures. In the proposed technique, 3D reconstruction of automatically detected and matched image points is used to generate a sparse images-driven point cloud; then, a registration between the LIDAR-generated 3D point cloud and the images-driven 3D point cloud takes place to estimate the geometric relationships between the cameras and the LIDAR. In the presented technique, a simple 3D artificial target is used to simplify the lab requirements for the calibration procedure. The target is composed of three intersecting plates. This target geometry was chosen to ensure enough constraints for the convergence of the registration between the 3D point clouds constructed from the two systems. The achieved results of the proposed approach prove its ability to provide an adequate and fully automated

  14. Stability Analysis for a Multi-Camera Photogrammetric System

    Directory of Open Access Journals (Sweden)

    Ayman Habib

    2014-08-01

    Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue of interior orientation parameter variation over time, explains the common ways used for coping with the issue, and describes the existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of the changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check the system calibration stability, the proposed methods are simulation-based. Experiment results are shown, where a multi-camera photogrammetric system was calibrated three times, and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.

  15. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    Directory of Open Access Journals (Sweden)

    Sergio Orts-Escolano

    2014-04-01

    In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphic processor units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, analysis and monitoring of the movement. These features allow the construction of a robust representation of the environment and interpret the behavior of mobile agents in the scene. It is also necessary to integrate the vision module into a global system that operates in a complex environment by receiving images from multiple acquisition devices at video frequency. Offering relevant information to higher level systems, monitoring and making decisions in real time, it must accomplish a set of requirements, such as: time constraints, high availability, robustness, high processing speed and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.

  16. Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms.

    Science.gov (United States)

    Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro

    2012-09-10

    We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images that were captured by integral photography to 8K images. One of the methods increases the number of pixels of each elemental image. The other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs. The Fast Fourier transform is performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system for performing all processing from the capture to the reconstruction of 3D images by using these components and successfully used this system to reconstruct a 3D live scene at 12 frames per second.
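
    The two enlargement strategies described above can be sketched on a toy integral-photography (IP) frame: method 1 doubles the pixel count of each elemental image, while method 2 doubles the number of elemental images by interpolating across the elemental-image grid. Array sizes, the interpolation order, and the elemental-image size are illustrative assumptions, and the GPU/CUFFT hologram-generation stage is not shown.

```python
# Illustrative sketch of the two enlargement strategies for an integral-photography
# frame built from elemental images (array sizes are toy values, not 4K/8K).
import numpy as np
from scipy.ndimage import zoom

def split_elemental(ip_image, ei_size):
    """Reshape an IP frame into a grid of elemental images: (rows, cols, h, w)."""
    H, W = ip_image.shape
    rows, cols = H // ei_size, W // ei_size
    return ip_image.reshape(rows, ei_size, cols, ei_size).swapaxes(1, 2)

def enlarge_pixels_per_ei(ip_image, ei_size):
    """Method 1: keep the number of elemental images, double the pixels in each."""
    grid = split_elemental(ip_image, ei_size)
    up = zoom(grid, (1, 1, 2, 2), order=1)        # bilinear upsampling of each EI
    rows, cols, h, w = up.shape
    return up.swapaxes(1, 2).reshape(rows * h, cols * w)

def enlarge_number_of_ei(ip_image, ei_size):
    """Method 2: keep the elemental-image size, double how many there are."""
    grid = split_elemental(ip_image, ei_size)
    up = zoom(grid, (2, 2, 1, 1), order=1)        # interpolate new elemental images
    rows, cols, h, w = up.shape
    return up.swapaxes(1, 2).reshape(rows * h, cols * w)

ip = np.random.rand(240, 320)                     # toy IP frame with 16x16-pixel EIs
print(enlarge_pixels_per_ei(ip, 16).shape)        # (480, 640)
print(enlarge_number_of_ei(ip, 16).shape)         # (480, 640)
```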

  17. Application of Real-Time 3D Navigation System in CT-Guided Percutaneous Interventional Procedures: A Feasibility Study

    Directory of Open Access Journals (Sweden)

    Priya Bhattacharji

    2017-01-01

    Introduction. To evaluate the accuracy of a quantitative 3D navigation system for CT-guided interventional procedures in a two-part study. Materials and Methods. Twenty-two procedures were performed in abdominal and thoracic phantoms. Accuracies of the 3D anatomy map registration and navigation were evaluated. Time used for the navigated procedures was recorded. In the IRB approved clinical evaluation, 21 patients scheduled for CT-guided thoracic and hepatic biopsy and ablations were recruited. CT-guided procedures were performed without following the 3D navigation display. Accuracy of navigation as well as workflow fitness of the system was evaluated. Results. In phantoms, the average 3D anatomy map registration error was 1.79 mm. The average navigated needle placement accuracy for one-pass and two-pass procedures, respectively, was 2.0±0.7 mm and 2.8±1.1 mm in the liver and 2.7±1.7 mm and 3.0±1.4 mm in the lung. The average accuracy of the 3D navigation system in human subjects was 4.6 mm ± 3.1 for all procedures. The system fits the existing workflow of CT-guided interventions with minimum impact. Conclusion. A 3D navigation system can be performed along the existing workflow and has the potential to navigate precision needle placement in CT-guided interventional procedures.

  18. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    International Nuclear Information System (INIS)

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon; Ra, Jong Beom; Lee, Jae Young

    2015-01-01

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have some limits, either in the registration speed or the performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As feature(s) to use for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with a diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real-time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a

  19. Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy.

    Science.gov (United States)

    Li, Ruijiang; Jia, Xun; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B

    2010-06-01

    To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Given a set of volumetric images of a patient at N breathing phases as the training data, deformable image registration was performed between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, new DVFs can be generated, which, when applied on the reference image, lead to new volumetric images. A volumetric image can then be reconstructed from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image. The algorithm was implemented on graphics processing units (GPUs) to achieve real-time efficiency. The training data were generated using a realistic and dynamic mathematical phantom with ten breathing phases. The testing data were 360 cone beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude. The average relative image intensity error of the reconstructed volumetric images is 6.9% +/- 2.4%. The average 3D tumor localization error is 0.8 +/- 0.5 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for reconstructing a volumetric image from each projection is 0.24 s (range: 0.17 and 0.35 s). The authors have shown the feasibility of reconstructing volumetric images and localizing tumor positions in 3D in near real-time from a single x-ray image.
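
    A heavily simplified sketch of the PCA-coefficient idea follows: principal motion modes are extracted from training deformation vector fields (DVFs), and the coefficients are optimized so that the projection of the deformed reference volume matches a measured projection. Here the projection is just a toy linear operator and "deformation" is a simple additive stand-in; the real method uses cone-beam projection, volumetric warping, and GPU acceleration.

```python
# Conceptual sketch of the PCA-coefficient idea (heavily simplified): the 'projection'
# is a toy linear operator on a flattened volume and 'deformation' is additive, unlike
# the cone-beam projection and volumetric warping used in the paper.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n_vox, n_phases, n_modes, n_rays = 4000, 9, 3, 400

# Training DVFs (flattened) from the N-1 registered breathing phases, and their PCA.
dvfs = rng.normal(size=(n_phases, n_vox))
mean_dvf = dvfs.mean(axis=0)
_, _, Vt = np.linalg.svd(dvfs - mean_dvf, full_matrices=False)
basis = Vt[:n_modes]                              # principal motion modes

reference = rng.normal(size=n_vox)                # reference-phase volume (flattened)
A = rng.normal(size=(n_rays, n_vox)) / n_vox      # toy projection operator

def deform(volume, dvf):
    """Toy stand-in for warping the reference volume with a DVF."""
    return volume + dvf

# 'Measured' projection produced by an unseen combination of the motion modes.
w_true = np.array([1.5, -0.7, 0.4])
measured = A @ deform(reference, mean_dvf + w_true @ basis)

def objective(w):
    """Squared mismatch between the computed and measured projections."""
    return np.sum((A @ deform(reference, mean_dvf + w @ basis) - measured) ** 2)

w_hat = minimize(objective, x0=np.zeros(n_modes), method="Powell").x
print("true PCA coefficients:", w_true, "estimated:", np.round(w_hat, 3))
```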

  20. Distributed Sensing and Processing for Multi-Camera Networks

    Science.gov (United States)

    Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.

    Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.

  1. Underwater video enhancement using multi-camera super-resolution

    Science.gov (United States)

    Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.

    2017-12-01

    Image spatial resolution is critical in several fields such as medicine, communications, satellite, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that makes it possible to enhance the quality of underwater video sequences without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
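
    The two objective metrics named above are available off the shelf; a quick example using scikit-image on a stand-in test frame (not an underwater sequence) is shown below.

```python
# Quick example of the two quality metrics mentioned above, computed with scikit-image
# on a toy frame (the paper applies them to underwater video sequences).
import numpy as np
from skimage import data
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = data.camera().astype(np.float64)      # stand-in for a reference frame
rng = np.random.default_rng(4)
degraded = np.clip(reference + 10.0 * rng.normal(size=reference.shape), 0, 255)

psnr = peak_signal_noise_ratio(reference, degraded, data_range=255)
ssim = structural_similarity(reference, degraded, data_range=255)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```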

  2. Development of real-time motion capture system for 3D on-line games linked with virtual character

    Science.gov (United States)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    With the development of 3-D virtual reality, motion tracking has become an essential part of the entertainment, medical, sports, education, and industrial fields. Virtual human characters in digital animation and game applications have traditionally been controlled by interface devices such as mice, joysticks, MIDI sliders, and so on. Those devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end commercial human motion capture systems are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors and link the data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  3. Real-time monitoring of quorum sensing in 3D-printed bacterial aggregates using scanning electrochemical microscopy.

    Science.gov (United States)

    Connell, Jodi L; Kim, Jiyeon; Shear, Jason B; Bard, Allen J; Whiteley, Marvin

    2014-12-23

    Microbes frequently live in nature as small, densely packed aggregates containing ∼10¹–10⁵ cells. These aggregates not only display distinct phenotypes, including resistance to antibiotics, but also, serve as building blocks for larger biofilm communities. Aggregates within these larger communities display nonrandom spatial organization, and recent evidence indicates that this spatial organization is critical for fitness. Studying single aggregates as well as spatially organized aggregates remains challenging because of the technical difficulties associated with manipulating small populations. Micro-3D printing is a lithographic technique capable of creating aggregates in situ by printing protein-based walls around individual cells or small populations. This 3D-printing strategy can organize bacteria in complex arrangements to investigate how spatial and environmental parameters influence social behaviors. Here, we combined micro-3D printing and scanning electrochemical microscopy (SECM) to probe quorum sensing (QS)-mediated communication in the bacterium Pseudomonas aeruginosa. Our results reveal that QS-dependent behaviors are observed within aggregates as small as 500 cells; however, aggregates larger than 2,000 bacteria are required to stimulate QS in neighboring aggregates positioned 8 μm away. These studies provide a powerful system to analyze the impact of spatial organization and aggregate size on microbial behaviors.

  4. Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation.

    Science.gov (United States)

    Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud

    2017-01-01

    In this paper, we present a real-time approach that allows the tracking of deformable structures in 3D ultrasound sequences. Our method consists in obtaining the target displacements by combining robust dense motion estimation and mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. Results demonstrate that this novel approach has the advantage of providing correct motion estimation in the presence of different ultrasound shortcomings, including speckle noise, large shadows, and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Visual simultaneous localization and mapping (VSLAM) methods applied to indoor 3D topographical and radiological mapping in real-time

    International Nuclear Information System (INIS)

    Hautot, F.; Dubart, P.; Chagneau, B.; Bacri, C.O.; Abou-Khalil, R.

    2017-01-01

    New developments in the field of robotics and computer vision make it possible to merge sensors, allowing fast real-time localization of radiological measurements in space/volume together with near real-time identification and characterization of radioactive sources. These capabilities make nuclear investigations more efficient with respect to operator dosimetry evaluation, intervention scenarios, and risk mitigation and simulation, for example for accidents in unknown, potentially contaminated areas or during dismantling operations. This paper presents new progress in merging RGB-D camera-based SLAM (Simultaneous Localization and Mapping) systems with nuclear measurement-in-motion methods in order to detect, locate, and evaluate the activity of radioactive sources in three dimensions

  6. Visual Simultaneous Localization And Mapping (VSLAM) methods applied to indoor 3D topographical and radiological mapping in real-time

    Science.gov (United States)

    Hautot, Felix; Dubart, Philippe; Bacri, Charles-Olivier; Chagneau, Benjamin; Abou-Khalil, Roger

    2017-09-01

    New developments in the field of robotics and computer vision make it possible to merge sensors, allowing fast real-time localization of radiological measurements in space/volume together with near real-time identification and characterization of radioactive sources. These capabilities make nuclear investigations more efficient with respect to operator dosimetry evaluation, intervention scenarios, and risk mitigation and simulation, for example for accidents in unknown, potentially contaminated areas or during dismantling operations

  7. Induced tauopathy in a novel 3D-culture model mediates neurodegenerative processes: a real-time study on biochips.

    Directory of Open Access Journals (Sweden)

    Diana Seidel

    Tauopathies including Alzheimer's disease represent one of the major health problems of the aging population worldwide. Therefore, a better understanding of tau-dependent pathologies and, consequently, of tau-related intervention strategies is highly demanded. In recent years, several tau-focused therapies have been proposed with the aim of stopping disease progression. However, to develop efficient active pharmaceutical ingredients for the broad treatment of Alzheimer's disease patients, further improvements are necessary for understanding the detailed neurodegenerative processes as well as the mechanism and side effects of potential active pharmaceutical ingredients (API) in the neuronal system. In this context, there is a lack of suitable complex in vitro cell culture models recapitulating major aspects of tau-pathological degenerative processes in a sufficiently short time and in a reproducible manner. Herewith, we describe a novel 3D SH-SY5Y cell-based tauopathy model that shows advanced characteristics of matured neurons in comparison to monolayer cultures without the need of artificial differentiation-promoting agents. Moreover, the recombinant expression of a novel highly pathologic fourfold mutated human tau variant led to a fast and pronounced degeneration of neuritic processes. The neurodegenerative effects could be analyzed in real time and with high sensitivity using our unique microcavity array-based impedance spectroscopy measurement system. We were able to quantify a time- and concentration-dependent relative impedance decrease when Alzheimer's disease-like tau pathology was induced in the neuronal 3D cell culture model. In combination with the collected optical information, the degenerative processes within each 3D culture could be monitored and analyzed. More strikingly, tau-specific regenerative effects caused by tau-focused active pharmaceutical ingredients could be quantitatively monitored by impedance spectroscopy. Bringing together our novel complex 3

  8. A 3D simulation look-up library for real-time airborne gamma-ray spectroscopy

    Science.gov (United States)

    Kulisek, Jonathan A.; Wittman, Richard S.; Miller, Erin A.; Kernan, Warnick J.; McCall, Jonathon D.; McConn, Ron J.; Schweppe, John E.; Seifert, Carolyn E.; Stave, Sean C.; Stewart, Trevor N.

    2018-01-01

    A three-dimensional look-up library consisting of simulated gamma-ray spectra was developed to leverage, in real-time, the abundance of data provided by a helicopter-mounted gamma-ray detection system consisting of 92 CsI-based radiation sensors and exhibiting a highly angular-dependent response. We have demonstrated how this library can be used to help effectively estimate the terrestrial gamma-ray background, develop simulated flight scenarios, and to localize radiological sources. Source localization accuracy was significantly improved, particularly for weak sources, by estimating the entire gamma-ray spectra while accounting for scattering in the air, and especially off the ground.

  9. Novel, high-definition 3-D endoscopy system with real-time compression communication system to aid diagnoses and treatment between hospitals in Thailand.

    Science.gov (United States)

    Uemura, Munenori; Kenmotsu, Hajime; Tomikawa, Morimasa; Kumashiro, Ryuichi; Yamashita, Makoto; Ikeda, Testuo; Yamashita, Hiromasa; Chiba, Toshio; Hayashi, Koichi; Sakae, Eiji; Eguchi, Mitsuo; Fukuyo, Tsuneo; Chittmittrapap, Soottiporn; Navicharern, Patpong; Chotiwan, Pornarong; Pattana-Arum, Jirawat; Hashizume, Makoto

    2015-05-01

    Traditionally, laparoscopy has been based on 2-D imaging, which represents a considerable challenge. As a result, 3-D visualization technology has been proposed as a way to better facilitate laparoscopy. We compared the latest 3-D systems with high-end 2-D monitors to validate the usefulness of new systems for endoscopic diagnoses and treatment in Thailand. We compared the abilities of our high-definition 3-D endoscopy system with real-time compression communication system with a conventional high-definition (2-D) endoscopy system by asking health-care staff to complete tasks. Participants answered questionnaires and whether procedures were easier using our system or the 2-D endoscopy system. Participants were significantly faster at suture insertion with our system (34.44 ± 15.91 s) than with the 2-D system (52.56 ± 37.51 s) (P < 0.01). Most surgeons thought that the 3-D system was good in terms of contrast, brightness, perception of the anteroposterior position of the needle, needle grasping, inserting the needle as planned, and needle adjustment during laparoscopic surgery. Several surgeons highlighted the usefulness of exposing and clipping the bile duct and gallbladder artery, as well as dissection from the liver bed during laparoscopic surgery. In an image-transfer experiment with RePure-L®, participants at Rajavithi Hospital could obtain reconstructed 3-D images that were non-inferior to conventional images from Chulalongkorn University Hospital (10 km away). These data suggest that our newly developed system could be of considerable benefit to the health-care system in Thailand. Transmission of moving endoscopic images from a center of excellence to a rural hospital could help in the diagnosis and treatment of various diseases. © 2015 Japan Society for Endoscopic Surgery, Asia Endosurgery Task Force and Wiley Publishing Asia Pty Ltd.

  10. Real-time 3D imaging methods using 2D phased arrays based on synthetic focusing techniques.

    Science.gov (United States)

    Kim, Jung-Jun; Song, Tai-Kyong

    2008-07-01

    A fast 3D ultrasound imaging technique using a 2D phased array transducer based on the synthetic focusing method for nondestructive testing or medical imaging is proposed. In the proposed method, each column of a 2D array is fired successively to produce transverse fan beams focused at a fixed depth along a given longitudinal direction and the resulting pulse echoes are received at all elements of a 2D array used. After firing all column arrays, a frame of high-resolution image along a given longitudinal direction is obtained with dynamic focusing employed in the longitudinal direction on receive and in the transverse direction on both transmit and receive. The volume rate of the proposed method can be increased much higher than that of the conventional 2D array imaging by employing an efficient sparse array technique. A simple modification to the proposed method can further increase the volume scan rate significantly. The proposed methods are verified through computer simulations.
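
    A much-simplified sketch of the receive-side synthetic focusing (delay-and-sum) idea is given below for a single image point and a 1D receive aperture: round-trip delays to the point are computed per element and the RF channel data are summed coherently at those delays. The array geometry, sampling rate, plane-wave transmit approximation, and toy RF data are assumptions, not the proposed 2D-array column-firing scheme.

```python
# Simplified delay-and-sum sketch of the synthetic (receive) focusing idea for one
# image point; array geometry, sampling, and RF data are illustrative only.
import numpy as np

C = 1540.0            # speed of sound in tissue [m/s]
FS = 40e6             # RF sampling rate [Hz]
PITCH = 0.3e-3        # element pitch [m]
N_ELEM = 64           # elements in the (1D) receive aperture

# Element x-positions, centred on the array axis.
elem_x = (np.arange(N_ELEM) - (N_ELEM - 1) / 2) * PITCH

def focus_point(rf, x, z):
    """Coherently sum RF channel data (n_elem x n_samples) at image point (x, z)."""
    # Round-trip time: transmit path approximated as z / C (wave travelling along z),
    # receive path is the exact element-to-point distance.
    rx_dist = np.sqrt((elem_x - x) ** 2 + z ** 2)
    delays = (z + rx_dist) / C                          # seconds per element
    idx = np.round(delays * FS).astype(int)             # nearest RF sample
    valid = (idx >= 0) & (idx < rf.shape[1])
    return np.sum(rf[np.flatnonzero(valid), idx[valid]])

# Toy RF data: one point scatterer at (x = 0, z = 20 mm) recorded by all elements.
z_sc = 20e-3
rf = np.zeros((N_ELEM, 4096))
sc_idx = np.round(((z_sc + np.sqrt(elem_x ** 2 + z_sc ** 2)) / C) * FS).astype(int)
rf[np.arange(N_ELEM), sc_idx] = 1.0

# The beamformed amplitude peaks when we focus exactly on the scatterer.
print(focus_point(rf, 0.0, 20e-3), focus_point(rf, 0.0, 25e-3))
```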

  11. Real-time deformation of human soft tissues: A radial basis meshless 3D model based on Marquardt's algorithm.

    Science.gov (United States)

    Zhou, Jianyong; Luo, Zu; Li, Chunquan; Deng, Mi

    2018-01-01

    When the meshless method is used to establish the mathematical-mechanical model of human soft tissues, it is necessary to define the space occupied by human tissues as the problem domain and the boundary of the domain as the surface of those tissues. Nodes should be distributed in both the problem domain and on the boundaries. Under external force, the displacement of the node is computed by the meshless method to represent the deformation of biological soft tissues. However, computation by the meshless method consumes too much time, which will affect the simulation of real-time deformation of human tissues in virtual surgery. In this article, the Marquardt's Algorithm is proposed to fit the nodal displacement at the problem domain's boundary and obtain the relationship between surface deformation and force. When different external forces are applied, the deformation of soft tissues can be quickly obtained based on this relationship. The analysis and discussion show that the improved model equations with Marquardt's Algorithm not only can simulate the deformation in real-time but also preserve the authenticity of the deformation model's physical properties. Copyright © 2017 Elsevier B.V. All rights reserved.
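
    As a hedged illustration of the fitting idea (not the authors' radial-basis meshless model), the sketch below fits a simple nonlinear displacement-force relationship with the Levenberg-Marquardt algorithm via SciPy's curve_fit; at run time, the deformation for a new force is then read from the fitted relationship instead of re-running the solver. The model form and data are invented for demonstration.

```python
# Hedged sketch: fitting a surface displacement-force relationship with the
# Levenberg-Marquardt algorithm (scipy curve_fit, method="lm"); the model form and
# the data are illustrative, not the authors' meshless formulation.
import numpy as np
from scipy.optimize import curve_fit

def displacement_model(force, a, b, c):
    """Simple saturating nonlinear relationship between applied force and displacement."""
    return a * (1.0 - np.exp(-b * force)) + c * force

# Pretend these samples came from offline meshless (radial-basis) simulations.
force = np.linspace(0.0, 5.0, 25)                                         # N
rng = np.random.default_rng(5)
disp = displacement_model(force, 8.0, 0.6, 0.4) + 0.05 * rng.normal(size=force.size)  # mm

params, _ = curve_fit(displacement_model, force, disp, p0=[1.0, 1.0, 1.0], method="lm")
a, b, c = params
print("fitted parameters:", np.round(params, 3))

# At run time, deformation for a new force is read from the fitted relationship
# instead of re-running the meshless solver.
print("predicted displacement at 2.3 N:", displacement_model(2.3, a, b, c), "mm")
```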

  12. Real-time 3D transesophageal echocardiography-guided closure of a complicated patent ductus arteriosus in a dog.

    Science.gov (United States)

    Doocy, K R; Nelson, D A; Saunders, A B

    2017-06-01

    Advanced imaging modalities are becoming more widely available in veterinary cardiology, including the use of transesophageal echocardiography (TEE) during occlusion of patent ductus arteriosus (PDA) in dogs. The dog in this report had a complex history of attempted ligation and a large PDA that initially precluded device placement thereby limiting the options for PDA closure. Following a second thoracotomy and partial ligation, the morphology of the PDA was altered and device occlusion was an option. Angiographic assessment of the PDA was limited by the presence of hemoclips, and the direction of ductal flow related to the change in anatomy following ligature placement. Intra-operative TEE, in particular real-time three-dimensional imaging, was pivotal for assessing the PDA morphology, monitoring during the procedure, selecting the device size, and confirming device placement. The TEE images increased operator confidence that the size and location of the device were appropriate before release despite the unusual position. This report highlights the benefit of intra-operative TEE, in particular real-time three-dimensional imaging, for successful PDA occlusion in a complicated case. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    International Nuclear Information System (INIS)

    Min Yugang; Santhanam, Anand; Ruddy, Bari H; Neelakkantan, Harini; Meeks, Sanford L; Kupelian, Patrick A

    2010-01-01

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan using Pinnacle Treatment Planning System, Phillips, for one of the 3DCTs of the 4DCT and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.
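
    The dose-accumulation step described above can be sketched in a few lines: for each discrete motion state, the planned dose grid is sampled at the displaced voxel positions and summed over the beam delivery period. The following Python fragment is an illustrative CPU stand-in for the paper's GPU implementation; the grid size, the rigid sinusoidal shift and all variable names are assumptions.

      import numpy as np
      from scipy.ndimage import map_coordinates

      dose_grid = np.random.rand(64, 64, 64)      # placeholder static planned dose
      voxels = np.indices((64, 64, 64)).reshape(3, -1).astype(float)

      def accumulate(displacements_per_step):
          """Sum the dose seen by each voxel over all motion steps (e.g. 20 per breath)."""
          total = np.zeros(voxels.shape[1])
          for d in displacements_per_step:            # d: (3,) displacement in voxel units
              moved = voxels + d[:, None]
              total += map_coordinates(dose_grid, moved, order=1, mode='nearest')
          return total.reshape(64, 64, 64)

      # 20 discrete steps of a simple sinusoidal cranial-caudal shift (in voxels), assumed
      steps = [np.array([0.0, 0.0, 5.0 * np.sin(2 * np.pi * k / 20)]) for k in range(20)]
      accumulated = accumulate(steps)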

  14. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    Energy Technology Data Exchange (ETDEWEB)

    Min Yugang; Santhanam, Anand; Ruddy, Bari H [University of Central Florida, FL (United States); Neelakkantan, Harini; Meeks, Sanford L [M D Anderson Cancer Center Orlando, FL (United States); Kupelian, Patrick A, E-mail: anand.santhanam@orlandohealth.co [Department of Radiation Oncology, University of California, Los Angeles, CA (United States)

    2010-09-07

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan using Pinnacle Treatment Planning System, Phillips, for one of the 3DCTs of the 4DCT and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  15. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion.

    Science.gov (United States)

    Min, Yugang; Santhanam, Anand; Neelakkantan, Harini; Ruddy, Bari H; Meeks, Sanford L; Kupelian, Patrick A

    2010-09-07

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the delivered dose to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan using Pinnacle Treatment Planning System, Phillips, for one of the 3DCTs of the 4DCT and predicts the amount and location of radiation doses deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical flow algorithm. The motion of the tumor and the motion of the surrounding tissues were simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is generated by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery time period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  16. A real-time monitoring/emergency response workstation using a 3-D numerical model initialized with SODAR

    International Nuclear Information System (INIS)

    Lawver, B.S.; Sullivan, T.J.; Baskett, R.L.

    1993-01-01

    Many workstation-based emergency response dispersion modeling systems provide simple Gaussian models driven by single meteorological tower inputs to estimate the downwind consequences from accidental spills or stack releases. Complex meteorological or terrain settings demand more sophisticated resolution of the three-dimensional structure of the atmosphere to reliably calculate plume dispersion. Mountain valleys and sea breeze flows are two common examples of such settings. To address these complexities, we have implemented the three-dimensional diagnostic MATHEW mass-adjusted wind field and ADPIC particle-in-cell dispersion models on a workstation for use in real-time emergency response modeling. Both MATHEW and ADPIC have shown their utility in a variety of complex settings over the last 15 years within the Department of Energy's Atmospheric Release Advisory Capability project.

  17. VR-Planets : a 3D immersive application for real-time flythrough images of planetary surfaces

    Science.gov (United States)

    Civet, François; Le Mouélic, Stéphane

    2015-04-01

    During the last two decades, a fleet of planetary probes has acquired several hundred gigabytes of images of planetary surfaces. Mars has been particularly well covered thanks to the Mars Global Surveyor, Mars Express and Mars Reconnaissance Orbiter spacecraft. The HRSC, CTX and HiRISE instruments allowed the computation of Digital Elevation Models with a resolution from hundreds of meters up to 1 meter per pixel, and corresponding orthoimages with a resolution from a few hundred meters up to 25 centimeters per pixel. The integration of such huge data sets into a system allowing user-friendly manipulation, either for scientific investigation or for public outreach, can represent a real challenge. We are investigating how innovative tools can be used to freely fly over reconstructed landscapes in real time, using technologies derived from the game industry and virtual reality. We have developed an application based on a game engine, using planetary data, to immerse users in real martian landscapes. The user can freely navigate in each scene at full spatial resolution using a game controller. The actual rendering is compatible with several visualization devices such as 3D active screens, virtual reality headsets (Oculus Rift), and Android devices.

  18. Novel System for Real-Time Integration of 3-D Echocardiography and Fluoroscopy for Image-Guided Cardiac Interventions: Preclinical Validation and Clinical Feasibility Evaluation

    Science.gov (United States)

    Housden, R. James; Ma, Yingliang; Rajani, Ronak; Gao, Gang; Nijhof, Niels; Cathier, Pascal; Bullens, Roland; Gijsbers, Geert; Parish, Victoria; Kapetanakis, Stamatis; Hancock, Jane; Rinaldi, C. Aldo; Cooklin, Michael; Gill, Jaswinder; Thomas, Martyn; O'neill, Mark D.; Razavi, Reza; Rhode, Kawal S.

    2014-01-01

    Real-time imaging is required to guide minimally invasive catheter-based cardiac interventions. While transesophageal echocardiography allows for high-quality visualization of cardiac anatomy, X-ray fluoroscopy provides excellent visualization of devices. We have developed a novel image fusion system that allows real-time integration of 3-D echocardiography and the X-ray fluoroscopy. The system was validated in the following two stages: 1) preclinical to determine function and validate accuracy; and 2) in the clinical setting to assess clinical workflow feasibility and determine overall system accuracy. In the preclinical phase, the system was assessed using both phantom and porcine experimental studies. Median 2-D projection errors of 4.5 and 3.3 mm were found for the phantom and porcine studies, respectively. The clinical phase focused on extending the use of the system to interventions in patients undergoing either atrial fibrillation catheter ablation (CA) or transcatheter aortic valve implantation (TAVI). Eleven patients were studied with nine in the CA group and two in the TAVI group. Successful real-time view synchronization was achieved in all cases with a calculated median distance error of 2.2 mm in the CA group and 3.4 mm in the TAVI group. A standard clinical workflow was established using the image fusion system. These pilot data confirm the technical feasibility of accurate real-time echo-fluoroscopic image overlay in clinical practice, which may be a useful adjunct for real-time guidance during interventional cardiac procedures. PMID:27170872
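
    The 2-D projection error quoted above can be computed by projecting registered 3-D echo points through the calibrated X-ray projection matrix and comparing them with reference annotations on the fluoroscopy image. A minimal sketch follows; the projection matrix, point coordinates and reference detections are made-up values, not data from the study.

      import numpy as np

      P = np.array([[1500.0, 0.0, 512.0, 0.0],     # assumed 3x4 fluoroscopy projection matrix
                    [0.0, 1500.0, 512.0, 0.0],
                    [0.0, 0.0, 1.0, 0.0]])

      def project(points_3d):
          """Project 3D points (mm) to 2D pixel coordinates with a pinhole model."""
          homog = np.c_[points_3d, np.ones(len(points_3d))]
          q = homog @ P.T
          return q[:, :2] / q[:, 2:3]

      echo_points = np.array([[10.0, -5.0, 900.0], [12.0, 0.0, 905.0]])   # mm, assumed
      reference_2d = np.array([[528.7, 503.7], [531.9, 512.0]])           # pixels, assumed

      errors = np.linalg.norm(project(echo_points) - reference_2d, axis=1)
      median_error_px = np.median(errors)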

  19. Contrast-enhanced MR angiography of the carotid artery using 3D time-resolved imaging of contrast kinetics. Comparison with real-time fluoroscopic triggered 3D-elliptical centric view ordering

    International Nuclear Information System (INIS)

    Naganawa, Shinji; Koshikawa, Tokiko; Fukatsu, Hiroshi; Sakurai, Yasuo; Ishiguchi, Tsuneo; Ishigaki, Takeo; Ichinose, Nobuyasu

    2001-01-01

    The purpose of this study was to evaluate contrast-enhanced MR angiography using the 3D time-resolved imaging of contrast kinetics technique (3D-TRICKS) by direct comparison with the fluoroscopic triggered 3D-elliptical centric view ordering (3D-ELLIP) technique. 3D-TRICKS and 3D-ELLIP were directly compared on a 1.5-Tesla MR unit using the same spatial resolution and matrix. In 3D-TRICKS, the central part of the k-space is updated more frequently than the peripheral part of the k-space, which is divided in the slice-encoding direction. The carotid arteries were imaged using 3D-TRICKS and 3D-ELLIP sequentially in 14 patients. Temporal resolution was 12 sec for 3D-ELLIP and 6 sec for 3D-TRICKS. The signal-to-noise ratio (S/N) of the common carotid artery was measured, and the quality of MIP images was then scored in terms of venous overlap and blurring of vessel contours. No significant difference in mean S/N was seen between the two methods. Significant venous overlap was not seen in any of the patients examined. Moderate blurring of vessel contours was noted on 3D-TRICKS in five patients and on 3D-ELLIP in four patients. Blurring in the slice-encoding direction was slightly more pronounced in 3D-TRICKS. However, qualitative analysis scores showed no significant differences. When the spatial resolution of the two methods was identical, the performance of 3D-TRICKS was found to be comparable in static visualization of the carotid arteries with 3D-ELLIP, although blurring in the slice-encoding direction was slightly more pronounced in 3D-TRICKS. 3D-TRICKS is a more robust technique than 3D-ELLIP, because 3D-ELLIP requires operator-dependent fluoroscopic triggering. Furthermore, 3D-TRICKS can achieve higher temporal resolution. For the spatial resolution employed in this study, 3D-TRICKS may be the method of choice. (author)

  20. A Quality Evaluation of Single and Multiple Camera Calibration Approaches for an Indoor Multi Camera Tracking System

    Directory of Open Access Journals (Sweden)

    M. Adduci

    2014-06-01

    Full Text Available Human detection and tracking has been a prominent research area for several scientists around the globe. State of the art algorithms have been implemented, refined and accelerated to significantly improve the detection rate and eliminate false positives. While 2D approaches are well investigated, 3D human detection and tracking is still an unexplored research field. In both 2D/3D cases, introducing a multi camera system could vastly expand the accuracy and confidence of the tracking process. Within this work, a quality evaluation is performed on a multi RGB-D camera indoor tracking system for examining how camera calibration and pose can affect the quality of human tracks in the scene, independently from the detection and tracking approach used. After performing a calibration step on every Kinect sensor, state of the art single camera pose estimators were evaluated for checking how good the quality of the poses is estimated using planar objects such as an ordinate chessboard. With this information, a bundle block adjustment and ICP were performed for verifying the accuracy of the single pose estimators in a multi camera configuration system. Results have shown that single camera estimators provide high accuracy results of less than half a pixel forcing the bundle to converge after very few iterations. In relation to ICP, relative information between cloud pairs is more or less preserved giving a low score of fitting between concatenated pairs. Finally, sensor calibration proved to be an essential step for achieving maximum accuracy in the generated point clouds, and therefore in the accuracy of the produced 3D trajectories, from each sensor.
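
    As a concrete illustration of the single-camera pose estimation step evaluated above, the sketch below estimates a camera pose from a planar chessboard with OpenCV; the board dimensions, intrinsics and file name are assumptions rather than the parameters used in the study.

      import numpy as np
      import cv2

      pattern = (9, 6)                 # inner chessboard corners, assumed
      square = 0.025                   # square size in metres, assumed
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

      K = np.array([[525.0, 0.0, 319.5],   # assumed Kinect-like intrinsics
                    [0.0, 525.0, 239.5],
                    [0.0, 0.0, 1.0]])
      dist = np.zeros(5)

      img = cv2.imread('board.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input image
      if img is not None:
          found, corners = cv2.findChessboardCorners(img, pattern)
          if found:
              corners = cv2.cornerSubPix(
                  img, corners, (11, 11), (-1, -1),
                  (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
              ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)   # board pose in camera frame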

  1. Mitigating Space Weather Impacts on the Power Grid in Real-Time: Applying 3-D EarthScope Magnetotelluric Data to Forecasting Reactive Power Loss in Power Transformers

    Science.gov (United States)

    Schultz, A.; Bonner, L. R., IV

    2017-12-01

    Current efforts to assess risk to the power grid from geomagnetic disturbances (GMDs) that result in geomagnetically induced currents (GICs) seek to identify potential "hotspots," based on statistical models of GMD storm scenarios and power distribution grounding models that assume that the electrical conductivity of the Earth's crust and mantle varies only with depth. The NSF-supported EarthScope Magnetotelluric (MT) Program operated by Oregon State University has mapped 3-D ground electrical conductivity structure across more than half of the continental US. MT data, the naturally occurring time variations in the Earth's vector electric and magnetic fields at ground level, are used to determine the MT impedance tensor for each site (the ratio of horizontal vector electric and magnetic fields at ground level expressed as a complex-valued frequency domain quantity). The impedance provides information on the 3-D electrical conductivity structure of the Earth's crust and mantle. We demonstrate that use of 3-D ground conductivity information significantly improves the fidelity of GIC predictions over existing 1-D approaches. We project real-time magnetic field data streams from US Geological Survey magnetic observatories into a set of linear filters that employ the impedance data and that generate estimates of ground level electric fields at the locations of MT stations. The resulting ground electric fields are projected to and integrated along the path of power transmission lines. This serves as inputs to power flow models that represent the power transmission grid, yielding a time-varying set of quasi-real-time estimates of reactive power loss at the power transformers that are critical infrastructure for power distribution. We demonstrate that peak reactive power loss and hence peak risk for transformer damage from GICs does not necessarily occur during peak GMD storm times, but rather depends on the time-evolution of the polarization of the GMD's inducing fields
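
    The linear-filter step, estimating ground electric fields from observed magnetic fields through the MT impedance tensor, can be written in the frequency domain as E(ω) = Z(ω)H(ω). The Python sketch below illustrates the idea with a synthetic magnetic time series and a placeholder impedance model; it is not the operational EarthScope/USGS processing chain, and every value in it is an assumption.

      import numpy as np

      fs = 1.0                                   # samples per second, assumed
      n = 4096
      # observed horizontal magnetic field variations Bx, By (nT), synthetic for illustration
      B = np.vstack([np.random.randn(n), np.random.randn(n)])

      H = np.fft.rfft(B, axis=1)
      freqs = np.fft.rfftfreq(n, d=1.0 / fs)

      def impedance(f):
          """Assumed 2x2 complex MT impedance tensor at frequency f (placeholder model)."""
          z = (1.0 + 1.0j) * np.sqrt(np.maximum(f, 1e-6))
          return np.array([[0.1 * z, z], [-z, -0.1 * z]])

      E = np.empty_like(H)
      for k, f in enumerate(freqs):
          E[:, k] = impedance(f) @ H[:, k]       # E(w) = Z(w) H(w)

      # ground E-field estimate, to be integrated along transmission-line paths
      e_field_time = np.fft.irfft(E, n=n, axis=1)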

  2. Left-ventricle segmentation in real-time 3D echocardiography using a hybrid active shape model and optimal graph search approach

    Science.gov (United States)

    Zhang, Honghai; Abiose, Ademola K.; Campbell, Dwayne N.; Sonka, Milan; Martins, James B.; Wahle, Andreas

    2010-03-01

    Quantitative analysis of the left ventricular shape and motion patterns associated with left ventricular mechanical dyssynchrony (LVMD) is essential for diagnosis and treatment planning in congestive heart failure. Real-time 3D echocardiography (RT3DE) used for LVMD analysis is frequently limited by heavy speckle noise or partially incomplete data, so a segmentation method utilizing learned global shape knowledge is beneficial. In this study, the endocardial surface of the left ventricle (LV) is segmented using a hybrid approach combining an active shape model (ASM) with optimal graph search, the latter being used to refine landmarks within the ASM framework. Optimal graph search translates the 3D segmentation into the detection of a minimum-cost closed set in a graph and can produce a globally optimal result. Various information terms (gradient, intensity distributions, and regional properties) are used to define the costs for the graph search. The developed method was tested on 44 RT3DE datasets acquired from 26 LVMD patients. The segmentation accuracy was assessed by surface positioning error and volume overlap measured for the whole LV as well as 16 standard LV regions. The segmentation produced very good results that were not achievable using ASM or graph search alone.

  3. Evaluation of accuracy about 2D vs 3D real-time position management system based on couch rotation when non-coplanar respiratory gated radiation therapy

    International Nuclear Information System (INIS)

    Kwon, Kyung Tae; Kim, Jung Soo; Sim, Hyun Sun; Min, Jung Whan; Son, Soon Yong; Han, Dong Kyoon

    2016-01-01

    In non-coplanar respiratory-gated radiation therapy, couch rotation changes the distance between the infrared camera and the marker block, which affects the marker motion recognized by the Real-time Position Management (RPM) system. The purpose of this paper is to evaluate the accuracy of motion reflection (baseline changes) for a 2D gating configuration (two-dot marker block) and a 3D gating configuration (six-dot marker block). The motion was measured by varying the couch angle in the clockwise and counterclockwise directions by 10° in the 2D gating configuration. In the 3D gating configuration, the couch angle was changed by 10° in the clockwise direction and compared with the baseline at the reference 0°. The reference amplitude was 1.173 to 1.165, the amplitude at a couch angle of 20° was 1.132, and at a couch angle of 1.0° it was 1.083. At 350° counterclockwise, the reference amplitude was 1.168 to 1.157, the amplitude at a couch angle of 340° was 1.124, and at 330° it was 1.079. In this study, a phantom is used to quantitatively evaluate the amplitude values according to couch angle changes.

  4. Evaluation of accuracy about 2D vs 3D real-time position management system based on couch rotation when non-coplanar respiratory gated radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Kyung Tae; Kim, Jung Soo [Dongnam Health University, Suwon (Korea, Republic of); Sim, Hyun Sun [College of Health Sciences, Korea University, Seoul (Korea, Republic of); Min, Jung Whan [Shingu University College, Sungnam (Korea, Republic of); Son, Soon Yong [Wonkwang Health Science University, Iksan (Korea, Republic of); Han, Dong Kyoon [College of Health Sciences, EulJi University, Daejeon (Korea, Republic of)

    2016-12-15

    In non-coplanar respiratory-gated radiation therapy, couch rotation changes the distance between the infrared camera and the marker block, which affects the marker motion recognized by the Real-time Position Management (RPM) system. The purpose of this paper is to evaluate the accuracy of motion reflection (baseline changes) for a 2D gating configuration (two-dot marker block) and a 3D gating configuration (six-dot marker block). The motion was measured by varying the couch angle in the clockwise and counterclockwise directions by 10° in the 2D gating configuration. In the 3D gating configuration, the couch angle was changed by 10° in the clockwise direction and compared with the baseline at the reference 0°. The reference amplitude was 1.173 to 1.165, the amplitude at a couch angle of 20° was 1.132, and at a couch angle of 1.0° it was 1.083. At 350° counterclockwise, the reference amplitude was 1.168 to 1.157, the amplitude at a couch angle of 340° was 1.124, and at 330° it was 1.079. In this study, a phantom is used to quantitatively evaluate the amplitude values according to couch angle changes.

  5. NASA's "Eyes On The Solar System:" A Real-time, 3D-Interactive Tool to Teach the Wonder of Planetary Science

    Science.gov (United States)

    Hussey, K.

    2014-12-01

    NASA's Jet Propulsion Laboratory is using video game technology to immerse students, the general public and mission personnel in our solar system and beyond. "Eyes on the Solar System," a cross-platform, real-time, 3D-interactive application that can run on-line or as a stand-alone "video game," is of particular interest to educators looking for inviting tools that capture students' interest in a format they like and understand (eyes.nasa.gov). It gives users an extraordinary view of our solar system by virtually transporting them across space and time to make first-person observations of spacecraft, planetary bodies and NASA/ESA missions in action. Key scientific results illustrated with video presentations, supporting imagery and web links are embedded contextually into the solar system. Educators who want an interactive, game-based approach to engage students in learning Planetary Science will see how "Eyes" can be effectively used to teach its principles to grades 3 through 14. The presentation will include a detailed demonstration of the software along with a description/demonstration of how this technology is being adapted for education. There will also be a preview of coming attractions. This work is being conducted by the Visualization Technology Applications and Development Group at NASA's Jet Propulsion Laboratory, the same team responsible for "Eyes on the Earth 3D" and "Eyes on Exoplanets," which can be viewed at eyes.nasa.gov/earth and eyes.nasa.gov/exoplanets.

  6. Real-time high resolution 3D imaging of the lyme disease spirochete adhering to and escaping from the vasculature of a living host.

    Directory of Open Access Journals (Sweden)

    Tara J Moriarty

    2008-06-01

    Full Text Available Pathogenic spirochetes are bacteria that cause a number of emerging and re-emerging diseases worldwide, including syphilis, leptospirosis, relapsing fever, and Lyme borreliosis. They navigate efficiently through dense extracellular matrix and cross the blood-brain barrier by unknown mechanisms. Due to their slender morphology, spirochetes are difficult to visualize by standard light microscopy, impeding studies of their behavior in situ. We engineered a fluorescent infectious strain of Borrelia burgdorferi, the Lyme disease pathogen, which expressed green fluorescent protein (GFP. Real-time 3D and 4D quantitative analysis of fluorescent spirochete dissemination from the microvasculature of living mice at high resolution revealed that dissemination was a multi-stage process that included transient tethering-type associations, short-term dragging interactions, and stationary adhesion. Stationary adhesions and extravasating spirochetes were most commonly observed at endothelial junctions, and translational motility of spirochetes appeared to play an integral role in transendothelial migration. To our knowledge, this is the first report of high resolution 3D and 4D visualization of dissemination of a bacterial pathogen in a living mammalian host, and provides the first direct insight into spirochete dissemination in vivo.

  7. "Eyes On The Solar System": A Real-time, 3D-Interactive Tool to Teach the Wonder of Planetary Science

    Science.gov (United States)

    Hussey, K. J.

    2011-10-01

    NASA's Jet Propulsion Laboratory is using videogame technology to immerse students, the general public and mission personnel in our solar system and beyond. "Eyes on the Solar System," a cross-platform, real-time, 3D-interactive application that runs inside a Web browser, was released worldwide late last year (solarsystem.nasa.gov/eyes). It gives users an extraordinary view of our solar system by virtually transporting them across space and time to make first-person observations of spacecraft and NASA/ESA missions in action. Key scientific results illustrated with video presentations and supporting imagery are imbedded contextually into the solar system. The presentation will include a detailed demonstration of the software along with a description/discussion of how this technology can be adapted for education and public outreach, as well as a preview of coming attractions. This work is being conducted by the Visualization Technology Applications and Development Group at NASA's Jet Propulsion Laboratory, the same team responsible for "Eyes on the Earth 3D," which can be viewed at climate.nasa.gov/Eyes.html.

  8. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology.

    Science.gov (United States)

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-02-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D Registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512×512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. Copyright © 2011. Published by Elsevier GmbH.
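
    For orientation, the sketch below shows the basic DRR operation that the paper accelerates: integrating CT intensities along rays for a given gantry angle. It is a deliberately naive CPU version with a parallel-beam geometry and a placeholder volume, not the wobbled-splatting or specialized raycasting GPU kernels described above; sizes and names are assumptions.

      import numpy as np
      from scipy.ndimage import map_coordinates

      ct = np.random.rand(128, 128, 128).astype(np.float32)   # placeholder CT volume

      def drr_parallel(volume, angle_deg, out_size=128, n_steps=128):
          """Integrate intensities along parallel rays rotated about the z axis."""
          a = np.deg2rad(angle_deg)
          u, v = np.meshgrid(np.linspace(-63, 63, out_size), np.linspace(-63, 63, out_size))
          drr = np.zeros((out_size, out_size), np.float32)
          for s in np.linspace(-63, 63, n_steps):              # march along the ray direction
              x = 63.5 + u * np.cos(a) - s * np.sin(a)
              y = 63.5 + u * np.sin(a) + s * np.cos(a)
              z = 63.5 + v
              coords = np.vstack([x.ravel(), y.ravel(), z.ravel()])
              sample = map_coordinates(volume, coords, order=1, mode='constant')
              drr += sample.reshape(out_size, out_size).astype(np.float32)
          return drr

      image = drr_parallel(ct, angle_deg=30.0)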

  9. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    Energy Technology Data Exchange (ETDEWEB)

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph [Medical University of Vienna (Austria). Center of Medical Physics and Biomedical Engineering] [and others

    2012-07-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D Registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference X-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 x 512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. (orig.)

  10. 3D wide field-of-view Gabor-domain optical coherence microscopy advancing real-time in-vivo imaging and metrology

    Science.gov (United States)

    Canavesi, Cristina; Cogliati, Andrea; Hayes, Adam; Tankam, Patrice; Santhanam, Anand; Rolland, Jannick P.

    2017-02-01

    Real-time volumetric high-definition wide-field-of-view in-vivo cellular imaging requires micron-scale resolution in 3D. Compactness of the handheld device and distortion-free images with cellular resolution are also critically required for onsite use in clinical applications. By integrating a custom liquid lens-based microscope and a dual-axis MEMS scanner in a compact handheld probe, Gabor-domain optical coherence microscopy (GD-OCM) breaks the lateral resolution limit of optical coherence tomography through depth, overcoming the tradeoff between numerical aperture and depth of focus, enabling advances in biotechnology. Furthermore, distortion-free imaging with no post-processing is achieved with a compact, lightweight handheld MEMS scanner that obtained a 12-fold reduction in volume and 17-fold reduction in weight over a previous dual-mirror galvanometer-based scanner. Approaching the holy grail of medical imaging - noninvasive real-time imaging with histologic resolution - GD-OCM demonstrates invariant resolution of 2 μm throughout a volume of 1 × 1 × 0.6 mm³, acquired and visualized in less than 2 minutes with parallel processing on graphics processing units. Results on the metrology of manufactured materials and imaging of human tissue with GD-OCM are presented.

  11. Synchronized 2D/3D optical mapping for interactive exploration and real-time visualization of multi-function neurological images.

    Science.gov (United States)

    Zhang, Qi; Alexander, Murray; Ryner, Lawrence

    2013-01-01

    Efficient software with the ability to display multiple neurological image datasets simultaneously with full real-time interactivity is critical for brain disease diagnosis and image-guided planning. In this paper, we describe the creation and function of a new comprehensive software platform that integrates novel algorithms and functions for multiple medical image visualization, processing, and manipulation. We implement an opacity-adjustment algorithm to build 2D lookup tables for multiple slice image display and fusion, which achieves a better visual result than those of using VTK-based methods. We also develop a new real-time 2D and 3D data synchronization scheme for multi-function MR volume and slice image optical mapping and rendering simultaneously through using the same adjustment operation. All these methodologies are integrated into our software framework to provide users with an efficient tool for flexibly, intuitively, and rapidly exploring and analyzing the functional and anatomical MR neurological data. Finally, we validate our new techniques and software platform with visual analysis and task-specific user studies. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. An Embedded Real-Time Red Peach Detection System Based on an OV7670 Camera, ARM Cortex-M4 Processor and 3D Look-Up Tables

    Directory of Open Access Journals (Sweden)

    Marcel Tresanchez

    2012-10-01

    Full Text Available This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
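
    The 3D look-up-table classification described above reduces per-pixel color segmentation to a single table lookup. The following sketch builds a small quantized RGB LUT from a simple red-dominance rule, which stands in for the paper's linear color models and fruit histograms; the bit depth, the rule and the input frame are illustrative assumptions.

      import numpy as np

      BITS = 5                              # 32 levels per channel -> 32^3-entry LUT, assumed
      levels = 1 << BITS

      r, g, b = np.meshgrid(np.arange(levels), np.arange(levels), np.arange(levels),
                            indexing='ij')
      lut = (r > g + 4) & (r > b + 4) & (r > levels // 3)   # assumed "red peach" color rule

      def detect(image_rgb):
          """image_rgb: uint8 HxWx3 array; returns a boolean mask of candidate fruit pixels."""
          idx = image_rgb >> (8 - BITS)      # quantize each channel to the LUT resolution
          return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

      frame = (np.random.rand(120, 160, 3) * 255).astype(np.uint8)   # placeholder camera frame
      mask = detect(frame)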

  13. An embedded real-time red peach detection system based on an OV7670 camera, ARM cortex-M4 processor and 3D look-up tables.

    Science.gov (United States)

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-10-22

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.

  14. Real-time intensity based 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy

    Science.gov (United States)

    Furtado, H.; Steiner, E.; Stock, M.; Georg, D.; Birkfellner, W.

    2014-03-01

    Intra-fractional respiratory motion during radiotherapy is one of the main sources of uncertainty in dose application, creating the need to extend the margins of the planning target volume (PTV). Real-time tumor motion tracking by 2D/3D registration using on-board kilo-voltage (kV) imaging can lead to a reduction of the PTV. One limitation of this technique when using a single projection image is the inability to resolve motion along the imaging beam axis. We present a retrospective patient study investigating the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. We used data from eighteen patients suffering from non-small cell lung cancer undergoing regular treatment at our center. For each patient we acquired a planning CT and sequences of kV and MV images during treatment. Our evaluation compared the accuracy of motion tracking in 6 degrees of freedom (DOF) using either the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. We use graphics processing unit rendering for real-time performance. Motion along the cranial-caudal direction could be extracted accurately using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 3.3 mm to 1.8 mm and the motion along AP was successfully extracted. The mean registration time was 190 ± 35 ms. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in 6 DOF. Therefore, this approach is suitable for accurate, real-time tumor motion tracking with a conventional LINAC.

  15. Precise and real-time measurement of 3D tumor motion in lung due to breathing and heartbeat, measured during radiotherapy

    International Nuclear Information System (INIS)

    Seppenwoolde, Yvette; Shirato, Hiroki; Kitamura, Kei; Shimizu, Shinichi; Herk, Marcel van; Lebesque, Joos V.; Miyasaka, Kazuo

    2002-01-01

    Purpose: In this work, three-dimensional (3D) motion of lung tumors during radiotherapy in real time was investigated. Understanding the behavior of tumor motion in lung tissue to model tumor movement is necessary for accurate (gated or breath-hold) radiotherapy or CT scanning. Methods: Twenty patients were included in this study. Before treatment, a 2-mm gold marker was implanted in or near the tumor. A real-time tumor tracking system using two fluoroscopy image processor units was installed in the treatment room. The 3D position of the implanted gold marker was determined by using real-time pattern recognition and a calibrated projection geometry. The linear accelerator was triggered to irradiate the tumor only when the gold marker was located within a certain volume. The system provided the coordinates of the gold marker during beam-on and beam-off time in all directions simultaneously, at a sample rate of 30 images per second. The recorded tumor motion was analyzed in terms of the amplitude and curvature of the tumor motion in three directions, the differences in breathing level during treatment, hysteresis (the difference between the inhalation and exhalation trajectory of the tumor), and the amplitude of tumor motion induced by cardiac motion. Results: The average amplitude of the tumor motion was greatest (12±2 mm [SD]) in the cranial-caudal direction for tumors situated in the lower lobes and not attached to rigid structures such as the chest wall or vertebrae. For the lateral and anterior-posterior directions, tumor motion was small both for upper- and lower-lobe tumors (2±1 mm). The time-averaged tumor position was closer to the exhale position, because the tumor spent more time in the exhalation than in the inhalation phase. The tumor motion was modeled as a sinusoidal movement with varying asymmetry. The tumor position in the exhale phase was more stable than the tumor position in the inhale phase during individual treatment fields. However, in many
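
    The "sinusoidal movement with varying asymmetry" mentioned above is commonly written as z(t) = z0 - b·cos^(2n)(πt/τ - φ), where b is the amplitude, τ the breathing period and n ≥ 1 controls how much longer the tumor dwells near exhale. The snippet below evaluates this model with illustrative parameter values, not the per-patient fits of the study.

      import numpy as np

      def tumor_position(t, z0=0.0, b=12.0, tau=4.0, n=2, phi=0.0):
          """Cranial-caudal tumor position (mm) at time t (s); parameter values are assumed."""
          return z0 - b * np.cos(np.pi * t / tau - phi) ** (2 * n)

      t = np.linspace(0.0, 8.0, 241)        # two breathing cycles sampled at ~30 Hz
      trajectory = tumor_position(t)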

  16. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV-kV imaging

    International Nuclear Information System (INIS)

    Liu, W; Wiersma, R D; Mao, W; Luxton, G; Xing, L

    2008-01-01

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from ∼0.5 mm for the normal adult breathing pattern to ∼1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general, highly accurate real-time

  17. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV-kV imaging.

    Science.gov (United States)

    Liu, W; Wiersma, R D; Mao, W; Luxton, G; Xing, L

    2008-12-21

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from approximately 0.5 mm for the normal adult breathing pattern to approximately 1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general
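
    Once both imagers are calibrated, the marker's 3D position follows from its detections in the MV and kV images by triangulation. The sketch below uses a standard linear (DLT) triangulation with made-up projection matrices and normalized detections; it illustrates the geometry only and is not the authors' calibration procedure.

      import numpy as np

      P_mv = np.hstack([np.eye(3), np.zeros((3, 1))])                      # assumed MV camera
      R = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])   # kV imager at 90 deg
      P_kv = np.hstack([R, np.array([[0.0], [0.0], [1000.0]])])            # assumed kV camera

      def triangulate(p1, p2, P1, P2):
          """Return the 3D point minimizing the algebraic reprojection error for two views."""
          A = np.vstack([p1[0] * P1[2] - P1[0],
                         p1[1] * P1[2] - P1[1],
                         p2[0] * P2[2] - P2[0],
                         p2[1] * P2[2] - P2[1]])
          _, _, vt = np.linalg.svd(A)
          X = vt[-1]
          return X[:3] / X[3]

      marker_mv = np.array([0.010, -0.005])   # normalized image coordinates, assumed
      marker_kv = np.array([0.020, 0.001])
      position_3d = triangulate(marker_mv, marker_kv, P_mv, P_kv)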

  18. Real-time 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy.

    Science.gov (United States)

    Furtado, Hugo; Steiner, Elisabeth; Stock, Markus; Georg, Dietmar; Birkfellner, Wolfgang

    2013-10-01

    Intra-fractional respiratory motion during radiotherapy leads to a larger planning target volume (PTV). Real-time tumor motion tracking by two-dimensional (2D)/3D registration using on-board kilo-voltage (kV) imaging can allow for a reduction of the PTV though motion along the imaging beam axis cannot be resolved using only one projection image. We present a retrospective patient study investigating the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. Material and methods. We used data from 10 patients suffering from non-small cell lung cancer (NSCLC) undergoing stereotactic body radiation therapy (SBRT) lung treatment. For each patient we acquired a planning computed tomography (CT) and sequences of kV and MV images during treatment. We compared the accuracy of motion tracking in six degrees-of-freedom (DOF) using the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. Results. Motion along cranial-caudal direction could accurately be extracted when using only the kV sequence but in AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 2.9 mm to 1.5 mm and the motion along AP was successfully extracted. Mean registration time was 188 ms. Conclusion. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in six DOF and is suitable for real-time tumor motion tracking with a conventional LINAC.

  19. 4D ANIMATION RECONSTRUCTION FROM MULTI-CAMERA COORDINATES TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    J. P. Jhan

    2016-06-01

    Full Text Available Reservoir dredging is important for extending the life of a reservoir. The most effective and cost-efficient approach is to construct a tunnel to desilt the bottom sediment. The conventional technique is to build a cofferdam to hold back the water, construct the tunnel intake inside it, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will instead install an Elephant-trunk Steel Pipe (ETSP) in the water to connect the desilting tunnel without building a cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e. launching, dragging, water injection, and sinking. To increase construction safety, photogrammetric techniques are adopted to record images during the simulation, compute the transformation parameters for dynamic analysis, and reconstruct 4D animations. In this study, several Australis© coded targets are fixed on the surface of the ETSP for automatic recognition and measurement. The camera orientations are computed by space resection using the measured 3D coordinates of the coded targets. Two approaches for computing the motion parameters are proposed: a 3D conformal transformation based on the camera coordinates, and relative orientation computation from the orientation of a single camera. Experimental results show that the 3D conformal transformation achieves sub-mm simulation accuracy, while the relative orientation computation offers flexibility for dynamic motion analysis and is easier and more efficient.

  20. 4-D ICE: A 2-D Array Transducer With Integrated ASIC in a 10-Fr Catheter for Real-Time 3-D Intracardiac Echocardiography.

    Science.gov (United States)

    Wildes, Douglas; Lee, Warren; Haider, Bruno; Cogan, Scott; Sundaresan, Krishnakumar; Mills, David M; Yetter, Christopher; Hart, Patrick H; Haun, Christopher R; Concepcion, Mikael; Kirkhorn, Johan; Bitoun, Marc

    2016-12-01

    We developed a 2.5 × 6.6 mm² 2-D array transducer with integrated transmit/receive application-specific integrated circuit (ASIC) for real-time 3-D intracardiac echocardiography (4-D ICE) applications. The ASIC and transducer design were optimized so that the high-voltage transmit, low-voltage time-gain control and preamp, subaperture beamformer, and digital control circuits for each transducer element all fit within the 0.019-mm² area of the element. The transducer assembly was deployed in a 10-Fr (3.3-mm diameter) catheter, integrated with a GE Vivid E9 ultrasound imaging system, and evaluated in three preclinical studies. The 2-D image quality and imaging modes were comparable to commercial 2-D ICE catheters. The 4-D field of view was at least 90° × 60° × 8 cm and could be imaged at 30 vol/s, sufficient to visualize cardiac anatomy and other diagnostic and therapy catheters. 4-D ICE should significantly reduce X-ray fluoroscopy use and dose during electrophysiology ablation procedures. 4-D ICE may be able to replace transesophageal echocardiography (TEE), and the associated risks and costs of general anesthesia, for guidance of some structural heart procedures.

  1. SUPRA: open-source software-defined ultrasound processing for real-time applications : A 2D and 3D pipeline from beamforming to B-mode.

    Science.gov (United States)

    Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph

    2018-06-01

    Research in ultrasound imaging is limited in reproducibility by two factors: First, many existing ultrasound pipelines are protected by intellectual property, making code exchange difficult. Second, most pipelines are implemented in special-purpose hardware, which limits the flexibility of the processing steps on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and with regard to its run time. The pipeline shows image quality comparable to a clinical system and, as confirmed by point spread function measurements, comparable resolution. Including all processing stages of a typical ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.

  2. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network

    Science.gov (United States)

    Bukhari, W.; Hong, S.-M.

    2016-03-01

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient’s breathing cycle. The algorithm, named EKF-GPRN+ , first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+ . The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function, albeit

  3. Real-time prediction and gating of respiratory motion in 3D space using extended Kalman filters and Gaussian process regression network

    International Nuclear Information System (INIS)

    Bukhari, W; Hong, S-M

    2016-01-01

    The prediction as well as the gating of respiratory motion have received much attention over the last two decades for reducing the targeting error of the radiation treatment beam due to respiratory motion. In this article, we present a real-time algorithm for predicting respiratory motion in 3D space and realizing a gating function without pre-specifying a particular phase of the patient’s breathing cycle. The algorithm, named EKF-GPRN+, first employs an extended Kalman filter (EKF) independently along each coordinate to predict the respiratory motion and then uses a Gaussian process regression network (GPRN) to correct the prediction error of the EKF in 3D space. The GPRN is a nonparametric Bayesian algorithm for modeling input-dependent correlations between the output variables in multi-output regression. Inference in GPRN is intractable and we employ variational inference with mean field approximation to compute an approximate predictive mean and predictive covariance matrix. The approximate predictive mean is used to correct the prediction error of the EKF. The trace of the approximate predictive covariance matrix is utilized to capture the uncertainty in EKF-GPRN+ prediction error and systematically identify breathing points with a higher probability of large prediction error in advance. This identification enables us to pause the treatment beam over such instances. EKF-GPRN+ implements a gating function by using simple calculations based on the trace of the predictive covariance matrix. Extensive numerical experiments are performed based on a large database of 304 respiratory motion traces to evaluate EKF-GPRN+. The experimental results show that the EKF-GPRN+ algorithm reduces the patient-wise prediction error to 38%, 40% and 40% in root-mean-square, compared to no prediction, at lookahead lengths of 192 ms, 384 ms and 576 ms, respectively. The EKF-GPRN+ algorithm can further reduce the prediction error by employing the gating function
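
    As a simplified stand-in for the per-coordinate prediction stage described above, the sketch below runs a constant-velocity Kalman filter that predicts a breathing trace one lookahead step ahead; the EKF state model and the GPRN correction of the paper are not reproduced, and the noise levels and synthetic trace are assumptions.

      import numpy as np

      dt = 0.192                                   # 192 ms lookahead, one of the tested values
      F = np.array([[1.0, dt], [0.0, 1.0]])        # position/velocity transition model
      Hm = np.array([[1.0, 0.0]])                  # only position is observed
      Q = np.diag([1e-4, 1e-3])                    # assumed process noise
      Rm = np.array([[1e-2]])                      # assumed measurement noise

      x = np.zeros(2)
      P = np.eye(2)
      trace = 10.0 * np.sin(2 * np.pi * np.arange(0, 30, dt) / 4.0)   # synthetic breathing (mm)

      predictions = []
      for z in trace:
          # predict one step ahead
          x = F @ x
          P = F @ P @ F.T + Q
          predictions.append(x[0])                 # one-step-ahead position estimate
          # update with the new measurement
          y = z - Hm @ x
          S = Hm @ P @ Hm.T + Rm
          K = P @ Hm.T @ np.linalg.inv(S)
          x = x + (K @ y)
          P = (np.eye(2) - K @ Hm) @ P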

  4. User-assisted visual search and tracking across distributed multi-camera networks

    Science.gov (United States)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.

  5. Euratom multi-camera optical surveillance system (EMOSS) - a digital solution

    International Nuclear Information System (INIS)

    Otto, P.; Wagner, H.G.; Taillade, B.; Pryck, C. de.

    1991-01-01

    In 1989 the Euratom Safeguards Directorate of the Commission of the European Communities drew up functional and draft technical specifications for a new fully digital multi-camera optical surveillance system. HYMATOM of Castries designed and built a prototype unit for laboratory and field tests. This paper reports on the system design and the first test results.

  6. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Yu Lu

    2016-04-01

    Full Text Available A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor in the process of radiometric response calibration to eliminate the influence of the focusing effect of uniform light from an integrating sphere. The linearity range of the radiometric response, the non-linearity response characteristics, the sensitivity, and the dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used when blending images, so that the panoramas reflect the true scene luminance rather than relying on smoothing alone to make the stitched result look realistic. The dynamic range limitation can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded by 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second.
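
    A flat-field style correction of this kind can be sketched as follows; the dark frame, vignetting gain map and per-channel gains below are placeholder calibration data, not the values measured in the paper.

    ```python
    import numpy as np

    # Illustrative flat-field style correction: undo dark current, vignetting and
    # per-channel response differences before blending, so stitched panoramas
    # reflect scene luminance rather than raw sensor output.
    def correct_image(raw, dark, vignette_gain, channel_gain):
        """raw: HxWx3 image; dark: dark-current frame; vignette_gain: HxW map
        measured from a uniform (integrating-sphere) target; channel_gain: 3-vector
        equalising the R, G, B responses. All inputs are assumed calibration data."""
        img = raw.astype(np.float64) - dark          # remove dark current
        img /= vignette_gain[..., None]              # flatten the vignetting pattern
        img *= channel_gain[None, None, :]           # equalise colour channels
        return np.clip(img, 0.0, None)

    # Dummy data standing in for one of the seven sensor modules
    h, w = 480, 640
    raw = np.random.randint(0, 255, (h, w, 3)).astype(np.float64)
    dark = np.full((h, w, 3), 2.0)
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (h * h + w * w)
    vignette_gain = 1.0 - 0.4 * r2                   # brighter centre, darker corners
    channel_gain = np.array([1.05, 1.00, 1.10])      # assumed per-channel factors
    corrected = correct_image(raw, dark, vignette_gain, channel_gain)
    ```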

  7. Single-frame 3D human pose recovery from multiple views

    NARCIS (Netherlands)

    Hofmann, M.; Gavrila, D.M.

    2009-01-01

    We present a system for the estimation of unconstrained 3D human upper body pose from multi-camera single-frame views. Pose recovery starts with a shape detection stage where candidate poses are generated based on hierarchical exemplar matching in the individual camera views. The hierarchy used in

  8. Real-Time Extraction of Course Track Networks in Confined Waters as Decision Support for Vessel Navigation in 3-D Nautical Chart

    National Research Council Canada - National Science Library

    Porathe, Thomas

    2006-01-01

    In an information design project at Malardalen University in Sweden a computer based 3-D nautical chart system is designed based on human factors principles of more intuitive navigation in high speeds...

  9. Accelerating volumetric cine MRI (VC-MRI) using undersampling for real-time 3D target localization/tracking in radiation therapy: a feasibility study

    Science.gov (United States)

    Harris, Wendy; Yin, Fang-Fang; Wang, Chunhao; Zhang, You; Cai, Jing; Ren, Lei

    2018-01-01

    Purpose. To accelerate volumetric cine MRI (VC-MRI) using undersampled 2D-cine MRI to provide real-time 3D guidance for gating/target tracking in radiotherapy. Methods. 4D-MRI is acquired during patient simulation. One phase of the prior 4D-MRI is selected as the prior images, designated as MRIprior. The on-board VC-MRI at each time-step is considered a deformation of the MRIprior. The deformation field map is represented as a linear combination of the motion components extracted by principal component analysis from the prior 4D-MRI. The weighting coefficients of the motion components are solved by matching the corresponding 2D-slice of the VC-MRI with the on-board undersampled 2D-cine MRI acquired. Undersampled Cartesian and radial k-space acquisition strategies were investigated. The effects of k-space sampling percentage (SP) and distribution, tumor sizes and noise on the VC-MRI estimation were studied. The VC-MRI estimation was evaluated using XCAT simulation of lung cancer patients and data from liver cancer patients. Volume percent difference (VPD) and Center of Mass Shift (COMS) of the tumor volumes and tumor tracking errors were calculated. Results. For XCAT, VPD/COMS were 11.93 ± 2.37%/0.90 ± 0.27 mm and 11.53 ± 1.47%/0.85 ± 0.20 mm among all scenarios with Cartesian sampling (SP = 10%) and radial sampling (21 spokes, SP = 5.2%), respectively. When tumor size decreased, higher sampling rate achieved more accurate VC-MRI than lower sampling rate. VC-MRI was robust against noise levels up to SNR = 20. For patient data, the tumor tracking errors in superior-inferior, anterior-posterior and lateral (LAT) directions were 0.46 ± 0.20 mm, 0.56 ± 0.17 mm and 0.23 ± 0.16 mm, respectively, for Cartesian-based sampling with SP = 20% and 0.60 ± 0.19 mm, 0.56 ± 0.22 mm and 0.42 ± 0.15 mm, respectively, for
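
    The weighting-coefficient step can be illustrated with a toy least-squares version: the on-board volume is modelled as the prior volume deformed by a linear combination of PCA motion components, and the weights are chosen so that the corresponding 2D slice matches the acquired cine slice. The actual method matches against undersampled k-space data; all arrays below are synthetic placeholders.

    ```python
    import numpy as np

    # Toy version of solving the PCA motion-component weights from one 2D slice.
    rng = np.random.default_rng(0)

    n_vox = 64 * 64 * 32          # voxels in the (toy) volume
    n_comp = 3                    # principal motion components from the prior 4D-MRI
    mean_dvf = rng.normal(size=n_vox)
    components = rng.normal(size=(n_comp, n_vox))

    # Indices of the voxels that fall on the acquired 2D cine slice
    slice_idx = rng.choice(n_vox, size=64 * 64, replace=False)

    # "Measured" deformation on that slice, generated here from known weights
    true_w = np.array([1.2, -0.5, 0.3])
    measured = mean_dvf[slice_idx] + true_w @ components[:, slice_idx]

    # Solve for the weights by linear least squares on the slice
    A = components[:, slice_idx].T                     # (n_slice_vox, n_comp)
    b = measured - mean_dvf[slice_idx]
    w, *_ = np.linalg.lstsq(A, b, rcond=None)

    # Apply the same weights to deform the whole prior volume
    full_dvf = mean_dvf + w @ components
    ```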

  10. OBLIQUE MULTI-CAMERA SYSTEMS – ORIENTATION AND DENSE MATCHING ISSUES

    Directory of Open Access Journals (Sweden)

    E. Rupnik

    2014-03-01

    Full Text Available The use of oblique imagery has become a standard for many civil and mapping applications, thanks to the development of airborne digital multi-camera systems, as proposed by many companies (Blomoblique, IGI, Leica, Midas, Pictometry, Vexcel/Microsoft, VisionMap, etc.). The indisputable virtue of oblique photography lies in its simplicity of interpretation and understanding for inexperienced users, allowing oblique images to be used in very different applications, such as building detection and reconstruction, building structural damage classification, road land updating and administration services, etc. The paper gives an overview of the current commercial oblique systems and presents a workflow for the automated orientation and dense matching of large image blocks. Perspectives, potentialities, pitfalls and suggestions for achieving satisfactory results are given. Tests performed on two datasets acquired with two multi-camera systems over urban areas are also reported.

  11. Multi Camera Multi Object Tracking using Block Search over Epipolar Geometry

    Directory of Open Access Journals (Sweden)

    Saman Sargolzaei

    2000-01-01

    Full Text Available We present a strategy for multi-object tracking in a multi-camera environment for surveillance and security applications, where tracking a multitude of subjects is of utmost importance in a crowded scene. Our technique assumes a partially overlapped multi-camera setup where cameras share a common view from different angles to assess the positions and activities of subjects under suspicion. To establish spatial correspondence between camera views we employ an epipolar geometry technique. We propose an overlapped block search method to find the pattern of interest (the target) in new frames. A color pattern update scheme has been considered to further optimize the efficiency of the object tracking when the object pattern changes due to object motion in the fields of view of the cameras. An evaluation of our approach is presented with results on the PETS2007 dataset.
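
    The epipolar constraint that restricts the block search can be sketched with OpenCV. The two-view geometry below is synthetic (intrinsics, poses and points are made up), and only the line-distance test that would precede the colour/pattern block-matching score is shown.

    ```python
    import cv2
    import numpy as np

    # Estimate the fundamental matrix from matched points, then restrict the block
    # search for a target seen in camera A to the neighbourhood of its epipolar
    # line in camera B. All correspondences here are synthetic.
    rng = np.random.default_rng(1)
    K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])     # shared intrinsics
    X = np.hstack([rng.uniform(-1, 1, (50, 2)), rng.uniform(4, 8, (50, 1))])  # 3D points

    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # camera A at origin
    Rb = cv2.Rodrigues(np.array([0.0, 0.3, 0.0]))[0]                # camera B rotated
    P2 = K @ np.hstack([Rb, np.array([[-1.0], [0.0], [0.0]])])      # ... and translated

    def project(P, X):
        x = (P @ np.hstack([X, np.ones((len(X), 1))]).T).T
        return (x[:, :2] / x[:, 2:]).astype(np.float32)

    pts_a, pts_b = project(P1, X), project(P2, X)
    F, _ = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_8POINT)

    # Epipolar line in camera B for a target detected in camera A
    target_a = pts_a[:1].reshape(-1, 1, 2)
    a, b, c = cv2.computeCorrespondEpilines(target_a, 1, F).reshape(3)

    def near_epiline(pt, tol=10.0):
        """Only candidate blocks within `tol` pixels of the line are scored."""
        return abs(a * pt[0] + b * pt[1] + c) / np.hypot(a, b) < tol

    print(near_epiline(tuple(pts_b[0])))   # the true correspondence lies on the line
    ```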

  12. Performance analysis for automated gait extraction and recognition in multi-camera surveillance

    OpenAIRE

    Goffredo, Michela; Bouchrika, Imed; Carter, John N.; Nixon, Mark S.

    2010-01-01

    Many studies have confirmed that gait analysis can be used as a new biometrics. In this research, gait analysis is deployed for people identification in multi-camera surveillance scenarios. We present a new method for viewpoint independent markerless gait analysis that does not require camera calibration and works with a wide range of walking directions. These properties make the proposed method particularly suitable for gait identification in real surveillance scenarios where people and thei...

  13. Video Surveillance using a Multi-Camera Tracking and Fusion System

    OpenAIRE

    Zhang , Zhong; Scanlon , Andrew; Yin , Weihong; Yu , Li; Venetianer , Péter L.

    2008-01-01

    International audience; Usage of intelligent video surveillance (IVS) systems is spreading rapidly. These systems are being utilized in a wide range of applications. In most cases, even in multi-camera installations, the video is processed independently in each feed. This paper describes a system that fuses tracking information from multiple cameras, thus vastly expanding its capabilities. The fusion relies on all cameras being calibrated to a site map, while the individual sensors remain lar...

  14. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Mariana Rampinelli

    2014-08-01

    Full Text Available This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  15. An intelligent space for mobile robot localization using a multi-camera system.

    Science.gov (United States)

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-08-15

    This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  16. Advanced 3-D Ultrasound Imaging

    DEFF Research Database (Denmark)

    Rasmussen, Morten Fischer

    The main purpose of the PhD project was to develop methods that increase the 3-D ultrasound imaging quality available for the medical personnel in the clinic. Acquiring a 3-D volume gives the medical doctor the freedom to investigate the measured anatomy in any slice desirable after the scan has...... been completed. This allows for precise measurements of organs dimensions and makes the scan more operator independent. Real-time 3-D ultrasound imaging is still not as widespread in use in the clinics as 2-D imaging. A limiting factor has traditionally been the low image quality achievable using...... a channel limited 2-D transducer array and the conventional 3-D beamforming technique, Parallel Beamforming. The first part of the scientific contributions demonstrate that 3-D synthetic aperture imaging achieves a better image quality than the Parallel Beamforming technique. Data were obtained using both...

  17. A new method for real-time co-registration of 3D coronary angiography and intravascular ultrasound or optical coherence tomography.

    Science.gov (United States)

    Carlier, Stéphane; Didday, Rich; Slots, Tristan; Kayaert, Peter; Sonck, Jeroen; El-Mourad, Mike; Preumont, Nicolas; Schoors, Dany; Van Camp, Guy

    2014-06-01

    We present a new clinically practical method for online co-registration of 3D quantitative coronary angiography (QCA) and intravascular ultrasound (IVUS) or optical coherence tomography (OCT). The workflow is based on two modified commercially available software packages. Reconstruction steps are explained and compared to previously available methods. The feasibility for different clinical scenarios is illustrated. The co-registration appears accurate, robust and induced a minimal delay on the normal cath lab activities. This new method is based on the 3D angiographic reconstruction of the catheter path and does not require operator's identification of landmarks to establish the image synchronization. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. A new method for real-time co-registration of 3D coronary angiography and intravascular ultrasound or optical coherence tomography

    International Nuclear Information System (INIS)

    Carlier, Stéphane; Didday, Rich; Slots, Tristan; Kayaert, Peter; Sonck, Jeroen; El-Mourad, Mike; Preumont, Nicolas; Schoors, Dany; Van Camp, Guy

    2014-01-01

    We present a new clinically practical method for online co-registration of 3D quantitative coronary angiography (QCA) and intravascular ultrasound (IVUS) or optical coherence tomography (OCT). The workflow is based on two modified commercially available software packages. Reconstruction steps are explained and compared to previously available methods. The feasibility for different clinical scenarios is illustrated. The co-registration appears accurate, robust and induced a minimal delay on the normal cath lab activities. This new method is based on the 3D angiographic reconstruction of the catheter path and does not require operator’s identification of landmarks to establish the image synchronization

  19. A new method for real-time co-registration of 3D coronary angiography and intravascular ultrasound or optical coherence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Carlier, Stéphane, E-mail: sgcarlier@hotmail.com [Department of Cardiology, Universitair Ziekenhuis - UZ Brussel, Brussels (Belgium); Department of Cardiology, Erasme University Hospital, Université Libre de Bruxelles (ULB), Brussels (Belgium); Didday, Rich [INDEC Medical Systems Inc., Santa Clara, CA (United States); Slots, Tristan [Pie Medical Imaging BV, Maastricht (Netherlands); Kayaert, Peter; Sonck, Jeroen [Department of Cardiology, Universitair Ziekenhuis - UZ Brussel, Brussels (Belgium); El-Mourad, Mike; Preumont, Nicolas [Department of Cardiology, Erasme University Hospital, Université Libre de Bruxelles (ULB), Brussels (Belgium); Schoors, Dany; Van Camp, Guy [Department of Cardiology, Universitair Ziekenhuis - UZ Brussel, Brussels (Belgium)

    2014-06-15

    We present a new clinically practical method for online co-registration of 3D quantitative coronary angiography (QCA) and intravascular ultrasound (IVUS) or optical coherence tomography (OCT). The workflow is based on two modified commercially available software packages. Reconstruction steps are explained and compared to previously available methods. The feasibility for different clinical scenarios is illustrated. The co-registration appears accurate, robust and induced a minimal delay on the normal cath lab activities. This new method is based on the 3D angiographic reconstruction of the catheter path and does not require operator’s identification of landmarks to establish the image synchronization.

  20. Feasibility of integrating a multi-camera optical tracking system in intra-operative electron radiation therapy scenarios

    International Nuclear Information System (INIS)

    García-Vázquez, V; Marinetto, E; Santos-Miranda, J A; Calvo, F A; Desco, M; Pascau, J

    2013-01-01

    Intra-operative electron radiation therapy (IOERT) combines surgery and ionizing radiation applied directly to an exposed unresected tumour mass or to a post-resection tumour bed. The radiation is collimated and conducted by a specific applicator docked to the linear accelerator. The dose distribution in tissues to be irradiated and in organs at risk can be planned through a pre-operative computed tomography (CT) study. However, surgical retraction of structures and resection of a tumour affecting normal tissues significantly modify the patient's geometry. Therefore, the treatment parameters (applicator dimension, pose (position and orientation), bevel angle, and beam energy) may require the original IOERT treatment plan to be modified depending on the actual surgical scenario. We propose the use of a multi-camera optical tracking system to reliably record the actual pose of the IOERT applicator in relation to the patient's anatomy in an environment prone to occlusion problems. This information can be integrated in the radio-surgical treatment planning system in order to generate a real-time accurate description of the IOERT scenario. We assessed the accuracy of the applicator pose by performing a phantom-based study that resembled three real clinical IOERT scenarios. The error obtained (2 mm) was below the acceptance threshold for external radiotherapy practice, thus encouraging future implementation of this approach in real clinical IOERT scenarios. (paper)

  1. Real-Time Motion Capture Toolbox (RTMocap): an open-source code for recording 3-D motion kinematics to study action-effect anticipations during motor and social interactions.

    Science.gov (United States)

    Lewkowicz, Daniel; Delevoye-Turrell, Yvonne

    2016-03-01

    We present here a toolbox for the real-time motion capture of biological movements that runs in the cross-platform MATLAB environment (The MathWorks, Inc., Natick, MA). It provides instantaneous processing of the 3-D movement coordinates of up to 20 markers at a single instant. Available functions include (1) the setting of reference positions, areas, and trajectories of interest; (2) recording of the 3-D coordinates for each marker over the trial duration; and (3) the detection of events to use as triggers for external reinforcers (e.g., lights, sounds, or odors). Through fast online communication between the hardware controller and RTMocap, automatic trial selection is possible by means of either a preset or an adaptive criterion. Rapid preprocessing of signals is also provided, which includes artifact rejection, filtering, spline interpolation, and averaging. A key example is detailed, and three typical variations are developed (1) to provide a clear understanding of the importance of real-time control for 3-D motion in cognitive sciences and (2) to present users with simple lines of code that can be used as starting points for customizing experiments using the simple MATLAB syntax. RTMocap is freely available (http://sites.google.com/site/RTMocap/) under the GNU public license for noncommercial use and open-source development, together with sample data and extensive documentation.

  2. Real-Time, Multiple, Pan/Tilt/Zoom, Computer Vision Tracking, and 3D Position Estimating System for Unmanned Aerial System Metrology

    Science.gov (United States)

    2013-10-18

    area of 3D point estimation of flapping-wing UASs. The benefits of designing and developing such a system is instrumental in researching various...series of successive states until a given name is reached such as: Object Animate Animal Mammal Dog Labrador Chocolate (Brown) Male Name...are many benefits to using SIFT in tracking. It detects features that are invariant to image scale and rotation, and are shown to provide robust

  3. Marker-referred movement measurement with grey-scale coordinate extraction for high-resolution real-time 3D at 100 Hz

    NARCIS (Netherlands)

    Furnée, E.H.; Jobbá, A.; Sabel, J.C.; Veenendaal, H.L.J. van; Martin, F.; Andriessen, D.C.W.G.

    1997-01-01

    A review of early history in photography highlights the origin of cinefilm as a scientific tool for image-based measurement of human and animal motion. The paper is concerned with scanned-area video sensors (CCD) and a computer interface for the real-time, high-resolution extraction of image

  4. 3D Surgical Simulation

    Science.gov (United States)

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases with benefits for the patient, with surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727, and DE018962 PMID:20816308

  5. A method for enabling real-time structural deformation in remote handling control system by utilizing offline simulation results and 3D model morphing

    International Nuclear Information System (INIS)

    Kiviranta, Sauli; Saarinen, Hannu; Maekinen, Harri; Krassi, Boris

    2011-01-01

    A full scale physical test facility, DTP2 (Divertor Test Platform 2), has been established in Finland for demonstrating and refining the Remote Handling (RH) equipment designs for ITER. The first prototype RH equipment at DTP2 is the Cassette Multifunctional Mover (CMM) equipped with the Second Cassette End Effector (SCEE), delivered to DTP2 in October 2008. The purpose is to prove that the CMM/SCEE prototype can be used successfully for the 2nd cassette RH operations. At the end of the F4E grant 'DTP2 test facility operation and upgrade preparation', the RH operations of the 2nd cassette were successfully demonstrated to the representatives of Fusion For Energy (F4E). Due to its design, the CMM/SCEE robot has relatively large mechanical flexibilities when it carries the nine-ton 2nd Cassette on the 3.6-m long lever. This leads to poor absolute accuracy and to a situation where the 3D model used in the control system does not reflect the actual deformed state of the CMM/SCEE robot. To improve the accuracy, a new method has been developed to handle these flexibilities within the control system's virtual environment. The effect of the load on the CMM/SCEE has been measured and minimized in a load compensation model, which is implemented in the control system software. The proposed method accounts for the structural deformations of the robot in the control system through 3D model morphing, utilizing finite element method (FEM) analysis for the morph targets. This resulted in a considerable improvement of the CMM/SCEE absolute accuracy and of the adequacy of the 3D model, which is crucially important in RH applications, where visual information about the controlled device in its surrounding environment is limited.
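
    The morphing idea, blending the nominal mesh towards FEM-derived deformed shapes according to the measured load, can be sketched as follows. The meshes, the 9000 kg reference load and the simple linear weighting are illustrative assumptions, not the implemented compensation model.

    ```python
    import numpy as np

    # Minimal sketch of load-dependent model morphing: the shape shown in the
    # control system's virtual environment is a blend between the nominal mesh and
    # an FEM-derived morph target, weighted by the measured payload.
    nominal = np.random.rand(5000, 3)                # vertices of the nominal model (stand-in)
    morph_full_load = nominal + np.random.rand(5000, 3) * 0.01   # FEM result at full load (stand-in)

    FULL_LOAD_KG = 9000.0                            # approximate second-cassette mass

    def morphed_mesh(measured_load_kg):
        """Linear interpolation between the nominal mesh and the full-load morph target."""
        w = np.clip(measured_load_kg / FULL_LOAD_KG, 0.0, 1.0)
        return (1.0 - w) * nominal + w * morph_full_load

    mesh_now = morphed_mesh(measured_load_kg=4500.0)   # e.g. cassette partially supported
    ```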

  6. An Indoor Scene Recognition-Based 3D Registration Mechanism for Real-Time AR-GIS Visualization in Mobile Applications

    Directory of Open Access Journals (Sweden)

    Wei Ma

    2018-03-01

    Full Text Available Mobile Augmented Reality (MAR) systems are becoming ideal platforms for visualization, permitting users to better comprehend and interact with spatial information. This technological development has, in turn, prompted efforts to enhance mechanisms for registering virtual objects in real world contexts. Most existing AR 3D registration techniques lack the scene recognition capabilities needed to describe accurately the positioning of virtual objects in scenes representing reality. Moreover, the application of such registration methods in indoor AR-GIS systems is further impeded by the limited capacity of these systems to detect the geometry and semantic information in indoor environments. In this paper, we propose a novel method for fusing virtual objects and indoor scenes, based on indoor scene recognition technology. To accomplish scene fusion in AR-GIS, we first detect key points in reference images. Then, we perform interior layout extraction using a Fully Connected Networks (FCN) algorithm to acquire layout coordinate points for the tracking targets. We detect and recognize the target scene in a video frame image to track targets and estimate the camera pose. In this method, virtual 3D objects are fused precisely to a real scene, according to the camera pose and the previously extracted layout coordinate points. Our results demonstrate that this approach enables accurate fusion of virtual objects with representations of real world indoor environments. Based on this fusion technique, users can better grasp virtual three-dimensional representations on an AR-GIS platform.
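
    The camera-pose step, solving a perspective-n-point problem from 2D-3D correspondences between detected layout points and their room coordinates, can be sketched with OpenCV. The room corners, intrinsics and ground-truth pose below are synthetic placeholders rather than values from the paper.

    ```python
    import cv2
    import numpy as np

    # Recover the camera pose from layout coordinate points (illustrative sketch).
    room_pts = np.array([[0, 0, 0], [4, 0, 0], [4, 0, 2.5], [0, 0, 2.5],
                         [0, 3, 0], [4, 3, 0], [4, 3, 2.5], [0, 3, 2.5]],
                        dtype=np.float64)                      # room corners in metres
    K = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
    dist = np.zeros(5)

    # Simulate the detection: project the corners from a known ground-truth pose
    rvec_true = np.array([0.1, -0.2, 0.05])
    tvec_true = np.array([-2.0, 1.0, 6.0])
    image_pts, _ = cv2.projectPoints(room_pts, rvec_true, tvec_true, K, dist)

    # Solve PnP on the 2D-3D correspondences to estimate the camera pose
    ok, rvec, tvec = cv2.solvePnP(room_pts, image_pts, K, dist,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    R, _ = cv2.Rodrigues(rvec)
    camera_position = (-R.T @ tvec.reshape(3, 1)).ravel()      # camera centre in room frame
    print(camera_position)
    ```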

  7. 3D Animation Essentials

    CERN Document Server

    Beane, Andy

    2012-01-01

    The essential fundamentals of 3D animation for aspiring 3D artists 3D is everywhere--video games, movie and television special effects, mobile devices, etc. Many aspiring artists and animators have grown up with 3D and computers, and naturally gravitate to this field as their area of interest. Bringing a blend of studio and classroom experience to offer you thorough coverage of the 3D animation industry, this must-have book shows you what it takes to create compelling and realistic 3D imagery. Serves as the first step to understanding the language of 3D and computer graphics (CG)Covers 3D anim

  8. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest for 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century.The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  9. TH-AB-202-08: A Robust Real-Time Surface Reconstruction Method On Point Clouds Captured From a 3D Surface Photogrammetry System

    International Nuclear Information System (INIS)

    Liu, W; Sawant, A; Ruan, D

    2016-01-01

    Purpose: Surface photogrammetry (e.g. VisionRT, C-Rad) provides a noninvasive way to obtain high-frequency measurement for patient motion monitoring in radiotherapy. This work aims to develop a real-time surface reconstruction method on the acquired point clouds, whose acquisitions are subject to noise and missing measurements. In contrast to existing surface reconstruction methods that are usually computationally expensive, the proposed method reconstructs continuous surfaces with comparable accuracy in real-time. Methods: The key idea in our method is to solve and propagate a sparse linear relationship from the point cloud (measurement) manifold to the surface (reconstruction) manifold, taking advantage of the similarity in local geometric topology in both manifolds. With consistent point cloud acquisition, we propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, building the point correspondences by the iterative closest point (ICP) method. To accommodate changing noise levels and/or presence of inconsistent occlusions, we further propose a modified sparse regression (MSR) model to account for the large and sparse error built by ICP, with a Laplacian prior. We evaluated our method on both clinical acquired point clouds under consistent conditions and simulated point clouds with inconsistent occlusions. The reconstruction accuracy was evaluated w.r.t. root-mean-squared-error, by comparing the reconstructed surfaces against those from the variational reconstruction method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter accuracy, with mean reconstruction time reduced from 82.23 seconds to 0.52 seconds and 0.94 seconds, respectively. On simulated point cloud with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent performance despite the introduced occlusions. Conclusion: We have developed a real-time
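
    The sparse-regression idea, approximating a new point cloud as a sparse linear combination of training clouds and propagating the same weights to the surface, can be sketched with a standard lasso solver. The data, the regularisation weight and the use of scikit-learn's Lasso (rather than the authors' solver, and without the Laplacian-prior MSR extension) are assumptions for illustration.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    # Approximate the target point cloud as a sparse mix of training clouds that
    # are assumed to be in point-wise correspondence (e.g. after ICP), then apply
    # the weights to the corresponding training surfaces. All data are synthetic.
    rng = np.random.default_rng(0)
    n_points, n_train = 3000, 40

    train_clouds = rng.normal(size=(n_train, n_points * 3))      # flattened training clouds
    true_w = np.zeros(n_train)
    true_w[[3, 17, 25]] = [0.5, 0.3, 0.2]                        # new frame ~= sparse mix
    target_cloud = true_w @ train_clouds + 0.01 * rng.normal(size=n_points * 3)

    lasso = Lasso(alpha=0.01, fit_intercept=False)
    lasso.fit(train_clouds.T, target_cloud)                      # columns = training clouds
    w = lasso.coef_

    # Propagate the sparse weights to the (denser) training surfaces
    train_surfaces = rng.normal(size=(n_train, 10000 * 3))       # stand-in for meshed surfaces
    reconstructed_surface = w @ train_surfaces
    ```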

  10. An embedded real-time red peach detection system based on an OV7670 camera, ARM Cortex-M4 processor and 3D Look-Up Tables

    OpenAIRE

    Teixidó Cairol, Mercè; Font Calafell, Davinia; Pallejà Cabrè, Tomàs; Tresánchez Ribes, Marcel; Nogués Aymamí, Miquel; Palacín Roca, Jordi

    2012-01-01

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future...

  11. Interaction Control Protocols for Distributed Multi-user Multi-camera Environments

    Directory of Open Access Journals (Sweden)

    Gareth W Daniel

    2003-10-01

    Full Text Available Video-centred communication (e.g., video conferencing, multimedia online learning, traffic monitoring, and surveillance is becoming a customary activity in our lives. The management of interactions in such an environment is a complicated HCI issue. In this paper, we present our study on a collection of interaction control protocols for distributed multiuser multi-camera environments. These protocols facilitate different approaches to managing a user's entitlement for controlling a particular camera. We describe a web-based system that allows multiple users to manipulate multiple cameras in varying remote locations. The system was developed using the Java framework, and all protocols discussed have been incorporated into the system. Experiments were designed and conducted to evaluate the effectiveness of these protocols, and to enable the identification of various human factors in a distributed multi-user and multi-camera environment. This work provides an insight into the complexity associated with the interaction management in video-centred communication. It can also serve as a conceptual and experimental framework for further research in this area.

  12. 3-D Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon

    , if this significant reduction in the element count can still provide precise and robust 3-D vector flow estimates in a plane. The study concludes that the RC array is capable of estimating precise 3-D vector flow both in a plane and in a volume, despite the low channel count. However, some inherent new challenges...... ultrasonic vector flow estimation and bring it a step closer to a clinical application. A method for high frame rate 3-D vector flow estimation in a plane using the transverse oscillation method combined with a 1024 channel 2-D matrix array is presented. The proposed method is validated both through phantom...... hampers the task of real-time processing. In a second study, some of the issues with the 2-D matrix array are solved by introducing a 2-D row-column (RC) addressing array with only 62 + 62 elements. It is investigated both through simulations and via experimental setups in various flow conditions

  13. EUROPEANA AND 3D

    Directory of Open Access Journals (Sweden)

    D. Pletinckx

    2012-09-01

    Full Text Available The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this for the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We have currently a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrate 3D objects in physical or virtual museums. Nevertheless, Europeana has a tremendous potential as a multi-facetted virtual museum. Finally, 3D has a large potential to act as a hub of information, linking to related 2D imagery, texts, video, sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user and what metadata is needed to support the semantic linking.

  14. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    Science.gov (United States)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies become attractive for movie theater operators, i.e. interactive 3D games. In this paper, we present a case that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use the effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide for a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  15. Computational imaging with multi-camera time-of-flight systems

    KAUST Repository

    Shrestha, Shikhar

    2016-07-11

    Depth cameras are a ubiquitous technology used in a wide range of applications, including robotic and machine vision, human computer interaction, autonomous vehicles as well as augmented and virtual reality. In this paper, we explore the design and applications of phased multi-camera time-of-flight (ToF) systems. We develop a reproducible hardware system that allows for the exposure times and waveforms of up to three cameras to be synchronized. Using this system, we analyze waveform interference between multiple light sources in ToF applications and propose simple solutions to this problem. Building on the concept of orthogonal frequency design, we demonstrate state-of-the-art results for instantaneous radial velocity capture via Doppler time-of-flight imaging and we explore new directions for optically probing global illumination, for example by de-scattering dynamic scenes and by non-line-of-sight motion detection via frequency gating. © 2016 ACM.

  16. VideoWeb Dataset for Multi-camera Activities and Non-verbal Communication

    Science.gov (United States)

    Denina, Giovanni; Bhanu, Bir; Nguyen, Hoang Thanh; Ding, Chong; Kamal, Ahmed; Ravishankar, Chinya; Roy-Chowdhury, Amit; Ivers, Allen; Varda, Brenda

    Human-activity recognition is one of the most challenging problems in computer vision. Researchers from around the world have tried to solve this problem and have come a long way in recognizing simple motions and atomic activities. As the computer vision community heads toward fully recognizing human activities, a challenging and labeled dataset is needed. To respond to that need, we collected a dataset of realistic scenarios in a multi-camera network environment (VideoWeb) involving multiple persons performing dozens of different repetitive and non-repetitive activities. This chapter describes the details of the dataset. We believe that this VideoWeb Activities dataset is unique and it is one of the most challenging datasets available today. The dataset is publicly available online at http://vwdata.ee.ucr.edu/ along with the data annotation.

  17. Eyes on the Earth 3D

    Science.gov (United States)

    Kulikov, anton I.; Doronila, Paul R.; Nguyen, Viet T.; Jackson, Randal K.; Greene, William M.; Hussey, Kevin J.; Garcia, Christopher M.; Lopez, Christian A.

    2013-01-01

    Eyes on the Earth 3D software gives scientists, and the general public, a realtime, 3D interactive means of accurately viewing the real-time locations, speed, and values of recently collected data from several of NASA's Earth Observing Satellites using a standard Web browser (climate.nasa.gov/eyes). Anyone with Web access can use this software to see where the NASA fleet of these satellites is now, or where they will be up to a year in the future. The software also displays several Earth Science Data sets that have been collected on a daily basis. This application uses a third-party, 3D, realtime, interactive game engine called Unity 3D to visualize the satellites and is accessible from a Web browser.

  18. LandSIM3D: real-time 3D modelling of geographic data

    Directory of Open Access Journals (Sweden)

    Lambo Srl

    2009-03-01

    Full Text Available LandSIM3D: real-time 3D modelling of geographic data. LandSIM3D makes it possible to model an existing, geo-referenced landscape in 3D in only a few hours, offering powerful landscape analysis and understanding tools. 3D projects can then be inserted into the existing landscape with ease and precision. The project alternatives and their impact can then be visualized and studied in their immediate environment. The complex future evolution of the landscape can also be simulated, and the landscape model can be manipulated interactively and better shared with colleagues. For that reason, LandSIM3D is different from traditional 3D imagery solutions, normally reserved for computer graphics experts. For more information about LandSIM3D, go to www.landsim3d.com.

  19. Refined 3d-3d correspondence

    Energy Technology Data Exchange (ETDEWEB)

    Alday, Luis F.; Genolini, Pietro Benetti; Bullimore, Mathew; Loon, Mark van [Mathematical Institute, University of Oxford, Andrew Wiles Building,Radcliffe Observatory Quarter, Woodstock Road, Oxford, OX2 6GG (United Kingdom)

    2017-04-28

    We explore aspects of the correspondence between Seifert 3-manifolds and 3d N=2 supersymmetric theories with a distinguished abelian flavour symmetry. We give a prescription for computing the squashed three-sphere partition functions of such 3d N=2 theories constructed from boundary conditions and interfaces in a 4d N=2* theory, mirroring the construction of Seifert manifold invariants via Dehn surgery. This is extended to include links in the Seifert manifold by the insertion of supersymmetric Wilson-’t Hooft loops in the 4d N=2* theory. In the presence of a mass parameter for the distinguished flavour symmetry, we recover aspects of refined Chern-Simons theory with complex gauge group, and in particular construct an analytic continuation of the S-matrix of refined Chern-Simons theory.

  20. A 3d-3d appetizer

    Energy Technology Data Exchange (ETDEWEB)

    Pei, Du; Ye, Ke [Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, CA, 91125 (United States)

    2016-11-02

    We test the 3d-3d correspondence for theories that are labeled by Lens spaces. We find a full agreement between the index of the 3d N=2 “Lens space theory” T[L(p,1)] and the partition function of complex Chern-Simons theory on L(p,1). In particular, for p=1, we show how the familiar S^3 partition function of Chern-Simons theory arises from the index of a free theory. For large p, we find that the index of T[L(p,1)] becomes a constant independent of p. In addition, we study T[L(p,1)] on the squashed three-sphere S_b^3. This enables us to see clearly, at the level of the partition function, to what extent G_ℂ complex Chern-Simons theory can be thought of as two copies of Chern-Simons theory with compact gauge group G.

  1. 3D virtuel udstilling

    DEFF Research Database (Denmark)

    Tournay, Bruno; Rüdiger, Bjarne

    2006-01-01

    3D digital model of the Architecture School's courtyard with a virtual exhibition of graduation projects from the summer 2006 graduating class. 10 pp.

  2. Underwater 3D filming

    Directory of Open Access Journals (Sweden)

    Roberto Rinaldi

    2014-12-01

    Full Text Available After an experimental phase of many years, 3D filming is now effective and successful. Improvements are still possible, but the film industry has achieved memorable success at the 3D movie box office due to the overall quality of its products. Special environments such as space (“Gravity”) and the underwater realm seem perfect for reproduction in 3D. “Filming in space” was possible in “Gravity” using special effects and computer graphics. The underwater realm is still difficult to handle. Until recently, underwater filming in 3D was not as easy and effective as filming in 2D. After almost 3 years of research, a French, Austrian and Italian team developed a perfect tool for filming underwater, in 3D, without any constraints. This allows filmmakers to bring the audience deep inside an environment where they most probably will never have the chance to be.

  3. NEW METHOD FOR THE CALIBRATION OF MULTI-CAMERA MOBILE MAPPING SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. P. Kersting

    2012-07-01

    Full Text Available Mobile Mapping Systems (MMS) allow for fast and cost-effective collection of geo-spatial information. Such systems integrate a set of imaging sensors and a position and orientation system (POS), which entails GPS and INS units. System calibration is a crucial process to ensure the attainment of the expected accuracy of such systems. It involves the calibration of the individual sensors as well as the calibration of the mounting parameters relating the system components. The mounting parameters of multi-camera MMS include two sets of relative orientation parameters (ROP): the lever arm offsets and the boresight angles relating the cameras and the IMU body frame and the ROP among the cameras (in the absence of GPS/INS data). In this paper, a novel single-step calibration method, which has the ability of estimating these two sets of ROP, is devised. Besides the ability to estimate the ROP among the cameras, the proposed method can use such parameters as prior information in the ISO procedure. The implemented procedure consists of an integrated sensor orientation (ISO) where the GPS/INS-derived position and orientation and the system mounting parameters are directly incorporated in the collinearity equations. The concept of modified collinearity equations has been used by few authors for single-camera systems. In this paper, a new modification to the collinearity equations for GPS/INS-assisted multicamera systems is introduced. Experimental results using a real dataset demonstrate the feasibility of the proposed method.

  4. Falling-incident detection and throughput enhancement in a multi-camera video-surveillance system.

    Science.gov (United States)

    Shieh, Wann-Yun; Huang, Ju-Chin

    2012-09-01

    For most elderly people, unpredictable falling incidents may occur at the corner of a staircase or in a long corridor due to body frailty. If the rescue of a falling elder, who may be fainting, is delayed, more serious injury may occur. Traditional security or video surveillance systems need caregivers to monitor a centralized screen continuously, or need the elder to wear sensors to detect falling incidents, which wastes much human effort or causes inconvenience for elders. In this paper, we propose an automatic falling-detection algorithm and implement this algorithm in a multi-camera video surveillance system. The algorithm uses each camera to fetch the images from the regions required to be monitored. It then uses a falling-pattern recognition algorithm to determine if a falling incident has occurred. If so, the system sends short messages to those who need to be notified. The algorithm has been implemented on a DSP-based hardware acceleration board as a proof of functionality. Simulation results show that the accuracy of falling detection reaches at least 90% and that the throughput of a four-camera surveillance system can be improved by about 2.1 times. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
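
    As a rough illustration of the kind of per-camera processing involved (not the authors' falling-pattern recognition algorithm, which runs on a DSP board), the sketch below flags a possible fall when the largest foreground blob stays wider than tall for a sustained period. The video file name, thresholds and frame-rate assumption are placeholders.

    ```python
    import cv2
    import numpy as np

    # Generic per-camera fall heuristic: background subtraction, largest foreground
    # blob, and an alarm when its bounding box stays wider than tall for a while.
    backsub = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)
    kernel = np.ones((3, 3), np.uint8)
    low_posture_frames, ALARM_AFTER = 0, 30          # about 1 s at 30 fps (assumed)

    cap = cv2.VideoCapture("corridor_cam.avi")       # hypothetical camera feed
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = backsub.apply(frame)
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel, iterations=2)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4.x signature
        if contours:
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            low_posture_frames = low_posture_frames + 1 if w > 1.3 * h else 0
            if low_posture_frames >= ALARM_AFTER:
                print("possible fall detected - notify caregiver")   # e.g. send SMS here
                low_posture_frames = 0
    cap.release()
    ```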

  5. New Method for the Calibration of Multi-Camera Mobile Mapping Systems

    Science.gov (United States)

    Kersting, A. P.; Habib, A.; Rau, J.

    2012-07-01

    Mobile Mapping Systems (MMS) allow for fast and cost-effective collection of geo-spatial information. Such systems integrate a set of imaging sensors and a position and orientation system (POS), which entails GPS and INS units. System calibration is a crucial process to ensure the attainment of the expected accuracy of such systems. It involves the calibration of the individual sensors as well as the calibration of the mounting parameters relating the system components. The mounting parameters of multi-camera MMS include two sets of relative orientation parameters (ROP): the lever arm offsets and the boresight angles relating the cameras and the IMU body frame and the ROP among the cameras (in the absence of GPS/INS data). In this paper, a novel single-step calibration method, which has the ability of estimating these two sets of ROP, is devised. Besides the ability to estimate the ROP among the cameras, the proposed method can use such parameters as prior information in the ISO procedure. The implemented procedure consists of an integrated sensor orientation (ISO) where the GPS/INS-derived position and orientation and the system mounting parameters are directly incorporated in the collinearity equations. The concept of modified collinearity equations has been used by few authors for single-camera systems. In this paper, a new modification to the collinearity equations for GPS/INS-assisted multicamera systems is introduced. Experimental results using a real dataset demonstrate the feasibility of the proposed method.
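
    For orientation, one common textbook form of the GPS/INS-assisted point-positioning model from which such modified collinearity equations are derived is shown below; the notation is generic and may differ from the paper's, which additionally carries the camera-to-camera ROP as unknowns or prior information.

    ```latex
    % Generic GPS/INS-assisted point positioning model (notation may differ from the paper)
    r^{m}_{I} \;=\; r^{m}_{b}(t) \;+\; R^{m}_{b}(t)\, r^{b}_{c}
              \;+\; \lambda_{i}\, R^{m}_{b}(t)\, R^{b}_{c}\, r^{c}_{i}
    ```

    Here r^m_I is the mapping-frame coordinate of object point I; r^m_b(t) and R^m_b(t) are the GPS/INS-derived position and orientation of the IMU body frame at exposure time t; r^b_c (lever-arm offset) and R^b_c (boresight rotation) are the mounting parameters of camera c being calibrated; r^c_i is the image vector of point i; and λ_i is a point-dependent scale factor. Eliminating λ_i yields collinearity equations in which the mounting parameters appear directly as unknowns.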

  6. 3D histomorphometric quantification from 3D computed tomography

    International Nuclear Information System (INIS)

    Oliveira, L.F. de; Lopes, R.T.

    2004-01-01

    The histomorphometric analysis is based on stereologic concepts and was originally applied to biologic samples. This technique has been used to evaluate different complex structures such as ceramic filters, net structures and cancellous objects, that is, objects with inner connected structures. The measured histomorphometric parameters of structure are: sample volume to total reconstructed volume (BV/TV), sample surface to sample volume (BS/BV), connection thickness (Tb.Th), connection number (Tb.N) and connection separation (Tb.Sp). The anisotropy was evaluated as well. These parameters constitute the basis of histomorphometric analysis. The quantification is realized over cross-sections recovered by cone beam reconstruction, where a real-time microfocus radiographic system is used as the tomographic system. The three-dimensional (3D) histomorphometry obtained from tomography corresponds to an evolution of the conventional method, which is based on 2D analysis, and is more coherent with the morphologic and topologic context of the sample. This work shows results from 3D histomorphometric quantification used to characterize objects examined by 3D computed tomography. The results, which characterize the internal structures of ceramic foams with different pore densities, are compared to results from conventional methods.

  7. Underwater 3D filming

    OpenAIRE

    Rinaldi, Roberto

    2014-01-01

    After an experimental phase of many years, 3D filming is now effective and successful. Improvements are still possible, but the film industry achieved memorable success on 3D movie’s box offices due to the overall quality of its products. Special environments such as space (“Gravity” ) and the underwater realm look perfect to be reproduced in 3D. “Filming in space” was possible in “Gravity” using special effects and computer graphic. The underwater realm is still difficult to be handled. Unde...

  8. Blender 3D cookbook

    CERN Document Server

    Valenza, Enrico

    2015-01-01

    This book is aimed at the professionals that already have good 3D CGI experience with commercial packages and have now decided to try the open source Blender and want to experiment with something more complex than the average tutorials on the web. However, it's also aimed at the intermediate Blender users who simply want to go some steps further. It's taken for granted that you already know how to move inside the Blender interface, that you already have 3D modeling knowledge, and also that of basic 3D modeling and rendering concepts, for example, edge-loops, n-gons, or samples. In any case, it'

  9. DELTA 3D PRINTER

    Directory of Open Access Journals (Sweden)

    ȘOVĂILĂ Florin

    2016-07-01

    Full Text Available 3D printing is a widely used process in industry, the generic name being “rapid prototyping”. The essential advantage of a 3D printer is that it allows designers to produce a prototype in a very short time, which is then tested and quickly remodeled, considerably reducing the time required to get from the prototype phase to the final product. At the same time, through this technique we can achieve components with very precise forms, complex pieces that, with classical methods, could be produced only after a large amount of time. In this paper, the stages of executing a 3D model are presented, as well as the physical realization of a Delta 3D printer based on that model.

  10. Making 3D movies of Northern Lights

    Science.gov (United States)

    Hivon, Eric; Mouette, Jean; Legault, Thierry

    2017-10-01

    We describe the steps necessary to create three-dimensional (3D) movies of Northern Lights or Aurorae Borealis out of real-time images taken with two distant high-resolution fish-eye cameras. Astrometric reconstruction of the visible stars is used to model the optical mapping of each camera and correct for it in order to properly align the two sets of images. Examples of the resulting movies can be seen at http://www.iap.fr/aurora3d

  11. Professional Papervision3D

    CERN Document Server

    Lively, Michael

    2010-01-01

    Professional Papervision3D describes how Papervision3D works and how real world applications are built, with a clear look at essential topics such as building websites and games, creating virtual tours, and Adobe's Flash 10. Readers learn important techniques through hands-on applications, and build on those skills as the book progresses. The companion website contains all code examples, video step-by-step explanations, and a collada repository.

  12. Design and implementation of real-time multi-sensor vision systems

    CERN Document Server

    Popovic, Vladan; Cogal, Ömer; Akin, Abdulkadir; Leblebici, Yusuf

    2017-01-01

    This book discusses the design of multi-camera systems and their application to fields such as virtual reality, gaming, the film industry, medicine, the automotive industry, drones, etc. The authors cover the basics of image formation, algorithms for stitching a panoramic image from multiple cameras, and several real-time hardware system architectures for producing panoramic videos. Several specific applications of multi-camera systems are presented, such as depth estimation, high dynamic range imaging, and medical imaging.
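
    A minimal offline version of the stitching step can be sketched with OpenCV's high-level Stitcher API; the input file names are hypothetical, and real-time multi-camera systems such as those covered in the book typically precompute the warps (often in hardware) rather than re-estimating them for every frame.

    ```python
    import cv2

    # Stitch one frame from each camera into a panorama (illustrative sketch).
    frames = [cv2.imread(f) for f in ("cam0.png", "cam1.png", "cam2.png")]
    frames = [f for f in frames if f is not None]    # skip missing files

    if frames:
        stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
        status, panorama = stitcher.stitch(frames)
        if status == cv2.Stitcher_OK:
            cv2.imwrite("panorama.png", panorama)
        else:
            print("stitching failed with status", status)
    ```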

  13. Wearable 3D measurement

    Science.gov (United States)

    Manabe, Yoshitsugu; Imura, Masataka; Tsuchiya, Masanobu; Yasumuro, Yoshihiro; Chihara, Kunihiro

    2003-01-01

    Wearable 3D measurement makes it possible to acquire 3D information about an object or an environment using a wearable computer. Recently, in Japan, we have become able to send voice and sound as well as pictures by mobile phone, and it will soon become easy to capture and send short movie clips with it as well. Meanwhile, computers have become compact and high-performance, and can easily connect to the Internet via wireless LAN. In the near future, we will be able to use wearable computers anytime and anywhere, and so three-dimensional data measured by a wearable computer could be sent as a new kind of data. This paper proposes a method and system for measuring the three-dimensional data of an object using a wearable computer. The method uses slit light projection for 3D measurement and the user's motion instead of a scanning system.

  14. 3D Digital Modelling

    DEFF Research Database (Denmark)

    Hundebøl, Jesper

    wave of new building information modelling tools demands further investigation, not least because of industry representatives' somewhat coarse parlance: Now the word is spreading - 3D digital modelling is nothing less than a revolution, a shift of paradigm, a new alphabet... Research questions. Based...... on empirical probes (interviews, observations, written inscriptions) within the Danish construction industry this paper explores the organizational and managerial dynamics of 3D Digital Modelling. The paper intends to - Illustrate how the network of (non-)human actors engaged in the promotion (and arrest) of 3...... important to appreciate the analysis. Before turning to the presentation of preliminary findings and a discussion of 3D digital modelling, it begins, however, with an outline of industry-specific ICT strategic issues. Paper type. Multi-site field study

  15. 3D ARCHITECTURAL VIDEOMAPPING

    Directory of Open Access Journals (Sweden)

    R. Catanese

    2013-07-01

    3D architectural mapping is a video projection technique that relies on a survey of a chosen building in order to achieve a perfect correspondence between its shapes and the projected images. As a performative kind of audiovisual artifact, the real event of the 3D mapping is a combination of a pre-registered video animation file with a real architecture. This new kind of visual art is becoming very popular, and its broad audience success testifies to new expressive possibilities in the field of urban design. The case study presented here was carried out in Pisa for the Luminara feast in 2012.

  16. Interaktiv 3D design

    DEFF Research Database (Denmark)

    Villaume, René Domine; Ørstrup, Finn Rude

    2002-01-01

    The project investigates the potential of interactive 3D design via the Internet. Architect Jørn Utzon's project for Espansiva was developed as a building system with the goal of enabling a multitude of plan layouts and a multitude of facade and spatial configurations. The system's building components have been digitized as 3D elements and made available. Via the Internet it is now possible to assemble and test an endless range of the building types the system was conceived and developed for.

  17. 3D Projection Installations

    DEFF Research Database (Denmark)

    Halskov, Kim; Johansen, Stine Liv; Bach Mikkelsen, Michelle

    2014-01-01

    Three-dimensional projection installations are particular kinds of augmented spaces in which a digital 3-D model is projected onto a physical three-dimensional object, thereby fusing the digital content and the physical object. Based on interaction design research and media studies, this article ...... Fingerplan to Loop City, is a 3-D projection installation presenting the history and future of city planning for the Copenhagen area in Denmark. The installation was presented as part of the 12th Architecture Biennale in Venice in 2010....

  18. Herramientas SIG 3D

    Directory of Open Access Journals (Sweden)

    Francisco R. Feito Higueruela

    2010-04-01

    Applications of Geographical Information Systems in several fields of archaeology have been increasing in recent years. Recent advances in these technologies make it possible to work with more realistic 3D models. In this paper we introduce a new paradigm for such systems, the GIS Tetrahedron, in which we define the fundamental elements of GIS in order to provide a better understanding of their capabilities. At the same time, the basic 3D characteristics of some commercial and open-source software are described, as well as their application to some examples from archaeological research.

  19. Bootstrapping 3D fermions

    Energy Technology Data Exchange (ETDEWEB)

    Iliesiu, Luca [Joseph Henry Laboratories, Princeton University, Princeton, NJ 08544 (United States); Kos, Filip; Poland, David [Department of Physics, Yale University, New Haven, CT 06520 (United States); Pufu, Silviu S. [Joseph Henry Laboratories, Princeton University, Princeton, NJ 08544 (United States); Simmons-Duffin, David [School of Natural Sciences, Institute for Advanced Study, Princeton, NJ 08540 (United States); Yacoby, Ran [Joseph Henry Laboratories, Princeton University, Princeton, NJ 08544 (United States)

    2016-03-17

    We study the conformal bootstrap for a 4-point function of fermions 〈ψψψψ〉 in 3D. We first introduce an embedding formalism for 3D spinors and compute the conformal blocks appearing in fermion 4-point functions. Using these results, we find general bounds on the dimensions of operators appearing in the ψ×ψ OPE, and also on the central charge C_T. We observe features in our bounds that coincide with scaling dimensions in the Gross-Neveu models at large N. We also speculate that other features could coincide with a fermionic CFT containing no relevant scalar operators.

  20. 3D Display of Spacecraft Dynamics Using Real Telemetry

    Directory of Open Access Journals (Sweden)

    Sanguk Lee

    2002-12-01

    3D display of spacecraft motion using telemetry data received from the satellite in real time is described. Telemetry data are converted to the appropriate form for 3D display by the real-time preprocessor. Stored playback telemetry data can also be processed for the display. Displaying spacecraft motion in 3D from real telemetry data provides an intuitive comprehension of spacecraft dynamics.

  1. SU-G-JeP2-04: Comparison Between Fricke-Type 3D Radiochromic Dosimeters for Real-Time Dose Distribution Measurements in MR-Guided Radiation Therapy

    International Nuclear Information System (INIS)

    Lee, H; Alqathami, M; Wang, J; Ibbott, G; Kadbi, M; Blencowe, A

    2016-01-01

    Purpose: To assess MR signal contrast for different ferrous ion compounds used in Fricke-type gel dosimeters for real-time dose measurements for MR-guided radiation therapy applications. Methods: Fricke-type gel dosimeters were prepared in 4% w/w gelatin prior to irradiation in an integrated 1.5 T MRI and 7 MV linear accelerator system (MR-Linac). 4 different ferrous ion (Fe²⁺) compounds (referred to as A, B, C, and D) were investigated for this study. Dosimeter D consisted of ferrous ammonium sulfate (FAS), which is conventionally used for Fricke dosimeters. Approximately half of each cylindrical dosimeter (45 mm diameter, 80 mm length) was irradiated to ∼17 Gy. MR imaging during irradiation was performed with the MR-Linac using a balanced-FFE sequence of TR/TE = 5/2.4 ms. An approximate uncertainty of 5% in our dose delivery was anticipated since the MR-Linac had not yet been fully commissioned. Results: The signal intensities (SI) increased between the un-irradiated and irradiated regions by approximately 8.6%, 4.4%, 3.2%, and 4.3% after delivery of ∼2.8 Gy for dosimeters A, B, C, and D, respectively. After delivery of ∼17 Gy, the SI had increased by 24.4%, 21.0%, 3.1%, and 22.2% compared to the un-irradiated regions. The increase in SI with respect to dose was linear for dosimeters A, B, and D with slopes of 0.0164, 0.0251, and 0.0236 Gy⁻¹ (R² = 0.92, 0.97, and 0.96), respectively. Visually, dosimeter A had the greatest optical contrast from yellow to purple in the irradiated region. Conclusion: This study demonstrated the feasibility of using Fricke-type dosimeters for real-time dose measurements with the greatest optical and MR contrast for dosimeter A. We also demonstrated the need to investigate Fe²⁺ compounds beyond the conventionally utilized FAS compound in order to improve the MR signal contrast in 3D dosimeters used for MR-guided radiation therapy. This material is based upon work supported by the National Science Foundation Graduate
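
    As a rough illustration of the linear response reported above, and assuming the quoted slopes apply to the fractional SI increase relative to the un-irradiated region, the dose corresponding to a measured signal change could be estimated as follows.

      # Back-of-the-envelope use of the linear response reported in the abstract:
      # relative SI increase ≈ slope * dose.  Assumes the slope applies to the
      # fractional change with respect to the un-irradiated region.
      SLOPES_PER_GY = {"A": 0.0164, "B": 0.0251, "D": 0.0236}   # values from the abstract

      def estimated_dose_gy(si_irradiated, si_unirradiated, dosimeter="A"):
          relative_increase = (si_irradiated - si_unirradiated) / si_unirradiated
          return relative_increase / SLOPES_PER_GY[dosimeter]

      print(estimated_dose_gy(1.244, 1.000, "A"))   # a 24.4% increase -> roughly 15 Gy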

  2. SU-G-JeP2-04: Comparison Between Fricke-Type 3D Radiochromic Dosimeters for Real-Time Dose Distribution Measurements in MR-Guided Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Lee, H; Alqathami, M; Wang, J; Ibbott, G [UT MD Anderson Cancer Center, Houston, TX (United States); Kadbi, M [MR Therapy, Philips healthTech, Cleveland, OH (United States); Blencowe, A [The University of South Australia, South Australia, SA (Australia)

    2016-06-15

    Purpose: To assess MR signal contrast for different ferrous ion compounds used in Fricke-type gel dosimeters for real-time dose measurements for MR-guided radiation therapy applications. Methods: Fricke-type gel dosimeters were prepared in 4% w/w gelatin prior to irradiation in an integrated 1.5 T MRI and 7 MV linear accelerator system (MR-Linac). 4 different ferrous ion (Fe²⁺) compounds (referred to as A, B, C, and D) were investigated for this study. Dosimeter D consisted of ferrous ammonium sulfate (FAS), which is conventionally used for Fricke dosimeters. Approximately half of each cylindrical dosimeter (45 mm diameter, 80 mm length) was irradiated to ∼17 Gy. MR imaging during irradiation was performed with the MR-Linac using a balanced-FFE sequence of TR/TE = 5/2.4 ms. An approximate uncertainty of 5% in our dose delivery was anticipated since the MR-Linac had not yet been fully commissioned. Results: The signal intensities (SI) increased between the un-irradiated and irradiated regions by approximately 8.6%, 4.4%, 3.2%, and 4.3% after delivery of ∼2.8 Gy for dosimeters A, B, C, and D, respectively. After delivery of ∼17 Gy, the SI had increased by 24.4%, 21.0%, 3.1%, and 22.2% compared to the un-irradiated regions. The increase in SI with respect to dose was linear for dosimeters A, B, and D with slopes of 0.0164, 0.0251, and 0.0236 Gy⁻¹ (R² = 0.92, 0.97, and 0.96), respectively. Visually, dosimeter A had the greatest optical contrast from yellow to purple in the irradiated region. Conclusion: This study demonstrated the feasibility of using Fricke-type dosimeters for real-time dose measurements with the greatest optical and MR contrast for dosimeter A. We also demonstrated the need to investigate Fe²⁺ compounds beyond the conventionally utilized FAS compound in order to improve the MR signal contrast in 3D dosimeters used for MR-guided radiation therapy. This material is based upon work supported by the National Science Foundation

  3. Shaping 3-D boxes

    DEFF Research Database (Denmark)

    Stenholt, Rasmus; Madsen, Claus B.

    2011-01-01

    Enabling users to shape 3-D boxes in immersive virtual environments is a non-trivial problem. In this paper, a new family of techniques for creating rectangular boxes of arbitrary position, orientation, and size is presented and evaluated. These new techniques are based solely on position data...

  4. 3D Wire 2015

    DEFF Research Database (Denmark)

    Jordi, Moréton; F, Escribano; J. L., Farias

    This document is a general report on the implementation of gamification in the 3D Wire 2015 event. As the second gamification experience in this event, we have delved more deeply into the previous objectives (attracting the public to exhibition areas that were less frequented in previous years and enhancing networking) and have...

  5. 3D Harmonic Echocardiography:

    NARCIS (Netherlands)

    M.M. Voormolen (Marco)

    2007-01-01

    Three-dimensional (3D) echocardiography has recently developed from an experimental technique in the '90s into an imaging modality for daily clinical practice. This dissertation describes the considerations, implementation, validation and clinical application of a unique

  6. 3D Surgical Simulation

    OpenAIRE

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2010-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive ...

  7. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of the objects of a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that make alternative methods sometimes more attractive are: (1) real-time data capturing, (2) eye-safety, (3) portability, and (4) work distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
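
    As a pointer to the stereo-vision option mentioned above, here is the textbook disparity-to-depth relation for a rectified camera pair (not taken from the presentation itself); the numbers are illustrative.

      # Textbook relation behind rectified stereo vision: depth Z = f * B / disparity.
      def stereo_depth(disparity_px, focal_px, baseline_m):
          """Depth from disparity for a rectified stereo pair."""
          if disparity_px <= 0:
              raise ValueError("disparity must be positive for a finite depth")
          return focal_px * baseline_m / disparity_px

      print(stereo_depth(disparity_px=12.5, focal_px=1200.0, baseline_m=0.25))  # ~24 m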

  8. Development of a real time multiple target, multi camera tracker for civil security applications

    Science.gov (United States)

    Åkerlund, Hans

    2009-09-01

    A surveillance system has been developed that can use multiple TV cameras to detect and track personnel and objects in real time in public areas. The document describes the development and the system setup. The system is called NIVS (Networked Intelligent Video Surveillance). Persons in the images are tracked and displayed on a 3D map of the surveyed area.

  9. 3D Laser Scanner for Underwater Manipulation

    Directory of Open Access Journals (Sweden)

    Albert Palomer

    2018-04-01

    Nowadays, research in autonomous underwater manipulation has demonstrated simple applications like picking an object from the sea floor, turning a valve or plugging and unplugging a connector. These are fairly simple tasks compared with those already demonstrated by the mobile robotics community, which include, among others, safe arm motion within areas populated with a priori unknown obstacles or the recognition and location of objects based on their 3D model to grasp them. Kinect-like 3D sensors have contributed significantly to the advance of mobile manipulation providing 3D sensing capabilities in real-time at low cost. Unfortunately, the underwater robotics community is lacking a 3D sensor with similar capabilities to provide rich 3D information of the work space. In this paper, we present a new underwater 3D laser scanner and demonstrate its capabilities for underwater manipulation. In order to use this sensor in conjunction with manipulators, a calibration method to find the relative position between the manipulator and the 3D laser scanner is presented. Then, two different advanced underwater manipulation tasks beyond the state of the art are demonstrated using two different manipulation systems. First, an eight Degrees of Freedom (DoF) fixed-base manipulator system is used to demonstrate arm motion within a work space populated with a priori unknown fixed obstacles. Next, an eight DoF free floating Underwater Vehicle-Manipulator System (UVMS) is used to autonomously grasp an object from the bottom of a water tank.

  10. 3D Laser Scanner for Underwater Manipulation.

    Science.gov (United States)

    Palomer, Albert; Ridao, Pere; Youakim, Dina; Ribas, David; Forest, Josep; Petillot, Yvan

    2018-04-04

    Nowadays, research in autonomous underwater manipulation has demonstrated simple applications like picking an object from the sea floor, turning a valve or plugging and unplugging a connector. These are fairly simple tasks compared with those already demonstrated by the mobile robotics community, which include, among others, safe arm motion within areas populated with a priori unknown obstacles or the recognition and location of objects based on their 3D model to grasp them. Kinect-like 3D sensors have contributed significantly to the advance of mobile manipulation providing 3D sensing capabilities in real-time at low cost. Unfortunately, the underwater robotics community is lacking a 3D sensor with similar capabilities to provide rich 3D information of the work space. In this paper, we present a new underwater 3D laser scanner and demonstrate its capabilities for underwater manipulation. In order to use this sensor in conjunction with manipulators, a calibration method to find the relative position between the manipulator and the 3D laser scanner is presented. Then, two different advanced underwater manipulation tasks beyond the state of the art are demonstrated using two different manipulation systems. First, an eight Degrees of Freedom (DoF) fixed-base manipulator system is used to demonstrate arm motion within a work space populated with a priori unknown fixed obstacles. Next, an eight DoF free floating Underwater Vehicle-Manipulator System (UVMS) is used to autonomously grasp an object from the bottom of a water tank.

  11. Tangible 3D Modelling

    DEFF Research Database (Denmark)

    Hejlesen, Aske K.; Ovesen, Nis

    2012-01-01

    This paper presents an experimental approach to teaching 3D modelling techniques in an Industrial Design programme. The approach includes the use of tangible free form models as tools for improving the overall learning. The paper is based on lecturer and student experiences obtained through...... facilitated discussions during the course as well as through a survey distributed to the participating students. The analysis of the experiences shows a mixed picture consisting of both benefits and limits to the experimental technique. A discussion about the applicability of the technique and about...

  12. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    Science.gov (United States)

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

    Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or narrow overlapping FOVs in many applications, which pose a huge challenge to global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras’ coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
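
    The authors' code is not reproduced in the record; the sketch below only illustrates the kind of Levenberg–Marquardt refinement described, estimating the rigid transform between two cameras from target feature points under a simple pinhole model with synthetic data.

      # Sketch of the optimization step described above: refine the rigid transform
      # between two cameras by minimizing reprojection error of target feature
      # points with Levenberg-Marquardt.  Synthetic data; not the authors' code.
      import numpy as np
      from scipy.optimize import least_squares
      from scipy.spatial.transform import Rotation

      def project(points_cam, K):
          """Pinhole projection of Nx3 camera-frame points with intrinsic matrix K."""
          uvw = (K @ points_cam.T).T
          return uvw[:, :2] / uvw[:, 2:3]

      def residuals(params, pts_cam1, uv_cam2, K2):
          """params = [rx, ry, rz, tx, ty, tz] maps camera-1 points into camera 2."""
          R = Rotation.from_rotvec(params[:3]).as_matrix()
          t = params[3:]
          pts_cam2 = pts_cam1 @ R.T + t
          return (project(pts_cam2, K2) - uv_cam2).ravel()

      # Synthetic example: known transform, noisy observations, then LM refinement.
      rng = np.random.default_rng(0)
      K2 = np.array([[900.0, 0, 640], [0, 900.0, 360], [0, 0, 1]])
      true = np.array([0.02, -0.01, 0.03, 0.8, 0.1, 0.05])
      pts1 = rng.uniform([-0.5, -0.5, 2.0], [0.5, 0.5, 3.0], size=(30, 3))
      uv2 = project(pts1 @ Rotation.from_rotvec(true[:3]).as_matrix().T + true[3:], K2)
      uv2 += rng.normal(scale=0.2, size=uv2.shape)

      fit = least_squares(residuals, x0=np.zeros(6), method="lm",
                          args=(pts1, uv2, K2))
      print(fit.x)   # should land close to `true`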

  13. Lumion 3D cookbook

    CERN Document Server

    Cardoso, Ciro

    2014-01-01

    This book offers practical recipes with step-by-step instructions and useful information to help you master producing professional architectural visualizations in Lumion. The cookbook approach means you need to think and explore how a particular feature can be applied in your project and perform the intended task. This book is written to be accessible to all Lumion users and is a useful guide to follow when becoming familiar with this cutting-edge real-time technology. This practical guide is designed for all levels of Lumion users who know how to model buildings i

  14. 3D-FPA Hybridization Improvements, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced Scientific Concepts, Inc. (ASC) is a small business that has developed a compact, eye-safe 3D Flash LIDAR(TM) Camera (FLC) well suited for real-time...

  15. Mobile 3D tomograph

    International Nuclear Information System (INIS)

    Illerhaus, Bernhard; Goebbels, Juergen; Onel, Yener; Sauerwein, Christoph

    2008-01-01

    Mobile tomographs often have the problem that high spatial resolution is impossible owing to the position or setup of the tomograph. While the tree tomograph developed by Messrs. Isotopenforschung Dr. Sauerwein GmbH worked well in practice, it is no longer used as the spatial resolution and measuring time are insufficient for many modern applications. The paper shows that the mechanical base of the method is sufficient for 3D CT measurements with modern detectors and X-ray tubes. CT measurements with very good statistics take less than 10 min. This means that mobile systems can be used, e.g. in examinations of non-transportable cultural objects or monuments. Enhancement of the spatial resolution of mobile tomographs capable of measuring in any position is made difficult by the fact that the tomograph has moving parts and will therefore have weight shifts. With the aid of tomographies whose spatial resolution is far higher than the mechanical accuracy, a correction method is presented for direct integration of the Feldkamp algorithm [de

  16. 3D Printing and 3D Bioprinting in Pediatrics.

    Science.gov (United States)

    Vijayavenkataraman, Sanjairaj; Fuh, Jerry Y H; Lu, Wen Feng

    2017-07-13

    Additive manufacturing, commonly referred to as 3D printing, is a technology that builds three-dimensional structures and components layer by layer. Bioprinting is the use of 3D printing technology to fabricate tissue constructs for regenerative medicine from cell-laden bio-inks. 3D printing and bioprinting have huge potential in revolutionizing the field of tissue engineering and regenerative medicine. This paper reviews the application of 3D printing and bioprinting in the field of pediatrics.

  17. 3D Printing and 3D Bioprinting in Pediatrics

    OpenAIRE

    Vijayavenkataraman, Sanjairaj; Fuh, Jerry Y H; Lu, Wen Feng

    2017-01-01

    Additive manufacturing, commonly referred to as 3D printing, is a technology that builds three-dimensional structures and components layer by layer. Bioprinting is the use of 3D printing technology to fabricate tissue constructs for regenerative medicine from cell-laden bio-inks. 3D printing and bioprinting have huge potential in revolutionizing the field of tissue engineering and regenerative medicine. This paper reviews the application of 3D printing and bioprinting in the field of pediatrics.

  18. 3D display system using monocular multiview displays

    Science.gov (United States)

    Sakamoto, Kunio; Saruta, Kazuki; Takeda, Kazutoki

    2002-05-01

    A 3D head mounted display (HMD) system is useful for constructing a virtual space. The authors have researched virtual-reality systems connected with computer networks for real-time remote control and developed a low-priced real-time 3D display for building these systems. We developed a 3D HMD system using monocular multi-view displays. The 3D displaying technique of this monocular multi-view display is based on the concept of the super multi-view proposed by Kajiki at TAO (Telecommunications Advancement Organization of Japan) in 1996. Our 3D HMD has two monocular multi-view displays (used as a visual display unit) in order to display a picture to the left eye and the right eye. The left and right images are a pair of stereoscopic images for the left and right eyes, so that stereoscopic 3D images are observed.

  19. Characterization of jellyfish turning using 3D-PTV

    Science.gov (United States)

    Xu, Nicole; Dabiri, John

    2017-11-01

    Aurelia aurita are oblate, radially symmetric jellyfish that consist of a gelatinous bell and subumbrellar muscle ring, which contracts to provide motive force. Swimming is typically modeled as a purely vertical motion; however, asymmetric activations of swim pacemakers (sensory organs that innervate the muscle at eight locations around the bell margin) result in turning and more complicated swim behaviors. More recent studies have examined flow fields around turning jellyfish, but the input/output relationship between locomotive controls and swim trajectories is unclear. To address this, bell kinematics for both straight swimming and turning are obtained using 3D particle tracking velocimetry (3D-PTV) by injecting biocompatible elastomer tags into the bell, illuminating the tank with ultraviolet light, and tracking the resulting fluorescent particles in a multi-camera setup. By understanding these kinematics in both natural and externally controlled free-swimming animals, we can connect neuromuscular control mechanisms to existing flow measurements of jellyfish turning for applications in designing more energy efficient biohybrid robots and underwater vehicles. NSF GRFP.
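
    The study's calibration and tracking code is not given in the record; the following generic linear (DLT) triangulation sketch shows the kind of step a multi-camera 3D-PTV pipeline performs to turn matched particle images into a 3D position, with placeholder projection matrices.

      # Generic linear triangulation (DLT) of one tracked particle from several
      # calibrated cameras.  The projection matrices are placeholders, not the
      # study's calibration.
      import numpy as np

      def triangulate(projection_matrices, pixel_coords):
          """projection_matrices: list of 3x4 P_i; pixel_coords: list of (u, v).
          Returns the 3D point minimizing the algebraic reprojection error."""
          rows = []
          for P, (u, v) in zip(projection_matrices, pixel_coords):
              rows.append(u * P[2] - P[0])
              rows.append(v * P[2] - P[1])
          A = np.stack(rows)
          _, _, vt = np.linalg.svd(A)
          X = vt[-1]
          return X[:3] / X[3]            # dehomogenize

      # Two-camera example with synthetic projection matrices.
      P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
      P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
      X = np.array([0.1, 0.05, 2.0, 1.0])
      uv = [(P @ X)[:2] / (P @ X)[2] for P in (P1, P2)]
      print(triangulate([P1, P2], uv))   # ~ [0.1, 0.05, 2.0]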

  20. 3D printing for dummies

    CERN Document Server

    Hausman, Kalani Kirk

    2014-01-01

    Get started printing out 3D objects quickly and inexpensively! 3D printing is no longer just a figment of your imagination. This remarkable technology is coming to the masses with the growing availability of 3D printers. 3D printers create 3-dimensional layered models and they allow users to create prototypes that use multiple materials and colors.  This friendly-but-straightforward guide examines each type of 3D printing technology available today and gives artists, entrepreneurs, engineers, and hobbyists insight into the amazing things 3D printing has to offer. You'll discover methods for

  1. 3D Volume Rendering and 3D Printing (Additive Manufacturing).

    Science.gov (United States)

    Katkar, Rujuta A; Taft, Robert M; Grant, Gerald T

    2018-07-01

    Three-dimensional (3D) volume-rendered images allow 3D insight into the anatomy, facilitating surgical treatment planning and teaching. 3D printing, additive manufacturing, and rapid prototyping techniques are being used with satisfactory accuracy, mostly for diagnosis and surgical planning, followed by direct manufacture of implantable devices. The major limitation is the time and money spent generating 3D objects. Printer type, material, and build thickness are known to influence the accuracy of printed models. In implant dentistry, the use of 3D-printed surgical guides is strongly recommended to facilitate planning and reduce risk of operative complications. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. 3D game environments create professional 3D game worlds

    CERN Document Server

    Ahearn, Luke

    2008-01-01

    The ultimate resource to help you create triple-A quality art for a variety of game worlds; 3D Game Environments offers detailed tutorials on creating 3D models, applying 2D art to 3D models, and clear concise advice on issues of efficiency and optimization for a 3D game engine. Using Photoshop and 3ds Max as his primary tools, Luke Ahearn explains how to create realistic textures from photo source and uses a variety of techniques to portray dynamic and believable game worlds.From a modern city to a steamy jungle, learn about the planning and technological considerations for 3D modelin

  3. Multi-camera synchronization core implemented on USB3 based FPGA platform

    Science.gov (United States)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras as well as for applications requiring pulsed illumination it is required to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply for each of the cameras. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency being controlled directly through a PC based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of 3D stereo vision equipment smaller than 3 mm in diameter in medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
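
    The control core itself is FPGA hardware and is not shown in the record; the toy loop below only illustrates the regulation idea described above (a proportional adjustment of the supply voltage driven by the line-period error), with the gain and the voltage-to-period model invented for illustration.

      # Conceptual sketch of the frequency-locking idea: a proportional controller
      # nudges a slave camera's supply voltage so its measured line period
      # converges to the master's.  Gains and the voltage/period model are invented.
      def regulate_supply(v_now, period_measured, period_target, kp=0.002,
                          v_min=1.6, v_max=2.0):
          """Return an updated supply voltage given the line-period error (in us)."""
          error = period_measured - period_target      # positive -> camera too slow
          v_new = v_now + kp * error                   # raise voltage to speed it up
          return min(max(v_new, v_min), v_max)         # clamp to a safe supply range

      # Toy closed loop: assume the period shortens by 40 us per extra volt (made up).
      v, period = 1.8, 105.0
      for _ in range(100):
          v = regulate_supply(v, period, period_target=100.0)
          period = 105.0 - (v - 1.8) * 40.0
      print(round(v, 3), round(period, 2))             # settles near the target period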

  4. UCalMiCeL – UNIFIED INTRINSIC AND EXTRINSIC CALIBRATION OF A MULTI-CAMERA-SYSTEM AND A LASERSCANNER

    Directory of Open Access Journals (Sweden)

    M. Hillemann

    2017-08-01

    Unmanned Aerial Vehicles (UAVs) with adequate sensors enable new applications in the scope between expensive, large-scale, aircraft-carried remote sensing and time-consuming, small-scale, terrestrial surveying. To perform these applications, cameras and laserscanners are a good sensor combination, due to their complementary properties. To exploit this sensor combination the intrinsics and relative poses of the individual cameras and the relative poses of the cameras and the laserscanners have to be known. In this manuscript, we present a calibration methodology for the Unified Intrinsic and Extrinsic Calibration of a Multi-Camera-System and a Laserscanner (UCalMiCeL). The innovation of this methodology, which is an extension to the calibration of a single camera to a line laserscanner, is a unifying bundle adjustment step to ensure an optimal calibration of the entire sensor system. We use generic camera models, including pinhole, omnidirectional and fisheye cameras. For our approach, the laserscanner and each camera have to share a joint field of view, whereas the fields of view of the individual cameras may be disjoint. The calibration approach is tested with a sensor system consisting of two fisheye cameras and a line laserscanner with a range measuring accuracy of 30 mm. We evaluate the estimated relative poses between the cameras quantitatively by using an additional calibration approach for Multi-Camera-Systems based on control points which are accurately measured by a motion capture system. In the experiments, our novel calibration method achieves a relative pose estimation with a deviation below 1.8° and 6.4 mm.

  5. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks

    Directory of Open Access Journals (Sweden)

    Cuicui Zhang

    2014-12-01

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the “small sample size” (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still open two questions: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0–1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.

  6. Generic Learning-Based Ensemble Framework for Small Sample Size Face Recognition in Multi-Camera Networks.

    Science.gov (United States)

    Zhang, Cuicui; Liang, Xuefeng; Matsuyama, Takashi

    2014-12-08

    Multi-camera networks have gained great interest in video-based surveillance systems for security monitoring, access control, etc. Person re-identification is an essential and challenging task in multi-camera networks, which aims to determine if a given individual has already appeared over the camera network. Individual recognition often uses faces as a trial and requires a large number of samples during the training phase. This is difficult to fulfill due to the limitation of the camera hardware system and the unconstrained image capturing conditions. Conventional face recognition algorithms often encounter the "small sample size" (SSS) problem arising from the small number of training samples compared to the high dimensionality of the sample space. To overcome this problem, interest in the combination of multiple base classifiers has sparked research efforts in ensemble methods. However, existing ensemble methods still open two questions: (1) how to define diverse base classifiers from the small data; (2) how to avoid the diversity/accuracy dilemma occurring during ensemble. To address these problems, this paper proposes a novel generic learning-based ensemble framework, which augments the small data by generating new samples based on a generic distribution and introduces a tailored 0-1 knapsack algorithm to alleviate the diversity/accuracy dilemma. More diverse base classifiers can be generated from the expanded face space, and more appropriate base classifiers are selected for ensemble. Extensive experimental results on four benchmarks demonstrate the higher ability of our system to cope with the SSS problem compared to the state-of-the-art system.
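
    The paper's tailored 0-1 knapsack is not reproduced in the record; as a stand-in, the classic dynamic-programming knapsack below selects a subset of base classifiers by value (e.g. validation accuracy) under a weight budget (e.g. a redundancy cost). All numbers are illustrative.

      # Generic 0-1 knapsack used as a stand-in for the paper's "tailored"
      # classifier-selection step: each candidate base classifier carries a value
      # and an integer weight, and we pick the subset maximizing total value
      # within a budget.
      def knapsack_select(values, weights, budget):
          """Classic DP over integer weights; returns (best value, chosen indices)."""
          n = len(values)
          best = [[0.0] * (budget + 1) for _ in range(n + 1)]
          for i in range(1, n + 1):
              for b in range(budget + 1):
                  best[i][b] = best[i - 1][b]
                  if weights[i - 1] <= b:
                      cand = best[i - 1][b - weights[i - 1]] + values[i - 1]
                      if cand > best[i][b]:
                          best[i][b] = cand
          chosen, b = [], budget
          for i in range(n, 0, -1):                      # backtrack
              if best[i][b] != best[i - 1][b]:
                  chosen.append(i - 1)
                  b -= weights[i - 1]
          return best[n][budget], sorted(chosen)

      accuracies = [0.71, 0.69, 0.74, 0.66, 0.72]        # hypothetical values
      redundancy = [3, 2, 4, 1, 3]                       # hypothetical costs
      print(knapsack_select(accuracies, redundancy, budget=7))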

  7. The Future Is 3D

    Science.gov (United States)

    Carter, Luke

    2015-01-01

    3D printers are a way of producing a 3D model of an item from a digital file. The model builds up in successive layers of material placed by the printer controlled by the information in the computer file. In this article the author argues that 3D printers are one of the greatest technological advances of recent times. He discusses practical uses…

  8. The 3D additivist cookbook

    NARCIS (Netherlands)

    Allahyari, Morehshin; Rourke, Daniel; Rasch, Miriam

    The 3D Additivist Cookbook, devised and edited by Morehshin Allahyari & Daniel Rourke, is a free compendium of imaginative, provocative works from over 100 world-leading artists, activists and theorists. The 3D Additivist Cookbook contains .obj and .stl files for the 3D printer, as well as critical

  9. Demonstration: A smartphone 3D functional brain scanner

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Stopczynski, Arkadiusz; Larsen, Jakob Eg

    We demonstrate a fully portable 3D real-time functional brain scanner consisting of a wireless 14-channel ‘Neuroheadset’ (Emotiv EPOC) and a Nokia N900 smartphone. The novelty of our system is the ability to perform real-time functional brain imaging on a smartphone device, including stimulus...

  10. A cross-platform solution for light field based 3D telemedicine.

    Science.gov (United States)

    Wang, Gengkun; Xiang, Wei; Pickering, Mark

    2016-03-01

    Current telehealth services are dominated by conventional 2D video conferencing systems, which are limited in their capabilities in providing a satisfactory communication experience due to the lack of realism. The "immersiveness" provided by 3D technologies has the potential to promote telehealth services to a wider range of applications. However, conventional stereoscopic 3D technologies are deficient in many aspects, including low resolution and the requirement for complicated multi-camera setup and calibration, and special glasses. The advent of light field (LF) photography enables us to record light rays in a single shot and provide glasses-free 3D display with continuous motion parallax in a wide viewing zone, which is ideally suited for 3D telehealth applications. As far as our literature review suggests, there have been no reports of 3D telemedicine systems using LF technology. In this paper, we propose a cross-platform solution for a LF-based 3D telemedicine system. Firstly, a novel system architecture based on LF technology is established, which is able to capture the LF of a patient, and provide an immersive 3D display at the doctor site. For 3D modeling, we further propose an algorithm which is able to convert the captured LF to a 3D model with a high level of detail. For the software implementation on different platforms (i.e., desktop, web-based and mobile phone platforms), a cross-platform solution is proposed. Demo applications have been developed for 2D/3D video conferencing, 3D model display and edit, blood pressure and heart rate monitoring, and patient data viewing functions. The demo software can be extended to multi-discipline telehealth applications, such as tele-dentistry, tele-wound and tele-psychiatry. The proposed 3D telemedicine solution has the potential to revolutionize next-generation telemedicine technologies by providing a high quality immersive tele-consultation experience. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. A new omni-directional multi-camera system for high resolution surveillance

    Science.gov (United States)

    Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2014-05-01

    Omni-directional high resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose are based on parabolic mirrors or fisheye lenses, where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems, the image resolution is limited to a single image sensor's image resolution. Recently, the Panoptic camera approach that mimics the eyes of flying insects using multiple imagers has been presented. This approach features a novel solution for constructing a spherically arranged wide FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high resolution visible spectrum imaging and recording system inspired by the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over (17,700×4,650) pixels (82.3MP). Real-time video capturing capability is also verified at 30 fps for a resolution over (9,000×2,400) pixels (21.6MP). The next generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain such as large perimeter object tracking, very-high resolution depth map estimation and high dynamic-range imaging, which are beyond standard stitching and panorama generation methods.

  12. MAP3D: a media processor approach for high-end 3D graphics

    Science.gov (United States)

    Darsa, Lucia; Stadnicki, Steven; Basoglu, Chris

    1999-12-01

    Equator Technologies, Inc. has used a software-first approach to produce several programmable and advanced VLIW processor architectures that have the flexibility to run both traditional systems tasks and an array of media-rich applications. For example, Equator's MAP1000A is the world's fastest single-chip programmable signal and image processor targeted for digital consumer and office automation markets. The Equator MAP3D is a proposal for the architecture of the next generation of the Equator MAP family. The MAP3D is designed to achieve high-end 3D performance and a variety of customizable special effects by combining special graphics features with high performance floating-point and media processor architecture. As a programmable media processor, it offers the advantages of a completely configurable 3D pipeline--allowing developers to experiment with different algorithms and to tailor their pipeline to achieve the highest performance for a particular application. With the support of Equator's advanced C compiler and toolkit, MAP3D programs can be written in a high-level language. This allows the compiler to successfully find and exploit any parallelism in a programmer's code, thus decreasing the time to market of a given application. The ability to run an operating system makes it possible to run concurrent applications in the MAP3D chip, such as video decoding while executing the 3D pipelines, so that integration of applications is easily achieved--using real-time decoded imagery for texturing 3D objects, for instance. This novel architecture enables an affordable, integrated solution for high performance 3D graphics.

  13. 3D Spectroscopy in Astronomy

    Science.gov (United States)

    Mediavilla, Evencio; Arribas, Santiago; Roth, Martin; Cepa-Nogué, Jordi; Sánchez, Francisco

    2011-09-01

    Preface; Acknowledgements; 1. Introductory review and technical approaches Martin M. Roth; 2. Observational procedures and data reduction James E. H. Turner; 3. 3D Spectroscopy instrumentation M. A. Bershady; 4. Analysis of 3D data Pierre Ferruit; 5. Science motivation for IFS and galactic studies F. Eisenhauer; 6. Extragalactic studies and future IFS science Luis Colina; 7. Tutorials: how to handle 3D spectroscopy data Sebastian F. Sánchez, Begona García-Lorenzo and Arlette Pécontal-Rousset.

  14. 3D Elevation Program—Virtual USA in 3D

    Science.gov (United States)

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  15. 3D IBFV : Hardware-Accelerated 3D Flow Visualization

    NARCIS (Netherlands)

    Telea, Alexandru; Wijk, Jarke J. van

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique for 2D flow visualization in two main directions. First, we decompose the 3D flow visualization problem in a

  16. 3D IBFV : hardware-accelerated 3D flow visualization

    NARCIS (Netherlands)

    Telea, A.C.; Wijk, van J.J.

    2003-01-01

    We present a hardware-accelerated method for visualizing 3D flow fields. The method is based on insertion, advection, and decay of dye. To this aim, we extend the texture-based IBFV technique presented by van Wijk (2001) for 2D flow visualization in two main directions. First, we decompose the 3D
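
    The hardware-accelerated pipeline is not shown in the record; the small numpy sketch below only mimics the dye mechanism named in the abstract (insertion, advection and decay) on a 2D grid.

      # Tiny sketch of the dye mechanism (insertion, advection, decay) on a 2D grid;
      # just the core update, not the hardware-accelerated IBFV pipeline itself.
      import numpy as np

      def ibfv_step(dye, vx, vy, noise, decay=0.9, inject=0.1):
          """One advect-blend step: look backwards along the flow, fade, add noise."""
          h, w = dye.shape
          ys, xs = np.mgrid[0:h, 0:w]
          # backward lookup (semi-Lagrangian advection), clamped to the grid
          src_x = np.clip((xs - vx).round().astype(int), 0, w - 1)
          src_y = np.clip((ys - vy).round().astype(int), 0, h - 1)
          advected = dye[src_y, src_x]
          return decay * advected + inject * noise

      rng = np.random.default_rng(1)
      dye = np.zeros((64, 64))
      vx = np.ones((64, 64)) * 1.5          # uniform flow to the right (cells/step)
      vy = np.zeros((64, 64))
      for _ in range(50):
          dye = ibfv_step(dye, vx, vy, rng.random((64, 64)))
      print(dye.mean())                      # settles near inject * 0.5 / (1 - decay)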

  17. 3D for Graphic Designers

    CERN Document Server

    Connell, Ellery

    2011-01-01

    Helping graphic designers expand their 2D skills into the 3D space. The trend in graphic design is towards 3D, with the demand for motion graphics, animation, photorealism, and interactivity rapidly increasing. And with the meteoric rise of iPads, smartphones, and other interactive devices, the design landscape is changing faster than ever. 2D digital artists who need a quick and efficient way to join this brave new world will want 3D for Graphic Designers. Readers get hands-on basic training in working in the 3D space, including product design, industrial design and visualization, modeling, ani

  18. Using 3D in Visualization

    DEFF Research Database (Denmark)

    Wood, Jo; Kirschenbauer, Sabine; Döllner, Jürgen

    2005-01-01

    to display 3D imagery. The extra cartographic degree of freedom offered by using 3D is explored and offered as a motivation for employing 3D in visualization. The use of VR and the construction of virtual environments exploit navigational and behavioral realism, but become most useful when combined...... with abstracted representations embedded in a 3D space. The interactions between development of geovisualization, the technology used to implement it and the theory surrounding cartographic representation are explored. The dominance of computing technologies, driven particularly by the gaming industry...

  19. Qademah Fault 3D Survey

    KAUST Repository

    Hanafy, Sherif M.

    2014-01-01

    Objective: Collect 3D seismic data at the Qademah Fault location for (1) 3D traveltime tomography, (2) 3D surface wave migration, (3) 3D phase velocity analysis, and (4) possible reflection processing. Acquisition Date: 26–28 September 2014. Acquisition Team: Sherif, Kai, Mrinal, Bowen, Ahmed. Acquisition Layout: We used 288 receivers arranged in 12 parallel lines, each line with 24 receivers. The inline offset is 5 m and the crossline offset is 10 m. One shot is fired at each receiver location. We used a 40 kg weight drop as the seismic source, with 8 to 15 stacks at each shot location.

  20. 3D Bayesian contextual classifiers

    DEFF Research Database (Denmark)

    Larsen, Rasmus

    2000-01-01

    We extend a series of multivariate Bayesian 2-D contextual classifiers to 3-D by specifying a simultaneous Gaussian distribution for the feature vectors as well as a prior distribution of the class variables of a pixel and its 6 nearest 3-D neighbours.
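
    As a loose illustration of the contextual idea described above (not the authors' exact model), the posterior for one voxel can combine a Gaussian likelihood of its feature vector with a neighbour-driven prior; the Potts-like prior and all parameters below are invented.

      # Minimal sketch: posterior of a voxel's class from a Gaussian likelihood of
      # its feature vector and a prior driven by the classes of its 6 nearest
      # 3-D neighbours.  The prior form and parameters are invented for illustration.
      import numpy as np
      from scipy.stats import multivariate_normal

      def contextual_posterior(x, class_means, class_cov, neighbour_labels, beta=0.8):
          """Return posterior probabilities over classes for one voxel."""
          n_classes = len(class_means)
          counts = np.bincount(neighbour_labels, minlength=n_classes)
          prior = np.exp(beta * counts)                 # Potts-like neighbour prior
          prior /= prior.sum()
          likelihood = np.array([multivariate_normal.pdf(x, mean=m, cov=class_cov)
                                 for m in class_means])
          post = prior * likelihood
          return post / post.sum()

      means = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
      cov = np.eye(2)
      print(contextual_posterior(np.array([1.2, 1.0]), means, cov,
                                 neighbour_labels=np.array([1, 1, 1, 0, 1, 1])))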

  1. 3-D printers for libraries

    CERN Document Server

    Griffey, Jason

    2014-01-01

    As the maker movement continues to grow and 3-D printers become more affordable, an expanding group of hobbyists is keen to explore this new technology. In the time-honored tradition of introducing new technologies, many libraries are considering purchasing a 3-D printer. Jason Griffey, an early enthusiast of 3-D printing, has researched the marketplace and seen several systems first hand at the Consumer Electronics Show. In this report he introduces readers to the 3-D printing marketplace, covering such topics as: how fused deposition modeling (FDM) printing works; basic terminology such as build

  2. The New Realm of 3-D Vision

    Science.gov (United States)

    2002-01-01

    Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  3. Illustrative visualization of 3D city models

    Science.gov (United States)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  4. Java 3D Interactive Visualization for Astrophysics

    Science.gov (United States)

    Chae, K.; Edirisinghe, D.; Lingerfelt, E. J.; Guidry, M. W.

    2003-05-01

    We are developing a series of interactive 3D visualization tools that employ the Java 3D API. We have applied this approach initially to a simple 3-dimensional galaxy collision model (restricted 3-body approximation), with quite satisfactory results. Running either as an applet under Web browser control, or as a Java standalone application, this program permits real-time zooming, panning, and 3-dimensional rotation of the galaxy collision simulation under user mouse and keyboard control. We shall also discuss applications of this technology to 3-dimensional visualization for other problems of astrophysical interest such as neutron star mergers and the time evolution of element/energy production networks in X-ray bursts. *Managed by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725.
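
    The applet's source is not part of the record; the sketch below only illustrates the restricted 3-body flavour of such a simulation, integrating massless test stars in the gravity of two point masses (held fixed here for brevity) with a leapfrog scheme and toy constants.

      # Illustrative restricted 3-body style update: massless "stars" move in the
      # combined gravity of two galaxy point masses, integrated with leapfrog.
      # All constants are toy values; this is not the applet's code.
      import numpy as np

      def accel(pos, centers, masses, g=1.0, soft=0.05):
          a = np.zeros_like(pos)
          for c, m in zip(centers, masses):
              d = c - pos                                  # (N, 3) separations
              r2 = (d * d).sum(axis=1) + soft**2
              a += g * m * d / r2[:, None]**1.5
          return a

      rng = np.random.default_rng(2)
      stars = rng.normal(scale=0.5, size=(500, 3)) + np.array([-2.0, 0, 0])
      vel = np.cross(stars - [-2.0, 0, 0], [0, 0, 1.0]) * 0.8   # rough orbital motion
      centers = np.array([[-2.0, 0, 0], [2.0, 0, 0]])
      masses = np.array([1.0, 1.0])
      dt = 0.01
      for _ in range(1000):                                 # leapfrog (kick-drift-kick)
          vel += 0.5 * dt * accel(stars, centers, masses)
          stars += dt * vel
          vel += 0.5 * dt * accel(stars, centers, masses)
      print(stars.mean(axis=0))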

  5. Abusir 3D survey 2015

    Directory of Open Access Journals (Sweden)

    Yukinori Kawae

    2016-12-01

    In 2015, in collaboration with the Czech Institute of Egyptology, we, a Japanese consortium, initiated the Abusir 3D Survey (A-3DS) for the 3D documentation of the site's pyramids, which have not been updated since the time of the architectural investigations of Vito Maragioglio and Celeste Rinaldi in the 1960s to the 1970s. The first season of our project focused on the exterior of Neferirkare's pyramid, the largest pyramid at Abusir. By developing a strategic mathematical 3D survey plan, step-by-step 3D documentation to suit specific archaeological needs, and producing a new display method for the 3D data, we successfully measured the dimensions of the pyramid in a cost-effective way.

  6. 3D vector flow imaging

    DEFF Research Database (Denmark)

    Pihl, Michael Johannes

    The main purpose of this PhD project is to develop an ultrasonic method for 3D vector flow imaging. The motivation is to advance the field of velocity estimation in ultrasound, which plays an important role in the clinic. The velocity of blood has components in all three spatial dimensions, yet...... are (vx, vy, vz) = (-0.03, 95, 1.0) ± (9, 6, 1) cm/s compared with the expected (0, 96, 0) cm/s. Afterwards, 3D vector flow images from a cross-sectional plane of the vessel are presented. The out of plane velocities exhibit the expected 2D circular-symmetric parabolic shape. The experimental results...... verify that the 3D TO method estimates the complete 3D velocity vectors, and that the method is suitable for 3D vector flow imaging....

  7. 3D printing in dentistry.

    Science.gov (United States)

    Dawood, A; Marti Marti, B; Sauret-Jackson, V; Darwood, A

    2015-12-01

    3D printing has been hailed as a disruptive technology which will change manufacturing. Used in aerospace, defence, art and design, 3D printing is becoming a subject of great interest in surgery. The technology has a particular resonance with dentistry, and with advances in 3D imaging and modelling technologies such as cone beam computed tomography and intraoral scanning, and with the relatively long history of the use of CAD CAM technologies in dentistry, it will become of increasing importance. Uses of 3D printing include the production of drill guides for dental implants, the production of physical models for prosthodontics, orthodontics and surgery, the manufacture of dental, craniomaxillofacial and orthopaedic implants, and the fabrication of copings and frameworks for implant and dental restorations. This paper reviews the types of 3D printing technologies available and their various applications in dentistry and in maxillofacial surgery.

  8. E3D, 3-D Elastic Seismic Wave Propagation Code

    International Nuclear Information System (INIS)

    Larsen, S.; Harris, D.; Schultz, C.; Maddix, D.; Bakowsky, T.; Bent, L.

    2004-01-01

    1 - Description of program or function: E3D is capable of simulating seismic wave propagation in a 3D heterogeneous earth. Seismic waves are initiated by earthquake, explosive, and/or other sources. These waves propagate through a 3D geologic model, and are simulated as synthetic seismograms or other graphical output. 2 - Methods: The software simulates wave propagation by solving the elasto-dynamic formulation of the full wave equation on a staggered grid. The solution scheme is 4th-order accurate in space and 2nd-order accurate in time
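
    E3D itself is a 3D, 4th-order code; purely as an illustration of the staggered-grid velocity-stress idea, a 1D, 2nd-order analogue looks like this (all material parameters are toy values).

      # One-dimensional velocity-stress analogue of a staggered-grid scheme
      # (2nd-order in space and time, 1D only, for brevity).
      import numpy as np

      nx, dx, dt, nt = 400, 5.0, 1e-3, 800
      rho = np.full(nx, 2000.0)              # density (kg/m^3)
      mu = np.full(nx, 2000.0 * 2500.0**2)   # shear modulus for vs = 2500 m/s
      v = np.zeros(nx)                       # particle velocity at integer nodes
      s = np.zeros(nx - 1)                   # stress at half nodes

      for it in range(nt):
          # stress update from the velocity gradient
          s += dt * mu[:-1] * (v[1:] - v[:-1]) / dx
          # velocity update from the stress gradient (interior nodes)
          v[1:-1] += dt / rho[1:-1] * (s[1:] - s[:-1]) / dx
          # simple source: add a velocity pulse near the middle of the grid
          v[nx // 2] += dt * np.exp(-((it * dt - 0.1) / 0.02) ** 2)

      print(float(np.abs(v).max()))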

  9. Real-Time 3D Image Guidance Using a Standard LINAC: Measured Motion, Accuracy, and Precision of the First Prospective Clinical Trial of Kilovoltage Intrafraction Monitoring-Guided Gating for Prostate Cancer Radiation Therapy

    DEFF Research Database (Denmark)

    Keall, Paul J; Ng, Jin Aun; Juneja, Prabhjot

    2016-01-01

    for prostate cancer radiation therapy. In this paper we report on the measured motion accuracy and precision using real-time KIM-guided gating. METHODS AND MATERIALS: Imaging and motion information from the first 200 fractions from 6 patient prostate cancer radiation therapy volumetric modulated arc therapy...... treatments were analyzed. A 3-mm/5-second action threshold was used to trigger a gating event where the beam is paused and the couch position adjusted to realign the prostate to the treatment isocenter. To quantify the in vivo accuracy and precision, KIM was compared with simultaneously acquired k...
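
    The clinical software is not described in code; the snippet below merely sketches one plausible reading of the 3-mm/5-second action threshold quoted above (displacement above 3 mm sustained for 5 s triggers a gating event), with invented data structures.

      # Sketch of a 3 mm / 5 s action rule: if the monitored target displacement
      # exceeds 3 mm continuously for at least 5 s, trigger a gating event
      # (pause the beam, realign the couch).  Data structures are invented.
      def gating_events(samples, threshold_mm=3.0, dwell_s=5.0):
          """samples: list of (time_s, displacement_mm).  Returns trigger times."""
          triggers, over_since = [], None
          for t, d in samples:
              if d > threshold_mm:
                  if over_since is None:
                      over_since = t
                  elif t - over_since >= dwell_s:
                      triggers.append(t)         # pause beam, realign couch here
                      over_since = None          # re-arm after the correction
              else:
                  over_since = None
          return triggers

      trace = [(t * 0.5, 1.0 if t < 20 else 4.2) for t in range(60)]
      print(gating_events(trace))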

  10. A QUADTREE ORGANIZATION CONSTRUCTION AND SCHEDULING METHOD FOR URBAN 3D MODEL BASED ON WEIGHT

    OpenAIRE

    C. Yao; G. Peng; Y. Song; M. Duan

    2017-01-01

    The increase in urban 3D model precision and data quantity places higher requirements on the real-time rendering of digital city models. Improving the organization, management and scheduling of 3D model data in a 3D digital city can improve rendering effect and efficiency. Taking the complexity of urban models into account, this paper proposes a weight-based quadtree construction and scheduling method for rendering urban 3D models. The urban 3D model is divided into different rendering weigh...

  11. 3-D neutron transport benchmarks

    International Nuclear Information System (INIS)

    Takeda, T.; Ikeda, H.

    1991-03-01

    A set of 3-D neutron transport benchmark problems proposed by the Osaka University to NEACRP in 1988 has been calculated by many participants and the corresponding results are summarized in this report. The results of k-eff, control rod worth and region-averaged fluxes for the four proposed core models, calculated by using various 3-D transport codes are compared and discussed. The calculational methods used were: Monte Carlo, Discrete Ordinates (Sn), Spherical Harmonics (Pn), Nodal Transport and others. The solutions of the four core models are quite useful as benchmarks for checking the validity of 3-D neutron transport codes

  12. Handbook of 3D integration

    CERN Document Server

    Garrou , Philip; Ramm , Peter

    2014-01-01

    Edited by key figures in 3D integration and written by top authors from high-tech companies and renowned research institutions, this book covers the intricate details of 3D process technology. As such, the main focus is on silicon via formation, bonding and debonding, thinning, via reveal and backside processing, both from a technological and a materials science perspective. The last part of the book is concerned with assessing and enhancing the reliability of the 3D integrated devices, which is a prerequisite for the large-scale implementation of this emerging technology. Invaluable reading fo

  13. 3D Models of Immunotherapy

    Science.gov (United States)

    This collaborative grant is developing 3D models of both mouse and human biology to investigate aspects of therapeutic vaccination in order to answer key questions relevant to human cancer immunotherapy.

  14. AI 3D Cybug Gaming

    OpenAIRE

    Ahmed, Zeeshan

    2010-01-01

    In this short paper I briefly discuss a 3D war game based on artificial intelligence concepts, called AI WAR. Going into the details, I present the importance of the CAICL language and how this language is used in AI WAR. Moreover, I also present a designed and implemented 3D war Cybug for AI WAR using CAICL and discuss the implemented strategy to defeat its enemies during the game life.

  15. 3D Face Appearance Model

    DEFF Research Database (Denmark)

    Lading, Brian; Larsen, Rasmus; Astrom, K

    2006-01-01

    We build a 3D face shape model, including inter- and intra-shape variations, derive the analytical Jacobian of its resulting 2D rendered image, and show examples of its fitting performance with light, pose, identity, expression and texture variations.

  16. Utility of real-time prospective motion correction (PROMO) for segmentation of cerebral cortex on 3D T1-weighted imaging: Voxel-based morphometry analysis for uncooperative patients

    International Nuclear Information System (INIS)

    Igata, Natsuki; Kakeda, Shingo; Watanabe, Keita; Narimatsu, Hidekuni; Ide, Satoru; Korogi, Yukunori; Nozaki, Atsushi; Rettmann, Dan; Abe, Osamu

    2017-01-01

    To assess the utility of the motion correction method with prospective motion correction (PROMO) in a voxel-based morphometry (VBM) analysis for 'uncooperative' patient populations. High-resolution 3D T1-weighted imaging both with and without PROMO were performed in 33 uncooperative patients with Parkinson's disease (n = 11) or dementia (n = 22). We compared the grey matter (GM) volumes and cortical thickness between the scans with and without PROMO. For the mean total GM volume with the VBM analysis, the scan without PROMO showed a significantly smaller volume than that with PROMO (p < 0.05), which was caused by segmentation problems due to motion during acquisition. The whole-brain VBM analysis showed significant GM volume reductions in some regions in the scans without PROMO (familywise error corrected p < 0.05). In the cortical thickness analysis, the scans without PROMO also showed decreased cortical thickness compared to the scan with PROMO (p < 0.05). Our results with the uncooperative patients indicate that the use of PROMO can reduce misclassification during segmentation of the VBM analyses, although it may not prevent GM volume reduction. (orig.)

  17. Utility of real-time prospective motion correction (PROMO) for segmentation of cerebral cortex on 3D T1-weighted imaging: Voxel-based morphometry analysis for uncooperative patients

    Energy Technology Data Exchange (ETDEWEB)

    Igata, Natsuki; Kakeda, Shingo; Watanabe, Keita; Narimatsu, Hidekuni; Ide, Satoru; Korogi, Yukunori [University of Occupational and Environmental Health School of Medicine, Department of Radiology, Kitakyushu (Japan); Nozaki, Atsushi [MR Applications and Workflow Asia Pacific GE Healthcare Japan, Tokyo (Japan); Rettmann, Dan [MR Applications and Workflow GE Healthcare, Rochester, MN (United States); Abe, Osamu [University of Tokyo, Department of Radiology, Graduate School of Medicine, Tokyo (Japan)

    2017-08-15

    To assess the utility of the motion correction method with prospective motion correction (PROMO) in a voxel-based morphometry (VBM) analysis for 'uncooperative' patient populations. High-resolution 3D T1-weighted imaging both with and without PROMO were performed in 33 uncooperative patients with Parkinson's disease (n = 11) or dementia (n = 22). We compared the grey matter (GM) volumes and cortical thickness between the scans with and without PROMO. For the mean total GM volume with the VBM analysis, the scan without PROMO showed a significantly smaller volume than that with PROMO (p < 0.05), which was caused by segmentation problems due to motion during acquisition. The whole-brain VBM analysis showed significant GM volume reductions in some regions in the scans without PROMO (familywise error corrected p < 0.05). In the cortical thickness analysis, the scans without PROMO also showed decreased cortical thickness compared to the scan with PROMO (p < 0.05). Our results with the uncooperative patients indicate that the use of PROMO can reduce misclassification during segmentation of the VBM analyses, although it may not prevent GM volume reduction. (orig.)

  18. 3D accelerator magnet calculations using MAGNUS-3D

    International Nuclear Information System (INIS)

    Pissanetzky, S.; Miao, Y.

    1989-01-01

    The steady trend towards increased magnetic and geometric complexity in the design of accelerator magnets has caused a need for reliable 3D computer models and a better understanding of the behavior of magnetic system in three dimensions. The capabilities of the MAGNUS-3D family of programs are ideally suited to solve this class of problems and provide insight into 3D effects. MAGNUS-3D can solve any problem of magnetostatics involving permanent magnets, nonlinear ferromagnetic materials and electric conductors. MAGNUS-3D uses the finite element method and the two-scalar-potentials formulation of Maxwell's equations to obtain the solution, which can then be used interactively to obtain tables of field components at specific points or lines, plots of field lines, function graphs representing a field component plotted against a coordinate along any line in space (such as the beam line), and views of the conductors, the mesh and the magnetic bodies. The magnetic quantities that can be calculated include the force or torque on conductors or magnetic parts, the energy, the flux through a specified surface, line integrals of any field component along any line in space, and the average field or potential harmonic coefficients. We describe the programs with emphasis placed on their use for accelerator magnet design, and present an advanced example of actual calculations. (orig.)
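    The two-scalar-potentials formulation mentioned in the abstract is, in its textbook form (standard notation assumed; nothing here is specific to MAGNUS-3D):

```latex
% Magnetostatics with the two-scalar-potential split: total potential in
% current-free regions, reduced potential where source currents are present.
\begin{aligned}
&\text{current-free region:} && \mathbf{H} = -\nabla\psi, &&
  \nabla\cdot\bigl(\mu\,\nabla\psi\bigr) = 0,\\
&\text{region with conductors:} && \mathbf{H} = \mathbf{H}_s - \nabla\phi, &&
  \nabla\cdot\bigl(\mu\,(\mathbf{H}_s - \nabla\phi)\bigr) = 0,\\
&\text{where} && \mathbf{H}_s(\mathbf{r}) =
  \frac{1}{4\pi}\int \frac{\mathbf{J}(\mathbf{r}')\times(\mathbf{r}-\mathbf{r}')}
  {\lvert \mathbf{r}-\mathbf{r}' \rvert^{3}}\,\mathrm{d}V'
  &&\text{(Biot--Savart field of the coils).}
\end{aligned}
```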

  19. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation and man-made features belonging to an urban area. The demand for 3D city modeling is increasing day by day for various engineering and non-engineering applications. Generally, three main image-based approaches are used for generating virtual 3D city models: sketch-based modeling, procedural-grammar-based modeling and close range photogrammetry based modeling. A literature study shows that, to date, there is no complete solution available to create a complete 3D city model by using images, and these image-based methods also have limitations. This paper gives a new approach towards image-based virtual 3D city modeling by using close range photogrammetry. The approach is divided into three sections: data acquisition, 3D data processing and data combination. In the data acquisition process, a multi-camera setup was developed and used for video recording of an area, image frames were created from the video data, and the minimum required and most suitable video image frames were selected for 3D processing. In the second section, a 3D model of the area was created based on close range photogrammetric principles and computer vision techniques. In the third section, this 3D model was exported for adding and merging with other pieces of the large area; scaling and alignment of the 3D model were done. After applying texturing and rendering to this model, a final photo-realistic textured 3D model was created. This 3D model can be transferred into a walk-through model or movie form. Most of the processing steps are automatic, so this method is cost effective and less laborious, and the accuracy of the model is good. For this research work, the study area is the campus of the Department of Civil Engineering, Indian Institute of Technology, Roorkee, which acts as a prototype for a city. Aerial photography is restricted in many country
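    The first stage of the pipeline above (turning the multi-camera video into still frames for photogrammetric processing) could look like the hypothetical sketch below; the paper does not name its software, so OpenCV and the sampling interval are purely illustrative assumptions.

```python
# Save every n-th frame of a recorded video as a still image for SfM /
# close range photogrammetry input. Paths and interval are examples only.
import cv2

def extract_frames(video_path, out_pattern="frame_{:04d}.jpg", every_n=15):
    cap = cv2.VideoCapture(video_path)
    idx = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            cv2.imwrite(out_pattern.format(saved), frame)
            saved += 1
        idx += 1
    cap.release()
    return saved
```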

  20. From 3D view to 3D print

    Science.gov (United States)

    Dima, M.; Farisato, G.; Bergomi, M.; Viotto, V.; Magrin, D.; Greggio, D.; Farinato, J.; Marafatto, L.; Ragazzoni, R.; Piazza, D.

    2014-08-01

    In the last few years 3D printing has become more and more popular and is used in many fields, from manufacturing to industrial design, architecture, medical support and aerospace. 3D printing is an evolution of two-dimensional printing which makes it possible to obtain a solid object from a 3D model realized with 3D modelling software. The final product is obtained using an additive process, in which successive layers of material are laid down one over the other. A 3D printer allows one to realize, in a simple way, very complex shapes which would be quite difficult to produce with dedicated conventional facilities. Because the 3D print is obtained by superposing one layer on the others, no particular workflow is needed: it is sufficient to simply draw the model and send it to print. Many different kinds of 3D printers exist, based on the technology and material used for layer deposition. A common printing material is ABS plastic, a light and rigid thermoplastic polymer whose mechanical properties make it widely used in several fields, like pipe production and car interior manufacturing. I used this technology to create a 1:1 scale model of the telescope which is the hardware core of the ESA small space mission CHEOPS (CHaracterising ExOPlanets Satellite), which aims to characterize exoplanets via transit observations. The telescope has a Ritchey-Chrétien configuration with a 30 cm aperture and the launch is foreseen in 2017. In this paper, I present the different phases of the realization of such a model, focusing on the pros and cons of this kind of technology. For example, because of the finite printable volume (10×10×12 inches in the x, y and z directions respectively), it has been necessary to split the largest parts of the instrument into smaller components to be then reassembled and post-processed. A further issue is the resolution of the printed material, which is expressed in terms of layers

  1. 3D imaging, 3D printing and 3D virtual planning in endodontics.

    Science.gov (United States)

    Shah, Pratik; Chong, B S

    2018-03-01

    The adoption and adaptation of recent advances in digital technology, such as three-dimensional (3D) printed objects and haptic simulators, in dentistry have influenced teaching and/or management of cases involving implant, craniofacial, maxillofacial, orthognathic and periodontal treatments. 3D printed models and guides may help operators plan and tackle complicated non-surgical and surgical endodontic treatment and may aid skill acquisition. Haptic simulators may assist in the development of competency in endodontic procedures through the acquisition of psycho-motor skills. This review explores and discusses the potential applications of 3D printed models and guides, and haptic simulators in the teaching and management of endodontic procedures. An understanding of the pertinent technology related to the production of 3D printed objects and the operation of haptic simulators are also presented.

  2. Development of 3D browsing and interactive web system

    Science.gov (United States)

    Shi, Xiaonan; Fu, Jian; Jin, Chaolin

    2017-09-01

    In the current market, users need to download specific software or plug-ins to browse a 3D model, browsing may be unstable, and interaction with the 3D model is often not possible. In order to solve these problems, this paper presents a solution in which the model is parsed on the server side for interactive browsing: the user only needs to enter the system URL and upload a 3D model file in order to browse and operate on it. The server parses the 3D model in real time and the interactive response is fast. This follows a minimalist approach for the user and removes the current obstacles to 3D content development on the web.
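    The paper does not describe its implementation stack, so the following is only a hypothetical sketch of the idea of uploading a model file and letting the server parse it for browser-side viewing; Flask and the minimal OBJ parsing are illustrative assumptions.

```python
# Toy server-side model parser: accept an uploaded OBJ file and report its
# geometry counts. A real system would convert the geometry to a web-friendly
# format (e.g. glTF/JSON) and stream it to a browser-side renderer.
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/upload", methods=["POST"])
def upload_model():
    text = request.files["model"].read().decode("utf-8", errors="ignore")
    lines = text.splitlines()
    vertices = sum(1 for l in lines if l.startswith("v "))
    faces = sum(1 for l in lines if l.startswith("f "))
    return jsonify({"vertices": vertices, "faces": faces})

if __name__ == "__main__":
    app.run()
```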

  3. Materialedreven 3d digital formgivning

    DEFF Research Database (Denmark)

    Hansen, Flemming Tvede

    2010-01-01

    The purpose of the research project is, first, to support the ceramicist in working experimentally with digital form-giving and, second, to contribute to an interdisciplinary discourse on the use of digital form-giving. The research project focuses on 3D form-giving and thereby on 3D digital form-giving and Rapid Prototyping (RP). RP is an umbrella term for a range of techniques that make it possible to transfer a digital form into 3D physical form. The research project concentrates on two overarching research questions. The first concerns how knowledge and experience within the ceramics field can be exploited in relation to 3D digital form-giving. The second concerns what such an approach can contribute, and how it can be exploited in a dynamic interplay with the ceramic material in the shaping of 3D ceramic artefacts. Material-driven form-giving is characterised by a...

  4. 3D future internet media

    CERN Document Server

    Dagiuklas, Tasos

    2014-01-01

    This book describes recent innovations in 3D media and technologies, with coverage of 3D media capturing, processing, encoding, and adaptation, networking aspects for 3D Media, and quality of user experience (QoE). The main contributions are based on the results of the FP7 European Projects ROMEO, which focus on new methods for the compression and delivery of 3D multi-view video and spatial audio, as well as the optimization of networking and compression jointly across the Future Internet (www.ict-romeo.eu). The delivery of 3D media to individual users remains a highly challenging problem due to the large amount of data involved, diverse network characteristics and user terminal requirements, as well as the user’s context such as their preferences and location. As the number of visual views increases, current systems will struggle to meet the demanding requirements in terms of delivery of constant video quality to both fixed and mobile users. ROMEO will design and develop hybrid-networking solutions that co...

  5. Novel 3D media technologies

    CERN Document Server

    Dagiuklas, Tasos

    2015-01-01

    This book describes recent innovations in 3D media and technologies, with coverage of 3D media capturing, processing, encoding, and adaptation, networking aspects for 3D Media, and quality of user experience (QoE). The contributions are based on the results of the FP7 European Project ROMEO, which focuses on new methods for the compression and delivery of 3D multi-view video and spatial audio, as well as the optimization of networking and compression jointly across the future Internet. The delivery of 3D media to individual users remains a highly challenging problem due to the large amount of data involved, diverse network characteristics and user terminal requirements, as well as the user’s context such as their preferences and location. As the number of visual views increases, current systems will struggle to meet the demanding requirements in terms of delivery of consistent video quality to fixed and mobile users. ROMEO will present hybrid networking solutions that combine the DVB-T2 and DVB-NGH broadcas...

  6. Annular dynamics of memo3D annuloplasty ring evaluated by 3D transesophageal echocardiography.

    Science.gov (United States)

    Nishi, Hiroyuki; Toda, Koichi; Miyagawa, Shigeru; Yoshikawa, Yasushi; Fukushima, Satsuki; Yoshioka, Daisuke; Sawa, Yoshiki

    2018-04-01

    We assessed the mitral annular motion after mitral valve repair with the Sorin Memo 3D® (Sorin Group Italia S.r.L., Saluggia, Italy), a unique complete semirigid annuloplasty ring intended to restore the systolic profile of the mitral annulus while adapting to the physiologic dynamism of the annulus, using transesophageal real-time three-dimensional echocardiography. Seventeen patients (12 male; mean age 60.4 ± 14.9 years) who underwent mitral annuloplasty using the Memo 3D ring were investigated. Mitral annular motion was assessed using QLAB® version 8, allowing for a full evaluation of the mitral annulus dynamics. The mitral annular dimensions were measured throughout the cardiac cycle using 4D MV assessment2®, while saddle shape was assessed through sequential measurements by RealView®. The saddle-shape configuration of the mitral annulus and the posterior and anterior leaflet motion could be observed during systole and diastole. The mitral annular area changed during the cardiac cycle by 5.7 ± 1.8%. The circumference length and diameter also changed throughout the cardiac cycle, and the annular height was significantly higher in mid-systole than in mid-diastole. The Memo 3D ring maintained a physiological saddle-shape configuration throughout the cardiac cycle. Real-time three-dimensional echocardiography analysis confirmed the motion and flexibility of the Memo 3D ring upon implantation.

  7. Initial Work on the Characterization of Additive Manufacturing (3D Printing Using Software Image Analysis

    Directory of Open Access Journals (Sweden)

    Jeremy Straub

    2015-04-01

    A current challenge in additive manufacturing (commonly known as 3D printing) is the detection of defects. Detection of defects (or the lack thereof) in bespoke industrial manufacturing may be safety critical and may reduce or eliminate the need for testing of printed objects. In consumer and prototype printing, early defect detection may allow the printer to take corrective measures (or pause printing and alert a user), preventing the need to re-print objects after a small error compounds. This paper considers one approach to defect detection. It characterizes the efficacy of using a multi-camera system and image processing software to assess printing progress (thus detecting completion-failure defects) and quality. The potential applications and extrapolations of this type of system are also discussed.
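    One simple way to score printing progress from a fixed camera, in the spirit of the paper, is to compare the current frame with a reference image of the expected state; the reference image, threshold and file handling below are illustrative assumptions, not the authors' actual pipeline.

```python
# Fraction of pixels that differ noticeably between a camera frame and a
# reference image of the expected print state (0.0 = matches, 1.0 = all differ).
import cv2
import numpy as np

def defect_score(frame_path, reference_path, diff_threshold=40):
    frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    diff = cv2.absdiff(frame, ref)
    return np.count_nonzero(diff > diff_threshold) / diff.size
```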

  8. Modification of 3D milling machine to 3D printer

    OpenAIRE

    Taska, Abraham

    2014-01-01

    This thesis deals with the conversion of an engraving milling machine into a 3D printer. The first part covers possible 3D printing technologies and how they could be used in the conversion. Suitable components for the conversion are then described and selected. The next part implements the control of the bed heating, the nozzle and the filament feed using Beckhoff's TwinCAT software on an industrial PC. The result of the work should be a working 3D printer.

  9. Aspects of defects in 3d-3d correspondence

    International Nuclear Information System (INIS)

    Gang, Dongmin; Kim, Nakwoo; Romo, Mauricio; Yamazaki, Masahito

    2016-01-01

    In this paper we study supersymmetric co-dimension 2 and 4 defects in the compactification of the 6d (2,0) theory of type A_{N−1} on a 3-manifold M. The so-called 3d-3d correspondence is a relation between complexified Chern-Simons theory (with gauge group SL(N,ℂ)) on M and a 3d N=2 theory T_N[M]. We study this correspondence in the presence of supersymmetric defects, which are knots/links inside the 3-manifold. Our study employs a number of different methods: state-integral models for complex Chern-Simons theory, cluster algebra techniques, domain wall theory T[SU(N)], 5d N=2 SYM, and also supergravity analysis through holography. These methods are complementary and we find agreement between them. In some cases the results lead to highly non-trivial predictions on the partition function. Our discussion includes a general expression for the cluster partition function, which can be used to compute in the presence of maximal and a certain class of non-maximal punctures when N>2. We also highlight the non-Abelian description of the 3d N=2 T_N[M] theory with defect included, when such a description is available. This paper is a companion to our shorter paper http://dx.doi.org/10.1088/1751-8113/49/30/30LT02, which summarizes our main results.

  10. Stereoscopic 3D graphics generation

    Science.gov (United States)

    Li, Zhi; Liu, Jianping; Zan, Y.

    1997-05-01

    Stereoscopic display technology is one of the key techniques in areas such as simulation, multimedia, entertainment and virtual reality, and stereoscopic 3D graphics generation is an important part of a stereoscopic 3D display system. In this paper, we first describe the principle of stereoscopic display and summarize some methods for generating stereoscopic 3D graphics. Secondly, to overcome the problems of methods based on user-defined models (such as inconvenience and long modification cycles), we put forward a method based on vector graphics file definitions. This lets us design more directly, modify the model simply and easily, generate graphics more conveniently and, furthermore, make full use of graphics accelerator cards. Finally, we discuss how to speed up the generation.
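    A minimal sketch of one common way to produce a stereo pair (offsetting the eye position along the camera's right vector) is shown below; the paper's vector-graphics-file method is not reproduced, and the interocular distance is an assumed value.

```python
# Compute left/right eye positions for stereoscopic rendering by shifting the
# camera along its right vector by half the interocular distance each way.
import numpy as np

def stereo_eyes(eye, target, up, ipd=0.064):
    eye, target, up = map(np.asarray, (eye, target, up))
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    return eye - right * ipd / 2, eye + right * ipd / 2

left, right = stereo_eyes(eye=[0, 1.7, 5], target=[0, 1.7, 0], up=[0, 1, 0])
# Render the scene once from each position and present each image to the
# corresponding eye.
```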

  11. 3D Printed Bionic Nanodevices.

    Science.gov (United States)

    Kong, Yong Lin; Gupta, Maneesh K; Johnson, Blake N; McAlpine, Michael C

    2016-06-01

    The ability to three-dimensionally interweave biological and functional materials could enable the creation of bionic devices possessing unique and compelling geometries, properties, and functionalities. Indeed, interfacing high performance active devices with biology could impact a variety of fields, including regenerative bioelectronic medicines, smart prosthetics, medical robotics, and human-machine interfaces. Biology, from the molecular scale of DNA and proteins, to the macroscopic scale of tissues and organs, is three-dimensional, often soft and stretchable, and temperature sensitive. This renders most biological platforms incompatible with the fabrication and materials processing methods that have been developed and optimized for functional electronics, which are typically planar, rigid and brittle. A number of strategies have been developed to overcome these dichotomies. One particularly novel approach is the use of extrusion-based multi-material 3D printing, which is an additive manufacturing technology that offers a freeform fabrication strategy. This approach addresses the dichotomies presented above by (1) using 3D printing and imaging for customized, hierarchical, and interwoven device architectures; (2) employing nanotechnology as an enabling route for introducing high performance materials, with the potential for exhibiting properties not found in the bulk; and (3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. Further, 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This blending of 3D printing, novel nanomaterial properties, and 'living' platforms may enable next-generation bionic systems. In this review, we highlight this synergistic integration of the unique properties of nanomaterials with the

  12. 3D Printed Bionic Nanodevices

    Science.gov (United States)

    Kong, Yong Lin; Gupta, Maneesh K.; Johnson, Blake N.; McAlpine, Michael C.

    2016-01-01

    Summary The ability to three-dimensionally interweave biological and functional materials could enable the creation of bionic devices possessing unique and compelling geometries, properties, and functionalities. Indeed, interfacing high performance active devices with biology could impact a variety of fields, including regenerative bioelectronic medicines, smart prosthetics, medical robotics, and human-machine interfaces. Biology, from the molecular scale of DNA and proteins, to the macroscopic scale of tissues and organs, is three-dimensional, often soft and stretchable, and temperature sensitive. This renders most biological platforms incompatible with the fabrication and materials processing methods that have been developed and optimized for functional electronics, which are typically planar, rigid and brittle. A number of strategies have been developed to overcome these dichotomies. One particularly novel approach is the use of extrusion-based multi-material 3D printing, which is an additive manufacturing technology that offers a freeform fabrication strategy. This approach addresses the dichotomies presented above by (1) using 3D printing and imaging for customized, hierarchical, and interwoven device architectures; (2) employing nanotechnology as an enabling route for introducing high performance materials, with the potential for exhibiting properties not found in the bulk; and (3) 3D printing a range of soft and nanoscale materials to enable the integration of a diverse palette of high quality functional nanomaterials with biology. Further, 3D printing is a multi-scale platform, allowing for the incorporation of functional nanoscale inks, the printing of microscale features, and ultimately the creation of macroscale devices. This blending of 3D printing, novel nanomaterial properties, and ‘living’ platforms may enable next-generation bionic systems. In this review, we highlight this synergistic integration of the unique properties of nanomaterials with

  13. Ideal 3D asymmetric concentrator

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Botella, Angel [Departamento Fisica Aplicada a los Recursos Naturales, Universidad Politecnica de Madrid, E.T.S.I. de Montes, Ciudad Universitaria s/n, 28040 Madrid (Spain); Fernandez-Balbuena, Antonio Alvarez; Vazquez, Daniel; Bernabeu, Eusebio [Departamento de Optica, Universidad Complutense de Madrid, Fac. CC. Fisicas, Ciudad Universitaria s/n, 28040 Madrid (Spain)

    2009-01-15

    Nonimaging optics is a field devoted to the design of optical components for applications such as solar concentration or illumination. In this field, many different techniques have been used for producing reflective and refractive optical devices, including reverse engineering techniques. In this paper we apply photometric field theory and elliptic ray bundles method to study 3D asymmetric - without rotational or translational symmetry - concentrators, which can be useful components for nontracking solar applications. We study the one-sheet hyperbolic concentrator and we demonstrate its behaviour as ideal 3D asymmetric concentrator. (author)

  14. Markerless 3D Face Tracking

    DEFF Research Database (Denmark)

    Walder, Christian; Breidt, Martin; Bulthoff, Heinrich

    2009-01-01

    We present a novel algorithm for the markerless tracking of deforming surfaces such as faces. We acquire a sequence of 3D scans along with color images at 40Hz. The data is then represented by implicit surface and color functions, using a novel partition-of-unity type method of efficiently...... the scanned surface, using the variation of both shape and color as features in a dynamic energy minimization problem. Our prototype system yields high-quality animated 3D models in correspondence, at a rate of approximately twenty seconds per timestep. Tracking results for faces and other objects...

  15. 3D Terahertz Beam Profiling

    DEFF Research Database (Denmark)

    Pedersen, Pernille Klarskov; Strikwerda, Andrew; Jepsen, Peter Uhd

    2013-01-01

    We present a characterization of THz beams generated in both a two-color air plasma and in a LiNbO3 crystal. Using a commercial THz camera, we record intensity images as a function of distance through the beam waist, from which we extract 2D beam profiles and visualize our measurements into 3D beam...

  16. 3D Printing: Exploring Capabilities

    Science.gov (United States)

    Samuels, Kyle; Flowers, Jim

    2015-01-01

    As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…

  17. 3D Pit Stop Printing

    Science.gov (United States)

    Wright, Lael; Shaw, Daniel; Gaidds, Kimberly; Lyman, Gregory; Sorey, Timothy

    2018-01-01

    Although solving an engineering design project problem with limited resources or structural capabilities of materials can be part of the challenge, students making their own parts can support creativity. The authors of this article found an exciting solution: 3D printers are not only one of several tools for making but also facilitate a creative…

  18. Collaboration system for simulation using commercial Web3D

    International Nuclear Information System (INIS)

    Okamoto, Koji; Ohkubo, Kohei

    2004-01-01

    Web-3D systems are widely used on the internet and can display 3D environments easily and in a user-friendly way. In order to develop a network collaboration system, a Web-3D system is used as the front end of the visualization tool. The 3D geometries are transferred from the server over HTTP using Viewpoint, one of the commercial Web-3D products. The simulation results are transferred directly to the client over a TCP/IP socket with Java. Since Viewpoint can be controlled from Java, the transferred simulation data are displayed on the web in real time. The multi-client design enables visualization of real-time simulation results at remote sites: the same results are shown on remote web sites simultaneously, which means remote collaboration is achievable for real-time simulation. The system also has a feedback mechanism that controls simulation parameters remotely. The key features of this prototype collaboration system are discussed using Viewpoint as the front end. (author)
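    The streaming idea described above can be illustrated with the hedged sketch below; the original system uses Java and the Viewpoint plug-in, so this Python stand-in only shows a server pushing simulation results to connected clients over TCP sockets.

```python
# Toy result broadcaster: clients connect over TCP and receive one JSON line
# per simulation time step. Port and message format are arbitrary choices.
import json
import socket
import threading

clients = []

def accept_clients(port=9000):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", port))
    srv.listen()
    while True:
        conn, _ = srv.accept()
        clients.append(conn)

def broadcast(result):
    """Send one time step of simulation results to every connected client."""
    msg = (json.dumps(result) + "\n").encode()
    for conn in list(clients):
        try:
            conn.sendall(msg)
        except OSError:
            clients.remove(conn)

threading.Thread(target=accept_clients, daemon=True).start()
```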

  19. Concept of Indoor 3D-Route UAV Scheduling System

    DEFF Research Database (Denmark)

    Khosiawan, Yohanes; Nielsen, Izabela Ewa; Do, Ngoc Ang Dung

    2016-01-01

    environment. On top of that, the multi-source productive best-first-search concept also supports efficient real-time scheduling in response to uncertain events. Without human intervention, the proposed work provides an automatic scheduling system for the UAV routing problem in a 3D indoor environment....
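    A generic best-first search over a 3D grid, shown below with a priority queue, is a simplified stand-in for the scheduling concept named in the abstract; the cost model, grid representation and 6-connectivity are assumptions rather than the paper's algorithm.

```python
# Greedy best-first route through a 3-D grid: expand the node closest to the
# goal (straight-line heuristic) first; `blocked` is a set of occupied cells.
import heapq

def best_first_route(start, goal, blocked, bounds):
    def h(p):
        return sum((a - b) ** 2 for a, b in zip(p, goal)) ** 0.5

    frontier, came_from = [(h(start), start)], {start: None}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        x, y, z = cur
        for nxt in [(x+1,y,z), (x-1,y,z), (x,y+1,z), (x,y-1,z), (x,y,z+1), (x,y,z-1)]:
            if nxt in came_from or nxt in blocked:
                continue
            if not all(0 <= c < b for c, b in zip(nxt, bounds)):
                continue
            came_from[nxt] = cur
            heapq.heappush(frontier, (h(nxt), nxt))
    return None

route = best_first_route((0, 0, 0), (4, 3, 2), blocked={(1, 0, 0)}, bounds=(5, 5, 5))
```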

  20. DYNA3D2000*, Explicit 3-D Hydrodynamic FEM Program

    International Nuclear Information System (INIS)

    Lin, J.

    2002-01-01

    1 - Description of program or function: DYNA3D2000 is a nonlinear explicit finite element code for analyzing 3-D structures and solid continuum. The code is vectorized and available on several computer platforms. The element library includes continuum, shell, beam, truss and spring/damper elements to allow maximum flexibility in modeling physical problems. Many materials are available to represent a wide range of material behavior, including elasticity, plasticity, composites, thermal effects and rate dependence. In addition, DYNA3D has a sophisticated contact interface capability, including frictional sliding, single surface contact and automatic contact generation. 2 - Method of solution: Discretization of a continuous model transforms partial differential equations into algebraic equations. A numerical solution is then obtained by solving these algebraic equations through a direct time marching scheme. 3 - Restrictions on the complexity of the problem: Recent software improvements have eliminated most of the user identified limitations with dynamic memory allocation and a very large format description that has pushed potential problem sizes beyond the reach of most users. The dominant restrictions remain in code execution speed and robustness, which the developers constantly strive to improve
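    The "direct time marching scheme" mentioned above is, for explicit codes of this family, typically the central-difference update shown schematically below (standard notation; this is the generic textbook form rather than anything DYNA3D-specific):

```latex
% Explicit central-difference (leapfrog) integration of M a = f_ext - f_int,
% subject to the usual Courant-type stability limit on the time step.
\begin{aligned}
\mathbf{a}_n &= \mathbf{M}^{-1}\bigl(\mathbf{f}^{\,\mathrm{ext}}_n - \mathbf{f}^{\,\mathrm{int}}_n\bigr),\\
\mathbf{v}_{n+\frac{1}{2}} &= \mathbf{v}_{n-\frac{1}{2}} + \Delta t\,\mathbf{a}_n,\\
\mathbf{u}_{n+1} &= \mathbf{u}_n + \Delta t\,\mathbf{v}_{n+\frac{1}{2}},
\qquad \Delta t \lesssim \frac{\ell_{\min}}{c}.
\end{aligned}
```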

  1. Computer-controlled 3-D treatment delivery

    International Nuclear Information System (INIS)

    Fraass, Benedick A.

    1995-01-01

    -controlled scanned beam treatments will also be discussed. CCRT-related approaches to treatment plan generation and transfer, accelerator control systems, treatment delivery, verification, documentation and charting will also be discussed, including the importance of real-time portal imaging for conformal therapy. The potential benefits of 3-D computer-controlled conformal treatment delivery will be illustrated with results from on-going clinical dose escalation and normal tissue complication studies. Conclusion: A large amount of interest in computer-controlled conformal treatment delivery techniques has developed in recent years. This presentation will attempt to summarize the current status of clinical and research work in 3-D computer-controlled conformal therapy treatment techniques. Particular attention is paid to issues related to implementation and clinical use of this developing treatment modality

  2. 3-D Discrete Analytical Ridgelet Transform

    OpenAIRE

    Helbert , David; Carré , Philippe; Andrès , Éric

    2006-01-01

    In this paper, we propose an implementation of the 3-D Ridgelet transform: the 3-D discrete analytical Ridgelet transform (3-D DART). This transform uses the Fourier strategy for the computation of the associated 3-D discrete Radon transform. The innovative step is the definition of a discrete 3-D transform with the discrete analytical geometry theory by the construction of 3-D discrete analytical lines in the Fourier domain. We propose two types of 3-D discrete lines:...

  3. 3D pulsed chaos lidar system.

    Science.gov (United States)

    Cheng, Chih-Hao; Chen, Chih-Ying; Chen, Jun-Da; Pan, Da-Kung; Ting, Kai-Ting; Lin, Fan-Yi

    2018-04-30

    We develop an unprecedented 3D pulsed chaos lidar system for potential intelligent machinery applications. Benefiting from the random nature of chaos, conventional CW chaos lidars already possess excellent anti-jamming and anti-interference capabilities and have no range ambiguity. In our system, we further employ self-homodyning and time gating to generate a pulsed homodyned chaos to boost the energy-utilization efficiency. Compared to the original chaos, we show that the pulsed homodyned chaos improves the detection SNR by more than 20 dB. With a sampling rate of just 1.25 GS/s that has a native sampling spacing of 12 cm, we successfully achieve millimeter-level accuracy and precision in ranging. Compared with two commercial lidars tested side-by-side, namely the pulsed Spectroscan and the random-modulation continuous-wave Lidar-lite, the pulsed chaos lidar that is in compliance with the class-1 eye-safe regulation shows significantly better precision and a much longer detection range up to 100 m. Moreover, by employing a 2-axis MEMS mirror for active laser scanning, we also demonstrate real-time 3D imaging with errors of less than 4 mm in depth.
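    Chaos lidars conventionally estimate range from the lag of the cross-correlation peak between the returned and reference waveforms; the sketch below is a generic, synthetic-data illustration of that step only (the paper's pulsed homodyning and time gating are not reproduced, and the signal parameters are assumptions).

```python
# Correlation-based range estimate from a synthetic chaotic waveform sampled at
# 1.25 GS/s (the rate quoted in the abstract). The echo is a delayed, attenuated,
# noisy copy of the reference; the peak lag gives the round-trip delay.
import numpy as np

fs, c = 1.25e9, 3e8
rng = np.random.default_rng(0)

ref = rng.standard_normal(8192)                  # stand-in chaotic waveform
true_delay = 417                                 # samples: ~50 m one-way range
echo = 0.05 * np.roll(ref, true_delay) + 0.5 * rng.standard_normal(ref.size)

corr = np.correlate(echo, ref, mode="full")
lag = corr.argmax() - (ref.size - 1)             # delay at the correlation peak
range_m = lag / fs * c / 2                       # one-way range from round-trip time
print(f"estimated range: {range_m:.2f} m")
```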

  4. 3D integrated superconducting qubits

    Science.gov (United States)

    Rosenberg, D.; Kim, D.; Das, R.; Yost, D.; Gustavsson, S.; Hover, D.; Krantz, P.; Melville, A.; Racz, L.; Samach, G. O.; Weber, S. J.; Yan, F.; Yoder, J. L.; Kerman, A. J.; Oliver, W. D.

    2017-10-01

    As the field of quantum computing advances from the few-qubit stage to larger-scale processors, qubit addressability and extensibility will necessitate the use of 3D integration and packaging. While 3D integration is well-developed for commercial electronics, relatively little work has been performed to determine its compatibility with high-coherence solid-state qubits. Of particular concern, qubit coherence times can be suppressed by the requisite processing steps and close proximity of another chip. In this work, we use a flip-chip process to bond a chip with superconducting flux qubits to another chip containing structures for qubit readout and control. We demonstrate that high qubit coherence (T1, T2,echo > 20 μs) is maintained in a flip-chip geometry in the presence of galvanic, capacitive, and inductive coupling between the chips.

  5. 3D Printed Robotic Hand

    Science.gov (United States)

    Pizarro, Yaritzmar Rosario; Schuler, Jason M.; Lippitt, Thomas C.

    2013-01-01

    Dexterous robotic hands are changing the way robots and humans interact and use common tools. Unfortunately, the complexity of the joints and actuations drive up the manufacturing cost. Some cutting edge and commercially available rapid prototyping machines now have the ability to print multiple materials and even combine these materials in the same job. A 3D model of a robotic hand was designed using Creo Parametric 2.0. Combining "hard" and "soft" materials, the model was printed on the Object Connex350 3D printer with the purpose of resembling as much as possible the human appearance and mobility of a real hand while needing no assembly. After printing the prototype, strings where installed as actuators to test mobility. Based on printing materials, the manufacturing cost of the hand was $167, significantly lower than other robotic hands without the actuators since they have more complex assembly processes.

  6. Mortars for 3D printing

    Directory of Open Access Journals (Sweden)

    Demyanenko Olga

    2018-01-01

    The paper is aimed at developing scientifically proven compositions of mortars for 3D printing, modified by a peat-based admixture, with improved operational characteristics. The paper outlines the results of experimental research on hardened cement paste and concrete mixture with the use of the modifying admixture MT-600 (thermally modified peat). It is found that the strength of hardened cement paste increases at early age when using finely dispersed admixtures, which is the key factor for the formation of the construction and technical specifications of concrete for 3D printing technologies. The compositions of the new formations of hardened cement paste modified by the MT-600 admixture were obtained, which suggests the possibility of their physico-chemical interaction while hardening.

  7. Automated 3-D Radiation Mapping

    International Nuclear Information System (INIS)

    Tarpinian, J. E.

    1991-01-01

    This work describes an automated radiation detection and imaging system which combines several state-of-the-art technologies to produce a portable but very powerful visualization tool for planning work in radiation environments. The system combines a radiation detection system, a computerized radiation imaging program, and computerized 3-D modeling to automatically locate and map radiation fields: measurements are automatically collected and imaging techniques are used to produce colored 'isodose' images of the measured radiation fields. The isodose lines from the images are then superimposed over the 3-D model of the area. The final display shows the various components in a room and their associated radiation fields. The use of an automated radiation detection system increases the quality of the radiation survey measurements obtained. The additional use of a three-dimensional display allows easier visualization of the area and the associated radiological conditions than two-dimensional sketches

  8. Forensic 3D Scene Reconstruction

    International Nuclear Information System (INIS)

    LITTLE, CHARLES Q.; PETERS, RALPH R.; RIGDON, J. BRIAN; SMALL, DANIEL E.

    1999-01-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene

  9. 3D neutron transport modelization

    International Nuclear Information System (INIS)

    Warin, X.

    1996-12-01

    Some nodal methods to solve the transport equation in 3D are presented. Two nodal methods presented at an OCDE congress are described: a first one is a low degree one called RTN0; a second one is a high degree one called BDM1. The two methods can be made faster with a totally consistent DSA. Some results of parallelization show that: 98% of the time is spent in sweeps; transport sweeps are easily parallelized. (K.A.)

  10. 3D Printing A Survey

    Directory of Open Access Journals (Sweden)

    Muhammad Zulkifl Hasan

    2017-08-01

    Solid freeform fabrication (SFF) systems have been developed to enhance printing using different strategies, such as piezo nozzle control in multi-nozzle inkjet printers or the STL format with slicing data. These approaches are used to reduce the cost and improve the speed of printing, although some techniques ultimately take a long time because of additional processes such as drying the print. This survey concentrates on SFF systems that use UV resin for 3D printing.

  11. 3D neutron transport modelization

    Energy Technology Data Exchange (ETDEWEB)

    Warin, X.

    1996-12-01

    Some nodal methods to solve the transport equation in 3D are presented. Two nodal methods presented at an OCDE congress are described: a first one is a low degree one called RTN0; a second one is a high degree one called BDM1. The two methods can be made faster with a totally consistent DSA. Some results of parallelization show that: 98% of the time is spent in sweeps; transport sweeps are easily parallelized. (K.A.). 10 refs.

  12. Conducting polymer 3D microelectrodes

    DEFF Research Database (Denmark)

    Sasso, Luigi; Vazquez, Patricia; Vedarethinam, Indumathi

    2010-01-01

    Conducting polymer 3D microelectrodes have been fabricated for possible future neurological applications. A combination of micro-fabrication techniques and chemical polymerization methods has been used to create pillar electrodes in polyaniline and polypyrrole. The thin polymer films obtained...... showed uniformity and good adhesion to both horizontal and vertical surfaces. Electrodes in combination with metal/conducting polymer materials have been characterized by cyclic voltammetry and the presence of the conducting polymer film has been shown to increase the electrochemical activity when compared...

  13. [Real time 3D echocardiography]

    Science.gov (United States)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is an old concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, the spatial localisation and orientation of which require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matrix emitter-receiver probe with parallel data processing provides instantaneous acquisition of a pyramidal 64° x 64° volume. The image is displayed in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.

  14. 3D treatment planning systems.

    Science.gov (United States)

    Saw, Cheng B; Li, Sicong

    2018-01-01

    Three-dimensional (3D) treatment planning systems have evolved and become crucial components of modern radiation therapy. The systems are computer-aided designing or planning softwares that speed up the treatment planning processes to arrive at the best dose plans for the patients undergoing radiation therapy. Furthermore, the systems provide new technology to solve problems that would not have been considered without the use of computers such as conformal radiation therapy (CRT), intensity-modulated radiation therapy (IMRT), and volumetric modulated arc therapy (VMAT). The 3D treatment planning systems vary amongst the vendors and also the dose delivery systems they are designed to support. As such these systems have different planning tools to generate the treatment plans and convert the treatment plans into executable instructions that can be implemented by the dose delivery systems. The rapid advancements in computer technology and accelerators have facilitated constant upgrades and the introduction of different and unique dose delivery systems than the traditional C-arm type medical linear accelerators. The focus of this special issue is to gather relevant 3D treatment planning systems for the radiation oncology community to keep abreast of technology advancement by assess the planning tools available as well as those unique "tricks or tips" used to support the different dose delivery systems. Copyright © 2018 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights reserved.

  15. Compact 3D quantum memory

    Science.gov (United States)

    Xie, Edwar; Deppe, Frank; Renger, Michael; Repp, Daniel; Eder, Peter; Fischer, Michael; Goetz, Jan; Pogorzalek, Stefan; Fedorov, Kirill G.; Marx, Achim; Gross, Rudolf

    2018-05-01

    Superconducting 3D microwave cavities offer state-of-the-art coherence times and a well-controlled environment for superconducting qubits. In order to realize at the same time fast readout and long-lived quantum information storage, one can couple the qubit to both a low-quality readout and a high-quality storage cavity. However, such systems are bulky compared to their less coherent 2D counterparts. A more compact and scalable approach is achieved by making use of the multimode structure of a 3D cavity. In our work, we investigate such a device where a transmon qubit is capacitively coupled to two modes of a single 3D cavity. External coupling is engineered so that the memory mode has an about 100 times larger quality factor than the readout mode. Using an all-microwave second-order protocol, we realize a lifetime enhancement of the stored state over the qubit lifetime by a factor of 6 with a fidelity of approximately 80% determined via quantum process tomography. We also find that this enhancement is not limited by fundamental constraints.

  16. 3D Graphics with Spreadsheets

    Directory of Open Access Journals (Sweden)

    Jan Benacka

    2009-06-01

    In this article, the formulas for orthographic parallel projection of 3D bodies onto the computer screen are derived using secondary school vector algebra. The spreadsheet implementation is demonstrated in six applications that project bodies of increasing intricacy - a convex body (cube) with non-solved visibility, convex bodies (cube, chapel) with solved visibility, a coloured convex body (chapel) with solved visibility, and a coloured non-convex body (church) with solved visibility. The projections can be revolved in the horizontal and vertical planes, and they are changeable in size. The examples show an unusual way of using spreadsheets as a 3D computer graphics tool. The applications can serve as a simple introduction to the general principles of computer graphics, to graphics with spreadsheets, and as a tool for exercising stereoscopic vision. The presented approach is usable for visualising 3D scenes within some topics of secondary school curricula, such as solid geometry (angles and distances of lines and planes within simple bodies) or analytic geometry in space (angles and distances of lines and planes in E3), and even at university level within calculus for visualising graphs of z = f(x,y) functions. Examples are pictured.
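    One common form of the revolvable orthographic projection the article derives (rotate by an azimuth angle about the vertical axis, tilt by an elevation angle, then drop the depth coordinate) is shown below; the article's own notation may differ.

```latex
% Screen coordinates (u, v) of a point (x, y, z) under a revolvable
% orthographic parallel projection with azimuth \varphi and elevation \vartheta:
\begin{aligned}
u &= -x\sin\varphi + y\cos\varphi,\\
v &= -\bigl(x\cos\varphi + y\sin\varphi\bigr)\sin\vartheta + z\cos\vartheta .
\end{aligned}
```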

  17. 3D composite image, 3D MRI, 3D SPECT, hydrocephalus

    International Nuclear Information System (INIS)

    Mito, T.; Shibata, I.; Sugo, N.; Takano, M.; Takahashi, H.

    2002-01-01

    The three-dimensional (3D) SPECT imaging technique we have studied and published for the past several years is an analytical tool that permits visual expression of the cerebral circulation profile in various cerebral diseases. The greatest drawback of SPECT is that the limited precision of its spatial resolution makes intracranial localization impossible. In 3D SPECT imaging, intracranial volume and morphology may vary with the threshold established. To solve this problem, we have produced complementarily combined SPECT and helical-CT 3D images by means of general-purpose visualization software for intracranial localization. In hydrocephalus, however, the key subject to be studied is the profile of cerebral circulation around the ventricles of the brain. This suggests that, for displaying the cerebral ventricles in three dimensions, CT is a difficult technique whereas MRI is more useful. For this reason, we attempted to establish the profile of cerebral circulation around the cerebral ventricles by the production of combined 3D images of SPECT and MRI. In patients who had shunt surgery for hydrocephalus, the difference between pre- and postoperative cerebral circulation profiles was assessed by a voxel distribution curve, 3D SPECT images, and combined 3D SPECT and MRI images. As the shunt system in this study, an Orbis-Sigma valve of the automatic cerebrospinal fluid volume adjustment type was used in place of the variable pressure type Medos valve currently in use, because the latter device requires frequent changes in pressure and its pressure setting may be altered by an MRI procedure. The SPECT apparatus used was a PRISM3000 of the three-detector type, and 123I-IMP was used as the radionuclide in a dose of 222 MBq. MRI data were collected with a MAGNEXa+2 with a magnetic flux density of 0.5 tesla under the following conditions: field echo; TR, 50 msec; TE, 10 msec; flip angle, 30°; 1 NEX; FOV, 23 cm; 1-mm slices; and gapless. 3D images are produced on the workstation TITAN

  18. 3D silicon strip detectors

    International Nuclear Information System (INIS)

    Parzefall, Ulrich; Bates, Richard; Boscardin, Maurizio; Dalla Betta, Gian-Franco; Eckert, Simon; Eklund, Lars; Fleta, Celeste; Jakobs, Karl; Kuehn, Susanne; Lozano, Manuel; Pahn, Gregor; Parkes, Chris; Pellegrini, Giulio; Pennicard, David; Piemonte, Claudio; Ronchin, Sabina; Szumlak, Tomasz; Zoboli, Andrea; Zorzi, Nicola

    2009-01-01

    While the Large Hadron Collider (LHC) at CERN has started operation in autumn 2008, plans for a luminosity upgrade to the Super-LHC (sLHC) have already been developed for several years. This projected luminosity increase by an order of magnitude gives rise to a challenging radiation environment for tracking detectors at the LHC experiments. Significant improvements in radiation hardness are required with respect to the LHC. Using a strawman layout for the new tracker of the ATLAS experiment as an example, silicon strip detectors (SSDs) with short strips of 2-3 cm length are foreseen to cover the region from 28 to 60 cm distance to the beam. These SSD will be exposed to radiation levels up to 10^15 N_eq/cm^2, which makes radiation resistance a major concern for the upgraded ATLAS tracker. Several approaches to increasing the radiation hardness of silicon detectors exist. In this article, it is proposed to combine the radiation hard 3D-design originally conceived for pixel-style applications with the benefits of the established planar technology for strip detectors by using SSDs that have regularly spaced doped columns extending into the silicon bulk under the detector strips. The first 3D SSDs to become available for testing were made in the Single Type Column (STC) design, a technological simplification of the original 3D design. With such 3D SSDs, a small number of prototype sLHC detector modules with LHC-speed front-end electronics as used in the semiconductor tracking systems of present LHC experiments were built. Modules were tested before and after irradiation to fluences of 10^15 N_eq/cm^2. The tests were performed with three systems: a highly focused IR-laser with 5 μm spot size to make position-resolved scans of the charge collection efficiency, an Sr-90 β-source set-up to measure the signal levels for a minimum ionizing particle (MIP), and a beam test with 180 GeV pions at CERN. This article gives a brief overview of the results obtained with 3D-STC-modules.

  19. 3D silicon strip detectors

    Energy Technology Data Exchange (ETDEWEB)

    Parzefall, Ulrich [Physikalisches Institut, Universitaet Freiburg, Hermann-Herder-Str. 3, D-79104 Freiburg (Germany)], E-mail: ulrich.parzefall@physik.uni-freiburg.de; Bates, Richard [University of Glasgow, Department of Physics and Astronomy, Glasgow G12 8QQ (United Kingdom); Boscardin, Maurizio [FBK-irst, Center for Materials and Microsystems, via Sommarive 18, 38050 Povo di Trento (Italy); Dalla Betta, Gian-Franco [INFN and Universita' di Trento, via Sommarive 14, 38050 Povo di Trento (Italy); Eckert, Simon [Physikalisches Institut, Universitaet Freiburg, Hermann-Herder-Str. 3, D-79104 Freiburg (Germany); Eklund, Lars; Fleta, Celeste [University of Glasgow, Department of Physics and Astronomy, Glasgow G12 8QQ (United Kingdom); Jakobs, Karl; Kuehn, Susanne [Physikalisches Institut, Universitaet Freiburg, Hermann-Herder-Str. 3, D-79104 Freiburg (Germany); Lozano, Manuel [Instituto de Microelectronica de Barcelona, IMB-CNM, CSIC, Barcelona (Spain); Pahn, Gregor [Physikalisches Institut, Universitaet Freiburg, Hermann-Herder-Str. 3, D-79104 Freiburg (Germany); Parkes, Chris [University of Glasgow, Department of Physics and Astronomy, Glasgow G12 8QQ (United Kingdom); Pellegrini, Giulio [Instituto de Microelectronica de Barcelona, IMB-CNM, CSIC, Barcelona (Spain); Pennicard, David [University of Glasgow, Department of Physics and Astronomy, Glasgow G12 8QQ (United Kingdom); Piemonte, Claudio; Ronchin, Sabina [FBK-irst, Center for Materials and Microsystems, via Sommarive 18, 38050 Povo di Trento (Italy); Szumlak, Tomasz [University of Glasgow, Department of Physics and Astronomy, Glasgow G12 8QQ (United Kingdom); Zoboli, Andrea [INFN and Universita' di Trento, via Sommarive 14, 38050 Povo di Trento (Italy); Zorzi, Nicola [FBK-irst, Center for Materials and Microsystems, via Sommarive 18, 38050 Povo di Trento (Italy)

    2009-06-01

    While the Large Hadron Collider (LHC) at CERN has started operation in autumn 2008, plans for a luminosity upgrade to the Super-LHC (sLHC) have already been developed for several years. This projected luminosity increase by an order of magnitude gives rise to a challenging radiation environment for tracking detectors at the LHC experiments. Significant improvements in radiation hardness are required with respect to the LHC. Using a strawman layout for the new tracker of the ATLAS experiment as an example, silicon strip detectors (SSDs) with short strips of 2-3 cm length are foreseen to cover the region from 28 to 60 cm distance to the beam. These SSD will be exposed to radiation levels up to 10^15 N_eq/cm^2, which makes radiation resistance a major concern for the upgraded ATLAS tracker. Several approaches to increasing the radiation hardness of silicon detectors exist. In this article, it is proposed to combine the radiation hard 3D-design originally conceived for pixel-style applications with the benefits of the established planar technology for strip detectors by using SSDs that have regularly spaced doped columns extending into the silicon bulk under the detector strips. The first 3D SSDs to become available for testing were made in the Single Type Column (STC) design, a technological simplification of the original 3D design. With such 3D SSDs, a small number of prototype sLHC detector modules with LHC-speed front-end electronics as used in the semiconductor tracking systems of present LHC experiments were built. Modules were tested before and after irradiation to fluences of 10^15 N_eq/cm^2. The tests were performed with three systems: a highly focused IR-laser with 5 μm spot size to make position-resolved scans of the charge collection efficiency, an Sr-90 β-source set-up to measure the signal levels for a minimum ionizing particle (MIP), and a beam test with 180 GeV pions at CERN. This article gives a brief overview of

  20. The Boom in 3D-Printed Sensor Technology

    Science.gov (United States)

    Xu, Yuanyuan; Wu, Xiaoyue; Guo, Xiao; Kong, Bin; Zhang, Min; Qian, Xiang; Mi, Shengli; Sun, Wei

    2017-01-01

    Future sensing applications will include high-performance features, such as toxin detection, real-time monitoring of physiological events, advanced diagnostics, and connected feedback. However, such multi-functional sensors require advancements in sensitivity, specificity, and throughput, with the simultaneous delivery of multiple detections in a short time. Recent advances in 3D printing and electronics have brought us closer to sensors with multiplex advantages, and additive manufacturing approaches offer a new scope for sensor fabrication. To this end, we review the recent advances in 3D-printed cutting-edge sensors. These achievements demonstrate the successful application of 3D-printing technology in sensor fabrication, and the selected studies deeply explore the potential for creating sensors with higher performance. Further development of multi-process 3D printing is expected to expand future sensor utility and availability. PMID:28534832

  1. Magmatic Systems in 3-D

    Science.gov (United States)

    Kent, G. M.; Harding, A. J.; Babcock, J. M.; Orcutt, J. A.; Bazin, S.; Singh, S.; Detrick, R. S.; Canales, J. P.; Carbotte, S. M.; Diebold, J.

    2002-12-01

    Multichannel seismic (MCS) images of crustal magma chambers are ideal targets for advanced visualization techniques. In the mid-ocean ridge environment, reflections originating at the melt-lens are well separated from other reflection boundaries, such as the seafloor, layer 2A and Moho, which enables the effective use of transparency filters. 3-D visualization of seismic reflectivity falls into two broad categories: volume and surface rendering. Volumetric-based visualization is an extremely powerful approach for the rapid exploration of very dense 3-D datasets. These 3-D datasets are divided into volume elements or voxels, which are individually color coded depending on the assigned datum value; the user can define an opacity filter to reject plotting certain voxels. This transparency allows the user to peer into the data volume, enabling an easy identification of patterns or relationships that might have geologic merit. Multiple image volumes can be co-registered to look at correlations between two different data types (e.g., amplitude variation with offset studies), in a manner analogous to draping attributes onto a surface. In contrast, surface visualization of seismic reflectivity usually involves producing "fence" diagrams of 2-D seismic profiles that are complemented with seafloor topography, along with point class data, draped lines and vectors (e.g. fault scarps, earthquake locations and plate-motions). The overlying seafloor can be made partially transparent or see-through, enabling 3-D correlations between seafloor structure and seismic reflectivity. Exploration of 3-D datasets requires additional thought when constructing and manipulating these complex objects. As numbers of visual objects grow in a particular scene, there is a tendency to mask overlapping objects; this clutter can be managed through the effective use of total or partial transparency (i.e., alpha-channel). In this way, the co-variation between different datasets can be investigated

  2. Microscopic 3D measurement of dynamic scene using optimized pulse-width-modulation binary fringe

    Science.gov (United States)

    Hu, Yan; Chen, Qian; Feng, Shijie; Tao, Tianyang; Li, Hui; Zuo, Chao

    2017-10-01

    Microscopic 3-D shape measurement can supply accurate metrology of delicate and complex MEMS components of final devices to ensure their proper performance. Fringe projection profilometry (FPP) has the advantages of being non-contact and highly accurate, making it widely used in 3-D measurement. Recently, tremendous advances in electronics have made 3-D measurements more accurate and faster. However, research on real-time microscopic 3-D measurement is still rarely reported. In this work, we effectively combine optimized binary structured patterns with a number-theoretical phase unwrapping algorithm to realize real-time 3-D shape measurement. A slight defocusing of our proposed binary patterns can considerably alleviate the measurement error of phase-shifting FPP, giving the binary patterns performance comparable to ideal sinusoidal patterns. Real-time 3-D measurement at about 120 frames per second (FPS) is achieved, and an experimental result of a vibrating earphone is presented.
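
    For reference, the wrapped phase in phase-shifting fringe projection profilometry of the kind described above is commonly recovered from a few phase-shifted fringe images. The sketch below shows the standard three-step formula in Python; the fringe amplitudes, shifts and test data are illustrative assumptions, not values from the paper, and the number-theoretical unwrapping step is not reproduced.

```python
import numpy as np

def wrapped_phase_3step(i1, i2, i3):
    """Three-step phase-shifting formula for fringe shifts of -2*pi/3, 0, +2*pi/3.

    With I_k = A + B*cos(phi + delta_k), the wrapped phase is
    phi = atan2(sqrt(3)*(I1 - I3), 2*I2 - I1 - I3), returned in (-pi, pi].
    A separate phase-unwrapping step (e.g. the number-theoretical approach
    mentioned in the abstract) is still required before phase-to-height mapping.
    """
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Self-test on synthetic sinusoidal fringes; slightly defocused binary patterns
# approximate such sinusoids in the method described above.
phi = np.linspace(-np.pi + 0.01, np.pi - 0.01, 500)        # made-up test phase
i1, i2, i3 = (0.5 + 0.4 * np.cos(phi + d) for d in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3))
assert np.allclose(wrapped_phase_3step(i1, i2, i3), phi, atol=1e-9)
```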

  3. Wireless 3D Chocolate Printer

    Directory of Open Access Journals (Sweden)

    FROILAN G. DESTREZA

    2014-02-01

    Full Text Available This study is for the BSHRM Students of Batangas State University (BatStateU ARASOF), for the researchers believe that the Wireless 3D Chocolate Printer would be helpful in their degree program, especially in making creative, artistic, personalized and decorative chocolate designs. The researchers used the Prototyping model as the procedural method for the successful development and implementation of the hardware and software. This method has five phases: quick plan, quick design, prototype construction, delivery and feedback, and communication. The study was evaluated by the BSHRM Students, and the respondents' assessment of the software and hardware application is excellent in terms of Accuracy, Effectiveness, Efficiency, Maintainability, Reliability and User-friendliness. Also, the overall level of acceptability of the design project as evaluated by the respondents is excellent. With regard to the observations about the best raw material to use in 3D printing: chocolate is good to use, as the printed material is slightly distorted, durable and very easy to prepare; icing is also good to use, as the printed material is not distorted and is very durable, but it consumes time to prepare; flour is not good, as the printed material is distorted and not durable, although it is easy to prepare. The computed economic viability of the 3D printer, with reference to ROI, is 37.14%. The recommendations of the researchers for the design project are as follows: adding a cooling system so that the raw material will be more durable, developing a more simplified version, and improving the extrusion process so that the user does not need to stop the printing process just to replace an empty syringe with a new one.

  4. Interactive 3D Mars Visualization

    Science.gov (United States)

    Powell, Mark W.

    2012-01-01

    The Interactive 3D Mars Visualization system provides high-performance, immersive visualization of satellite and surface vehicle imagery of Mars. The software can be used in mission operations to provide the most accurate position information for the Mars rovers to date. When integrated into the mission data pipeline, this system allows mission planners to view the location of the rover on Mars to 0.01-meter accuracy with respect to satellite imagery, with dynamic updates to incorporate the latest position information. Given this information so early in the planning process, rover drivers are able to plan more accurate drive activities for the rover than ever before, increasing the execution of science activities significantly. Scientifically, this 3D mapping information puts all of the science analyses to date into geologic context on a daily basis instead of weeks or months, as was the norm prior to this contribution. This allows the science planners to judge the efficacy of their previously executed science observations much more efficiently, and achieve greater science return as a result. The Interactive 3D Mars surface view is a Mars terrain browsing software interface that encompasses the entire region of exploration for a Mars surface exploration mission. The view is interactive, allowing the user to pan in any direction by clicking and dragging, or to zoom in or out by scrolling the mouse or touchpad. The toolset currently includes a tool for selecting a point of interest and a ruler tool for displaying the distance between, and the positions of, two points of interest. The mapping information can be harvested and shared through ubiquitous online mapping tools like Google Mars, NASA WorldWind, and Worldwide Telescope.

  5. Integrated optical 3D digital imaging based on DSP scheme

    Science.gov (United States)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme of integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme is built on a parallel hardware structure with the aid of a DSP and a field programmable gate array (FPGA) to realize 3-D imaging, and adopts phase measurement profilometry. To realize pipelined processing of fringe projection, image acquisition and fringe pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system); since the RTOS provides a preemptive kernel and a powerful configuration tool, we are able to achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme reaches a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and can implement fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.

  6. Virtual 3-D Facial Reconstruction

    Directory of Open Access Journals (Sweden)

    Martin Paul Evison

    2000-06-01

    Full Text Available Facial reconstructions in archaeology allow empathy with people who lived in the past and enjoy considerable popularity with the public. It is a common misconception that facial reconstruction will produce an exact likeness; a resemblance is the best that can be hoped for. Research at Sheffield University is aimed at the development of a computer system for facial reconstruction that will be accurate, rapid, repeatable, accessible and flexible. This research is described and prototypical 3-D facial reconstructions are presented. Interpolation models simulating obesity, ageing and ethnic affiliation are also described. Some strengths and weaknesses in the models, and their potential for application in archaeology are discussed.

  7. Realtime 3D stress measurement in curing epoxy packaging

    DEFF Research Database (Denmark)

    Richter, Jacob; Hyldgård, A.; Birkelund, Karen

    2007-01-01

    This paper presents a novel method to characterize stress in microsystem packaging. A circular p-type piezoresistor is implemented on a (001) silicon chip. We use the circular stress sensor to determine the packaging induced stress in a polystyrene tube filled with epoxy. The epoxy curing process...

  8. Real-Time 3D Sonar Modeling And Visualization

    Science.gov (United States)

    1998-06-01

    [No coherent abstract is available for this record; the remaining text consists of figure captions ("looking back towards Manta sonar beam", "Manta plus sonar from 1000 m off track"), personnel listings (NUWC sponsor Erik Chaum; principal investigator Don Brutzman) and a reference fragment for "The Phoenix Autonomous Underwater Vehicle", chapter 13 of AI-Based Mobile Robots (eds. David Kortenkamp, Pete Bonasso and Robin Murphy).]

  9. Irrlicht 1.7 Realtime 3D Engine Beginner's Guide

    CERN Document Server

    Stein, Johannes

    2011-01-01

    A beginner's guide with plenty of screenshots and explained code. If you have C++ skills and are interested in learning Irrlicht, this book is for you. Absolutely no knowledge of Irrlicht is necessary for you to follow this book!

  10. Analysis of 3-D images

    Science.gov (United States)

    Wani, M. Arif; Batchelor, Bruce G.

    1992-03-01

    Deriving generalized representations of 3-D objects for analysis and recognition is a very difficult task. Three types of representation, chosen according to the type of object, are used in this paper. Objects which have well-defined geometrical shapes are segmented using a fast edge-region-based segmentation technique. If the object parts are symmetrical about their central axes, the segmented image is represented by the plan and elevation of each part of the object. The plan-and-elevation concept enables such objects to be represented and analyzed quickly and efficiently. The second type of representation is used for objects having parts which are not symmetrical about their central axis. The segmented surface patches of such objects are represented by the 3-D boundary and the surface features of each segmented surface. Finally, the third type of representation is used for objects which do not have well-defined geometrical shapes (for example, a loaf of bread). These objects are represented and analyzed through features derived using a multiscale contour-based technique. An anisotropic Gaussian smoothing technique is introduced to segment the contours at various scales of smoothing. A new merging technique is used which yields the current best estimate of the break points at each scale. This new technique eliminates the loss of localization accuracy at coarser scales without resorting to a scale-space tracking approach.

  11. 3D Printed Bionic Ears

    Science.gov (United States)

    Mannoor, Manu S.; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A.; Soboyejo, Winston O.; Verma, Naveen; Gracias, David H.; McAlpine, Michael C.

    2013-01-01

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the precise anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing. PMID:23635097

  12. 3D DNA Origami Crystals.

    Science.gov (United States)

    Zhang, Tao; Hartl, Caroline; Frank, Kilian; Heuer-Jungemann, Amelie; Fischer, Stefan; Nickels, Philipp C; Nickel, Bert; Liedl, Tim

    2018-05-18

    3D crystals assembled entirely from DNA provide a route to design materials on a molecular level and to arrange guest particles in predefined lattices. This requires design schemes that provide high rigidity and sufficiently large open guest space. A DNA-origami-based "tensegrity triangle" structure that assembles into a 3D rhombohedral crystalline lattice with an open structure in which 90% of the volume is empty space is presented here. Site-specific placement of gold nanoparticles within the lattice demonstrates that these crystals are spacious enough to efficiently host 20 nm particles in a cavity size of 1.83 × 10^5 nm^3, which would also suffice to accommodate ribosome-sized macromolecules. The accurate assembly of the DNA origami lattice itself, as well as the precise incorporation of gold particles, is validated by electron microscopy and small-angle X-ray scattering experiments. The results show that it is possible to create DNA building blocks that assemble into lattices with customized geometry. Site-specific hosting of nano objects in the optically transparent DNA lattice sets the stage for metamaterial and structural biology applications. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. 3D printed bionic ears.

    Science.gov (United States)

    Mannoor, Manu S; Jiang, Ziwen; James, Teena; Kong, Yong Lin; Malatesta, Karen A; Soboyejo, Winston O; Verma, Naveen; Gracias, David H; McAlpine, Michael C

    2013-06-12

    The ability to three-dimensionally interweave biological tissue with functional electronics could enable the creation of bionic organs possessing enhanced functionalities over their human counterparts. Conventional electronic devices are inherently two-dimensional, preventing seamless multidimensional integration with synthetic biology, as the processes and materials are very different. Here, we present a novel strategy for overcoming these difficulties via additive manufacturing of biological cells with structural and nanoparticle derived electronic elements. As a proof of concept, we generated a bionic ear via 3D printing of a cell-seeded hydrogel matrix in the anatomic geometry of a human ear, along with an intertwined conducting polymer consisting of infused silver nanoparticles. This allowed for in vitro culturing of cartilage tissue around an inductive coil antenna in the ear, which subsequently enables readout of inductively-coupled signals from cochlea-shaped electrodes. The printed ear exhibits enhanced auditory sensing for radio frequency reception, and complementary left and right ears can listen to stereo audio music. Overall, our approach suggests a means to intricately merge biologic and nanoelectronic functionalities via 3D printing.

  14. RELAP5-3D User Problems

    International Nuclear Information System (INIS)

    Riemke, Richard Allan

    2001-01-01

    The Reactor Excursion and Leak Analysis Program with 3D capability (RELAP5-3D) is a reactor system analysis code that has been developed at the Idaho National Engineering and Environmental Laboratory (INEEL) for the U.S. Department of Energy (DOE). The 3D capability in RELAP5-3D includes 3D hydrodynamics and 3D neutron kinetics. Assessment, verification, and validation of the 3D capability in RELAP5-3D are discussed in the literature. Additional assessment, verification, and validation of the 3D capability of RELAP5-3D will be presented in other papers in this users' seminar. As with any software, user problems occur. User problems usually fall into the categories of input processing failure, code execution failure, restart/renodalization failure, unphysical result, and installation. This presentation will discuss some of the more generic user problems that have been reported on RELAP5-3D, as well as their resolution.

  15. LOTT RANCH 3D PROJECT

    International Nuclear Information System (INIS)

    Larry Lawrence; Bruce Miller

    2004-01-01

    The Lott Ranch 3D seismic prospect located in Garza County, Texas is a project initiated in September of 1991 by the J.M. Huber Corp., a petroleum exploration and production company. By today's standards the 126-square-mile project does not seem monumental; however, at the time it was conceived it was the most intensive land 3D project ever attempted. Acquisition began in September of 1991 utilizing GEO-SEISMIC, INC., a seismic data contractor. The field parameters were selected by J.M. Huber and were of a radical design. The recording instruments used were GeoCor IV amplifiers designed by Geosystems Inc., which record the data in signed-bit format. It would not have been practical, if not impossible, to have processed the entire raw volume with the tools available at that time. The end result was a dataset that was thought to have little utility due to difficulties in processing the field data. In 1997, Yates Energy Corp., located in Roswell, New Mexico, formed a partnership to further develop the project. Through discussions and meetings with Pinnacle Seismic, it was determined that the original Lott Ranch 3D volume could be vastly improved by reprocessing. Pinnacle Seismic had shown the viability of improving field-summed signed-bit data on smaller 2D and 3D projects. Yates contracted Pinnacle Seismic Ltd. to perform the reprocessing. This project was initiated with high resolution as a priority. Much of the potential resolution was lost through the initial summing of the field data. Modern computers that are now being utilized have tremendous speed and storage capacities that were cost prohibitive when this data was initially processed. Software updates and capabilities offer a variety of quality-control and statics-resolution tools, which are pertinent to the Lott Ranch project. The reprocessing effort was very successful. The resulting processed dataset was then interpreted using modern PC-based interpretation and mapping software. Production data, log data

  16. 3D biometrics systems and applications

    CERN Document Server

    Zhang, David

    2013-01-01

    Includes discussions on popular 3D imaging technologies, combines them with biometric applications, and then presents real 3D biometric systems. Introduces many efficient 3D feature extraction, matching, and fusion algorithms. Techniques presented have been supported by experimental results using various 3D biometric classifications.

  17. Telerobotics and 3-d TV

    International Nuclear Information System (INIS)

    Able, E.

    1990-01-01

    This paper reports on the development of telerobotic techniques that can be used in the nuclear industry. The approach has been to apply available equipment, modify available equipment, or design and build anew. The authors have successfully built an input controller which can be used with standard industrial robots, converting them into telerobots. A clean room industrial robot has been re-engineered into an advanced telerobot engineered for the nuclear industry, using a knowledge of radiation tolerance design principles and collaboration with the manufacturer. A powerful hydraulic manipulator has been built to respond to a need for more heavy duty devices for in-cell handling. A variety of easy to use 3-D TV systems has been developed

  18. Conducting Polymer 3D Microelectrodes

    Directory of Open Access Journals (Sweden)

    Jenny Emnéus

    2010-12-01

    Full Text Available Conducting polymer 3D microelectrodes have been fabricated for possible future neurological applications. A combination of micro-fabrication techniques and chemical polymerization methods has been used to create pillar electrodes in polyaniline and polypyrrole. The thin polymer films obtained showed uniformity and good adhesion to both horizontal and vertical surfaces. Electrodes in combination with metal/conducting polymer materials have been characterized by cyclic voltammetry, and the presence of the conducting polymer film has been shown to increase the electrochemical activity when compared with electrodes coated with only metal. An electrochemical characterization of gold/polypyrrole electrodes showed exceptional electrochemical behavior and activity. PC12 cells were finally cultured on the investigated materials as a preliminary biocompatibility assessment. These results show that the described electrodes are possibly suitable for future in-vitro neurological measurements.

  19. Embedding complex objects with 3d printing

    KAUST Repository

    Hussain, Muhammad Mustafa

    2017-10-12

    A CMOS technology-compatible fabrication process for flexible CMOS electronics embedded during additive manufacturing (i.e. 3D printing). A method for such a process may include printing a first portion of a 3D structure; pausing the step of printing the 3D structure to embed the flexible silicon substrate; placing the flexible silicon substrate in a cavity of the first portion of the 3D structure to embed the flexible silicon substrate in the 3D structure; and resuming the step of printing the 3D structure to form the second portion of the 3D structure.

  20. Supernova Remnant in 3-D

    Science.gov (United States)

    2009-01-01

    of the wavelength shift is related to the speed of motion, one can determine how fast the debris is moving in either direction. Because Cas A is the result of an explosion, the stellar debris is expanding radially outwards from the explosion center. Using simple geometry, the scientists were able to construct a 3-D model using all of this information. A program called 3-D Slicer, modified for astronomical use by the Astronomical Medicine Project at Harvard University in Cambridge, Mass., was used to display and manipulate the 3-D model. Commercial software was then used to create the 3-D fly-through. The blue filaments defining the blast wave were not mapped using the Doppler effect because they emit a different kind of light, synchrotron radiation, which is not emitted at discrete wavelengths but rather in a broad continuum. The blue filaments are only a representation of the actual filaments observed at the blast wave. This visualization shows that there are two main components to this supernova remnant: a spherical component in the outer parts of the remnant and a flattened (disk-like) component in the inner region. The spherical component consists of the outer layer of the star that exploded, probably made of helium and carbon. These layers drove a spherical blast wave into the diffuse gas surrounding the star. The flattened component that astronomers were unable to map into 3-D prior to these Spitzer observations consists of the inner layers of the star. It is made from various heavier elements, not all shown in the visualization, such as oxygen, neon, silicon, sulphur, argon and iron. High-velocity plumes, or jets, of this material are shooting out from the explosion in the plane of the disk-like component mentioned above. Plumes of silicon appear in the northeast and southwest, while those of iron are seen in the southeast and north. These jets were already known and Doppler velocity measurements have been made for these structures, but their orientation and

  1. Natural fibre composites for 3D Printing

    OpenAIRE

    Pandey, Kapil

    2015-01-01

    3D printing has been a common option for prototyping. Not all materials are suitable for 3D printing. Various studies have been done, and many are still ongoing, regarding the suitability of materials for 3D printing. This thesis work explores the possibility of 3D printing certain polymer composite materials. The main objective of this thesis work was to study the possibility of 3D printing a polymer composite material composed of natural fibre composite and various different ...

  2. Overview of fast algorithm in 3D dynamic holographic display

    Science.gov (United States)

    Liu, Juan; Jia, Jia; Pan, Yijie; Wang, Yongtian

    2013-08-01

    3D dynamic holographic display is one of the most attractive techniques for achieving real 3D vision with full depth cues without any extra devices. However, a huge amount of 3D information must be processed and the hologram computed in real time for 3D dynamic holographic display, which is a challenge even for the most advanced computers. Many fast algorithms have been proposed for speeding up the calculation and reducing memory usage, such as the look-up table (LUT), compressed look-up table (C-LUT), split look-up table (S-LUT), and novel look-up table (N-LUT) approaches based on the point-based method, and the full analytical and one-step methods based on the polygon-based method. In this presentation, we overview various fast algorithms based on the point-based and polygon-based methods, and focus on the fast algorithms with low memory usage: the C-LUT, and the one-step polygon-based method derived from a 2D Fourier analysis of the 3D affine transformation. Numerical simulations and optical experiments are presented, and several other algorithms are compared. The results show that the C-LUT algorithm and the one-step polygon-based method are efficient methods for saving calculation time. It is believed that these methods could be used in real-time 3D holographic display in the future.
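
    As a concrete, greatly simplified illustration of the point-based look-up-table idea mentioned above, the sketch below precomputes the Fresnel fringe of one on-axis point per depth plane and then shifts and accumulates it for each object point. All parameters (grid size, pixel pitch, wavelength, object points) are illustrative assumptions; the actual C-LUT and one-step polygon-based algorithms discussed in the abstract are considerably more involved.

```python
import numpy as np

# Illustrative point-based LUT idea for computer-generated holograms: the
# Fresnel fringe of one on-axis point is precomputed per depth plane (the LUT),
# then laterally shifted and accumulated for every object point.
N = 512                       # hologram resolution (N x N pixels)
pitch = 8e-6                  # pixel pitch [m]
wavelength = 532e-9           # [m]
k = 2 * np.pi / wavelength

u = (np.arange(N) - N // 2) * pitch
X, Y = np.meshgrid(u, u)

def point_fringe(z):
    """Fresnel fringe of an on-axis unit point at depth z (constant factors dropped)."""
    return np.exp(1j * k * (X**2 + Y**2) / (2.0 * z))

# Look-up table: one precomputed fringe per quantized depth plane.
depths = np.linspace(0.18, 0.22, 5)
lut = [point_fringe(z) for z in depths]

# Object points given as (x-offset [px], y-offset [px], depth index, amplitude).
points = [(40, -25, 0, 1.0), (-60, 10, 2, 0.7), (15, 80, 4, 0.5)]

hologram = np.zeros((N, N), dtype=complex)
for dx, dy, zi, amp in points:
    # A lateral shift of the stored fringe replaces a per-point evaluation.
    # (np.roll wraps at the borders; a real implementation would pad instead.)
    hologram += amp * np.roll(lut[zi], shift=(dy, dx), axis=(0, 1))

fringe_pattern = np.real(hologram)    # real-valued pattern for display/encoding
```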

  3. 3-D discrete analytical ridgelet transform.

    Science.gov (United States)

    Helbert, David; Carré, Philippe; Andres, Eric

    2006-12-01

    In this paper, we propose an implementation of the 3-D Ridgelet transform: the 3-D discrete analytical Ridgelet transform (3-D DART). This transform uses the Fourier strategy for the computation of the associated 3-D discrete Radon transform. The innovative step is the definition of a discrete 3-D transform within discrete analytical geometry theory, through the construction of 3-D discrete analytical lines in the Fourier domain. We propose two types of 3-D discrete lines: 3-D discrete radial lines going through the origin, defined from their orthogonal projections, and 3-D planes covered with 2-D discrete line segments. These discrete analytical lines have a parameter called arithmetical thickness, allowing us to define a 3-D DART adapted to a specific application. Indeed, the 3-D DART representation is not orthogonal; it is associated with a flexible redundancy factor. The 3-D DART has a very simple forward/inverse algorithm that provides an exact reconstruction without any iterative method. In order to illustrate the potential of this new discrete transform, we apply the 3-D DART and its extension, the Local-DART (with smooth windowing), to the denoising of 3-D images and color video. These experimental results show that simple thresholding of the 3-D DART coefficients is efficient.

  4. Automatic respiration tracking for radiotherapy using optical 3D camera

    Science.gov (United States)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Rapid optical three-dimensional (O3D) imaging systems provide accurate digitized 3D surface data in real time, with no patient contact and no radiation. The accurate 3D surface images offer crucial information in image-guided radiation therapy (IGRT) treatments for accurate patient repositioning and respiration management. However, applications of O3D imaging techniques to image-guided radiotherapy have been clinically challenged by body deformation, pathological and anatomical variations among individual patients, the extremely high dimensionality of the 3D surface data, and irregular respiration motion. In existing clinical radiation therapy (RT) procedures, target displacements are caused by (1) inter-fractional anatomy changes due to weight, swelling, and food/water intake; (2) intra-fractional variations from anatomy changes within any treatment session due to voluntary/involuntary physiologic processes (e.g. respiration, muscle relaxation); (3) patient setup misalignment in daily repositioning due to user errors; and (4) changes of markers or positioning devices, etc. Presently, a viable solution is lacking for in-vivo tracking of target motion and anatomy changes during the beam-on time without exposing the patient to additional ionizing radiation or a high magnetic field. Current O3D-guided radiotherapy systems rely on selected points or areas in the 3D surface to track surface motion. The configuration of the marks or areas may change with time, which makes it inconsistent in quantifying and interpreting the respiration patterns. To meet the challenge of performing real-time respiration tracking using O3D imaging technology in IGRT, we propose a new approach to automatic respiration motion analysis based on a linear dimensionality-reduction technique, PCA (principal component analysis). The optical 3D image sequence is decomposed with principal component analysis into a limited number of independent (orthogonal) motion patterns (a low-dimensional eigenspace spanned by eigenvectors). New
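
    The sketch below illustrates, under simplified assumptions, the PCA step outlined above: a sequence of 3D surface frames is flattened, mean-centered and decomposed so that the leading component serves as a respiration signal. The synthetic frame data, sizes and breathing frequency are placeholders, not values from the paper.

```python
import numpy as np

# Illustrative PCA decomposition of an optical 3D surface sequence into a few
# dominant motion patterns, with the leading component used as a respiration signal.
rng = np.random.default_rng(0)
n_frames, n_points = 300, 2000                 # 300 frames of a 2000-vertex surface
t = np.linspace(0.0, 30.0, n_frames)           # assumed 30 s acquisition

base_shape = rng.normal(size=(n_points, 3))            # static surface
breathing_mode = 0.01 * rng.normal(size=(n_points, 3)) # per-vertex motion direction
frames = base_shape[None] + np.sin(2 * np.pi * 0.25 * t)[:, None, None] * breathing_mode
frames += rng.normal(scale=1e-3, size=frames.shape)    # measurement noise

# PCA on the flattened, mean-centered frames (each row = one surface observation).
X = frames.reshape(n_frames, -1)
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

respiration_signal = U[:, 0] * S[0]                  # time course of the main pattern
dominant_pattern = Vt[0].reshape(n_points, 3)        # corresponding motion pattern
print("variance captured by first component:", S[0]**2 / np.sum(S**2))
```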

  5. ORMGEN3D, 3-D Crack Geometry FEM Mesh Generator

    International Nuclear Information System (INIS)

    Bass, B.R.; Bryson, J.W.

    1994-01-01

    1 - Description of program or function: ORMGEN3D is a finite element mesh generator for computational fracture mechanics analysis. The program automatically generates a three-dimensional finite element model for six different crack geometries. These geometries include flat plates with straight or curved surface cracks and cylinders with part-through cracks on the outer or inner surface. Mathematical or user-defined crack shapes may be considered. The curved cracks may be semicircular, semi-elliptical, or user-defined. A cladding option is available that allows for either an embedded or penetrating crack in the clad material. 2 - Method of solution: In general, one eighth or one-quarter of the structure is modelled depending on the configuration or option selected. The program generates a core of special wedge or collapsed prism elements at the crack front to introduce the appropriate stress singularity at the crack tip. The remainder of the structure is modelled with conventional 20-node iso-parametric brick elements. Element group I of the finite element model consists of an inner core of special crack tip elements surrounding the crack front enclosed by a single layer of conventional brick elements. Eight element divisions are used in a plane orthogonal to the crack front, while the number of element divisions along the arc length of the crack front is user-specified. The remaining conventional brick elements of the model constitute element group II. 3 - Restrictions on the complexity of the problem: Maxima of 5,500 nodes, 4 layers of clad elements

  6. 3D reconstruction of cystoscopy videos for comprehensive bladder records

    OpenAIRE

    Lurie, Kristen L.; Angst, Roland; Zlatev, Dimitar V.; Liao, Joseph C.; Ellerbee Bowden, Audrey K.

    2017-01-01

    White light endoscopy is widely used for diagnostic imaging of the interior of organs and body cavities, but the inability to correlate individual 2D images with 3D organ morphology limits its utility for quantitative or longitudinal studies of disease physiology or cancer surveillance. As a result, most endoscopy videos, which carry enormous data potential, are used only for real-time guidance and are discarded after collection. We present a computational method to reconstruct and visualize ...

  7. Fundamentals of 3-D Neutron Kinetics and Current Status

    International Nuclear Information System (INIS)

    Aragones, J.M.

    2008-01-01

    This lecture includes the following topics: 1) A summary of the cell and lattice calculations used to generate the neutron reaction data for neutron kinetics, including the spectral and burnup calculations of LWR cells and fuel assembly lattices, and the main nodal kinetics parameters: mean neutron generation time and delayed neutron fraction; 2) the features of the advanced nodal methods for 3-D LWR core physics, including the treatment of partially inserted control rods, fuel assembly grids, fuel burnup and xenon and samarium transients, and excore detector responses, that are essential for core surveillance, axial offset control and operating transient analysis; 3) the advanced nodal methods for 3-D LWR core neutron kinetics (best estimate safety analysis, real-time simulation); and 4) example applications to 3-D neutron kinetics problems in transient analysis of PWR cores, including model, benchmark and operational transients without, or with simple, thermal-hydraulics feedback.

  8. Crowdsourcing Based 3d Modeling

    Science.gov (United States)

    Somogyi, A.; Barsi, A.; Molnar, B.; Lovas, T.

    2016-06-01

    Web-based photo albums that support organizing and viewing the users' images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  9. CROWDSOURCING BASED 3D MODELING

    Directory of Open Access Journals (Sweden)

    A. Somogyi

    2016-06-01

    Full Text Available Web-based photo albums that support organizing and viewing the users’ images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  10. Lossless compression for 3D PET

    International Nuclear Information System (INIS)

    Macq, B.; Sibomana, M.; Coppens, A.; Bol, A.; Michel, C.

    1994-01-01

    A new adaptive scheme is proposed for the lossless compression of positron emission tomography (PET) sinogram data. The algorithm uses an adaptive differential pulse code modulator (ADPCM) followed by a universal variable length coder (UVLC). Contrasting with Lempel-Ziv (LZ), which operates on a whole sinogram, UVLC operates very efficiently on short data blocks. This is a major advantage for real-time implementation. The algorithm is adaptive and codes data after some on-line estimations of the statistics inside each block. Its efficiency is tested when coding dynamic and static scans from two PET scanners, and it asymptotically reaches the entropy limit for long frames. For very short 3D frames, the new algorithm is twice as efficient as LZ. Since an ASIC implementing a similar UVLC scheme is available today, a similar one should be able to sustain PET data lossless compression and decompression at a rate of 27 MBytes/sec. This algorithm is consequently a good candidate for the next generation of lossless compression engines
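
    A minimal sketch of the two-stage block-coding idea (differential prediction followed by a variable-length code) is given below. The paper's adaptive predictor and universal variable-length coder are not specified here, so a first-order predictor and Elias-gamma codes are used as stand-ins, and the sample block is made up.

```python
import numpy as np

# Two-stage block coding sketch: differential prediction followed by a
# variable-length code (stand-ins for the ADPCM and UVLC described above).

def dpcm_residuals(block):
    """First-order DPCM: predict each sample by its predecessor (first from 0)."""
    block = np.asarray(block, dtype=np.int64)
    return np.diff(block, prepend=0)

def elias_gamma(n):
    """Elias-gamma codeword of a positive integer n, as a string of bits."""
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def encode_block(block):
    """Zig-zag map the signed residuals to positive integers and concatenate codes."""
    res = dpcm_residuals(block)
    zigzag = np.where(res >= 0, 2 * res + 1, -2 * res)    # 0->1, -1->2, 1->3, ...
    return "".join(elias_gamma(int(v)) for v in zigzag)

sinogram_block = [12, 13, 13, 15, 14, 14, 16, 20]          # made-up short block
bits = encode_block(sinogram_block)
print(len(bits), "coded bits vs", 8 * len(sinogram_block), "bits for raw 8-bit samples")
```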

  11. Lossless compression for 3D PET

    International Nuclear Information System (INIS)

    Macq, B.; Sibomana, M.; Coppens, A.; Bol, A.; Michel, C.; Baker, K.; Jones, B.

    1994-01-01

    A new adaptive scheme is proposed for the lossless compression of positron emission tomography (PET) sinogram data. The algorithm uses an adaptive differential pulse code modulator (ADPCM) followed by a universal variable length coder (UVLC). Contrasting with Lempel-Ziv (LZ), which operates on a whole sinogram, UVLC operates very efficiently on short data blocks. This is a major advantage for real-time implementation. The algorithm is adaptive and codes data after some on-line estimations of the statistics inside each block. Its efficiency is tested when coding dynamic and static scans from two PET scanners, and it asymptotically reaches the entropy limit for long frames. For very short 3D frames, the new algorithm is twice as efficient as LZ. Since an application specific integrated circuit (ASIC) implementing a similar UVLC scheme is available today, a similar one should be able to sustain PET data lossless compression and decompression at a rate of 27 MBytes/sec. This algorithm is consequently a good candidate for the next generation of lossless compression engines

  12. Enhancing Nuclear Training with 3D Visualization

    International Nuclear Information System (INIS)

    Gagnon, V.; Gagnon, B.

    2016-01-01

    Full text: While the nuclear power industry is trying to reinforce its safety and regain public support post-Fukushima, it is also faced with a very real challenge that affects its day-to-day activities: a rapidly aging workforce. Statistics show that close to 40% of the current nuclear power industry workforce will retire within the next five years. For newcomer countries, the challenge is even greater, having to develop a completely new workforce. The workforce replacement effort introduces nuclear newcomers of a new generation with different backgrounds and affinities. Major lifestyle differences between the two generations of workers result, amongst other things, in different learning habits and needs for this new breed of learners. Interactivity, high visual content and quick access to information are now necessary to achieve a high level of retention. To enhance existing training programmes or to support the establishment of new training programmes for newcomer countries, L-3 MAPPS has devised learning tools to enhance these training programmes focused on the “Practice-by-Doing” principle. L-3 MAPPS has coupled 3D computer visualization with high-fidelity simulation to bring real-time, simulation-driven animated components and systems allowing immersive and participatory, individual or classroom learning. (author

  13. 3-D conformal radiation therapy - Part II: Computer-controlled 3-D treatment delivery

    International Nuclear Information System (INIS)

    Benedick, A.

    1997-01-01

    -controlled scanned beam treatments will also be discussed. CCRT-related approaches to treatment plan generation and transfer, accelerator control systems, treatment delivery, verification, documentation and charting will also be discussed, including the importance of real-time portal imaging for conformal therapy. The potential benefits of 3-D computer-controlled conformal treatment delivery will be illustrated with results from on-going clinical dose escalation and normal tissue complication studies. Conclusion: A large amount of interest in computer-controlled conformal treatment delivery techniques has developed in recent years. This presentation will attempt to summarize the current status of clinical and research work in 3-D computer-controlled conformal therapy treatment techniques. Particular attention is paid to issues related to implementation and clinical use of this developing treatment modality

  14. Role of modern 3D echocardiography in valvular heart disease

    Science.gov (United States)

    2014-01-01

    Three-dimensional (3D) echocardiography has been conceived as one of the most promising methods for the diagnosis of valvular heart disease, and recently has become an integral clinical tool thanks to the development of high-quality real-time transesophageal echocardiography (TEE). In particular, for mitral valve diseases, this new approach has proven to be a uniquely powerful and convincing method for understanding the complicated anatomy of the mitral valve and its dynamism. The method has been useful for surgical management, including robotic mitral valve repair. Moreover, this method has become indispensable for nonsurgical mitral procedures such as edge-to-edge mitral repair and transcatheter closure of paravalvular leaks. In addition, color Doppler 3D echo has been valuable for identifying the location of the regurgitant orifice and the severity of the mitral regurgitation. For aortic and tricuspid valve diseases, this method may not be quite as valuable as for the mitral valve. However, the necessity of 3D echo is recognized in certain situations even for these valves, such as evaluating the aortic annulus for transcatheter aortic valve implantation. It is now clear that this method, especially with the continued development of real-time 3D TEE technology, will enhance the diagnosis and management of patients with these valvular heart diseases. PMID:25378966

  15. Modreg: A Modular Framework for RGB-D Image Acquisition and 3D Object Model Registration

    Directory of Open Access Journals (Sweden)

    Kornuta Tomasz

    2017-09-01

    Full Text Available RGB-D sensors have become a standard in robotic applications requiring object recognition, such as object grasping and manipulation. A typical object recognition system relies on matching features extracted from RGB-D images retrieved from the robot sensors with the features of object models. In this paper we present ModReg: a system for registration of 3D models of objects. The system consists of modular software associated with a multi-camera setup supplemented by an additional pattern projector, used for the registration of high-resolution RGB-D images. The objects are placed on a fiducial board with two dot patterns enabling extraction of masks of the placed objects and estimation of their initial poses. The acquired dense point clouds constituting subsequent object views undergo pairwise registration and at the end are optimized with a graph-based technique derived from SLAM. The combination of all those elements resulted in a system able to generate consistent 3D models of objects.
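
    The sketch below illustrates the rigid-alignment step that underlies pairwise registration of two point-cloud views, assuming point correspondences are already available (the classical Kabsch/SVD solution). It is a generic illustration, not ModReg's implementation; the matching step and the final graph-based optimization are omitted, and all data are synthetic.

```python
import numpy as np

# Rigid-alignment step underlying pairwise registration of two point-cloud views,
# assuming point correspondences are already known (classical Kabsch / SVD solution).

def kabsch(source, target):
    """Rotation R and translation t minimizing ||(source @ R.T + t) - target||."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)              # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Self-test: recover a known rotation and translation from synthetic views.
rng = np.random.default_rng(1)
view_a = rng.normal(size=(500, 3))
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 0.05])
view_b = view_a @ R_true.T + t_true
R_est, t_est = kabsch(view_a, view_b)
assert np.allclose(R_est, R_true) and np.allclose(t_est, t_true)
```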

  16. Vrste i tehnike 3D modeliranja

    OpenAIRE

    Bernik, Andrija

    2010-01-01

    The process of creating real or imaginary 3D objects is called 3D modelling. The development of computer technology allows the user to choose among various methods and techniques in order to achieve optimal efficiency. The choice lies between classical 3D modelling and 3D scanning using specialized software and hardware solutions. With 3D modelling techniques the user can create a 3D model in several ways: using polygons, using curves, or using a hybrid of the two techniques called subdivision modelling...

  17. Kuvaus 3D-tulostamisesta hammastekniikassa

    OpenAIRE

    Munne, Mauri; Mustonen, Tuomas; Vähäjylkkä, Jaakko

    2013-01-01

    3D printing is developing rapidly and becoming more common all the time. As the accuracy of printers improves, 3D printing is also gaining a foothold in the field of dental technology. The purpose of this thesis is to describe the state of 3D printing in dental technology. 3D printing is still fairly rare in Finland, so the aim of the thesis is to gather together all available information related to 3D printing in dental technology. A further aim is to test a 3D printer in practice, all the way from scanning of the mouth...

  18. NIF Ignition Target 3D Point Design

    Energy Technology Data Exchange (ETDEWEB)

    Jones, O; Marinak, M; Milovich, J; Callahan, D

    2008-11-05

    We have developed an input file for running 3D NIF hohlraums that is optimized such that it can be run in 1-2 days on parallel computers. We have incorporated increasing levels of automation into the 3D input file: (1) Configuration controlled input files; (2) Common file for 2D and 3D, different types of capsules (symcap, etc.); and (3) Can obtain target dimensions, laser pulse, and diagnostics settings automatically from NIF Campaign Management Tool. Using 3D Hydra calculations to investigate different problems: (1) Intrinsic 3D asymmetry; (2) Tolerance to nonideal 3D effects (e.g. laser power balance, pointing errors); and (3) Synthetic diagnostics.

  19. Towards 3C-3D digital holographic fluid velocity vector field measurement—tomographic digital holographic PIV (Tomo-HPIV)

    International Nuclear Information System (INIS)

    Soria, J; Atkinson, C

    2008-01-01

    Most unsteady and/or turbulent flows of geophysical and engineering interest have a highly three-dimensional (3D) complex topology, and their experimental investigation is in pressing need of quantitative velocity measurement methods that are robust and can provide instantaneous 3C-3D velocity field data over a significant volumetric domain of the flow. This paper introduces and demonstrates a new method that uses multiple digital CCD array cameras to record in-line digital holograms of the same volume of seed particles from multiple orientations. This technique uses the same basic equipment as Tomo-PIV minus the camera lenses; it overcomes the depth-of-field problem of digital in-line holography and does not require the complex optical calibration of Tomo-PIV. The digital sensors can be oriented in an optimal manner to overcome the depth-of-field limitation of in-line holograms recorded using digital CCD or CMOS array cameras, resulting in a 3D reconstruction of the seed particles within the volume of interest, which can subsequently be analysed using 3D cross-correlation PIV analysis to yield a 3C-3D velocity field. A demonstration experiment of Tomo-HPIV using uniform translation with nominally 11 µm diameter seed particles shows that the 3D displacement derived from 3D cross-correlation Tomo-HPIV analysis can be measured within 5% of the imposed uniform translation, where the imposed uniform translation has an estimated standard uncertainty of 4.3%. In summary, this paper proposes a multi-camera digital holographic imaging 3C-3D PIV method, which is identified as tomographic digital holographic PIV or Tomo-HPIV
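
    As an illustration of the 3D cross-correlation analysis mentioned above, the sketch below estimates the integer-voxel displacement between two particle volumes from the peak of their FFT-based cross-correlation. The volume size, seeding density and imposed shift are synthetic assumptions; sub-voxel peak fitting and interrogation-window handling are omitted.

```python
import numpy as np

# Integer-voxel displacement between two reconstructed particle volumes from
# the peak of their FFT-based 3D cross-correlation (the core of 3C-3D PIV analysis).

def displacement_3d(vol_a, vol_b):
    """Integer shift of vol_b relative to vol_a via circular cross-correlation."""
    corr = np.real(np.fft.ifftn(np.conj(np.fft.fftn(vol_a)) * np.fft.fftn(vol_b)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peak indices to signed shifts in the range (-N/2, N/2].
    return tuple(int(p) - s if p > s // 2 else int(p) for p, s in zip(peak, corr.shape))

# Self-test: shift a sparse random "particle" volume by a known amount.
rng = np.random.default_rng(2)
vol_a = (rng.random((64, 64, 64)) > 0.995).astype(float)
true_shift = (3, -5, 2)
vol_b = np.roll(vol_a, true_shift, axis=(0, 1, 2))
assert displacement_3d(vol_a, vol_b) == true_shift
```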

  20. Algorithms for Fast Computing of the 3D-DCT Transform

    Directory of Open Access Journals (Sweden)

    S. Hanus

    2003-04-01

    Full Text Available The algorithm for video compression based on the Three-Dimensional Discrete Cosine Transform (3D-DCT) is presented. The original algorithm of the 3D-DCT has high time complexity. We propose several enhancements to the original algorithm and make the calculation of the DCT algorithm feasible for future real-time video compression.
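
    The separability that such fast 3D-DCT algorithms exploit can be shown in a few lines: the 3D transform of a block is the 1D DCT-II applied along each of the three axes. The sketch below is a plain NumPy illustration with an arbitrary 8 x 8 x 8 block, not the optimized algorithm proposed in the paper.

```python
import numpy as np

# Separability of the 3D-DCT: the transform of an N x N x N block is the 1D
# DCT-II applied along each of the three axes.
N = 8

def dct_matrix(n):
    """Orthonormal DCT-II matrix C, so that C @ x is the 1D DCT of a vector x."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

C = dct_matrix(N)

def dct3(block):
    """Forward 3D DCT-II: contract the 1D transform matrix over every axis."""
    return np.einsum("ai,bj,ck,ijk->abc", C, C, C, block)

def idct3(coeffs):
    """Inverse 3D DCT (C is orthogonal, so the inverse uses its transpose)."""
    return np.einsum("ai,bj,ck,abc->ijk", C, C, C, coeffs)

block = np.random.default_rng(3).random((N, N, N))
coeffs = dct3(block)
assert np.allclose(idct3(coeffs), block)      # perfect reconstruction
```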

  1. Low Complexity Connectivity Driven Dynamic Geometry Compression for 3D Tele-Immersion

    NARCIS (Netherlands)

    R.N. Mekuria (Rufael); D.C.A. Bulterman (Dick); P.S. Cesar Garcia (Pablo Santiago)

    2014-01-01

    Geometry based 3D Tele-Immersion is a novel emerging media application that involves on the fly reconstructed 3D mesh geometry. To enable real-time communication of such live reconstructed mesh geometry over a bandwidth limited link, fast dynamic geometry compression is needed. However,

  2. Segmentation of multiple heart cavities in 3-D transesophageal ultrasound images

    NARCIS (Netherlands)

    Haak, A.; Vegas-Sanchez-Ferrero, G.; Mulder, H.W.; Ren, B.; Kirisli, H.A.; Metz, C.; van Burken, G.; van Stralen, M.; Pluim, J.P.W.; Steen, van der A.F.W.; Walsum, van T.; Bosch, J.G.

    Three-dimensional transesophageal echocardiography (TEE) is an excellent modality for real-time visualization of the heart and monitoring of interventions. To improve the usability of 3-D TEE for intervention monitoring and catheter guidance, automated segmentation is desired. However, 3-D TEE

  3. Magma emplacement in 3D

    Science.gov (United States)

    Gorczyk, W.; Vogt, K.

    2017-12-01

    Magma intrusion is a major material transfer process in Earth's continental crust. Yet, the mechanical behavior of the intruding magma and its host are a matter of debate. In this study, we present a series of numerical thermo-mechanical experiments on mafic magma emplacement in 3D. In our model, we place the magmatic source region (40 km diameter) at the base of the mantle lithosphere and connect it to the crust by a 3 km wide channel, which may have evolved at early stages of magmatism during rapid ascent of hot magmatic fluids/melts. Our results demonstrate the continental crustal response to magma intrusion. We observe changes in intrusion geometry (dikes, cone-sheets, sills, plutons, ponds, funnels, finger-shaped and stock-like intrusions) as well as in injection time. The rheology and temperature of the host rock are the main controlling factors in the transition between these different modes of intrusion. Viscous deformation in the warm and deep crust favours host rock displacement, and magma pools along the crust-mantle boundary forming deep-seated plutons or magma ponds in the lower to middle crust. Brittle deformation in the cool and shallow crust induces cone-shaped fractures in the host rock and enables emplacement of finger- or stock-like intrusions at shallow or intermediate depth. A combination of viscous and brittle deformation forms funnel-shaped intrusions in the middle crust. Low-density source magma results in T-shaped intrusions in cross-section with magma sheets at the surface.

  4. Will 3D printers manufacture your meals?

    NARCIS (Netherlands)

    Bommel, K.J.C. van

    2013-01-01

    These days, 3D printers are laying down plastics, metals, resins, and other materials in whatever configurations creative people can dream up. But when the next 3D printing revolution comes, you'll be able to eat it.

  5. Eesti 3D jaoks kitsas / Virge Haavasalu

    Index Scriptorium Estoniae

    Haavasalu, Virge

    2009-01-01

    On the production company Digitaalne Sputnik: Kaur and Kaspar Kallas are engaged in film production and in the product development of 3D digital cameras (Silicon Imaging LLC). On the 3D camera of the Kallas brothers. With comments from Marge Liiske, director of the Estonian Film Foundation

  6. Network Support for Social 3-D Immersive Tele-Presence with Highly Realistic Natural and Synthetic Avatar Users

    NARCIS (Netherlands)

    R.N. Mekuria (Rufael); A. Frisiello (Antonella); M Pasin (Marco); P.S. Cesar Garcia (Pablo Santiago)

    2015-01-01

    The next generation in 3D tele-presence is based on modular systems that combine live captured object based 3D video and synthetically authored 3D graphics content. This paper presents the design, implementation and evaluation of a network solution for multi-party real-time communication

  7. VirtoScan - a mobile, low-cost photogrammetry setup for fast post-mortem 3D full-body documentations in x-ray computed tomography and autopsy suites.

    Science.gov (United States)

    Kottner, Sören; Ebert, Lars C; Ampanozi, Garyfalia; Braun, Marcel; Thali, Michael J; Gascho, Dominic

    2017-03-01

    Injuries such as bite marks or boot prints can leave distinct patterns on the body's surface and can be used for 3D reconstructions. Although various systems for 3D surface imaging have been introduced in the forensic field, most techniques are both cost-intensive and time-consuming. In this article, we present the VirtoScan, a mobile, multi-camera rig based on close-range photogrammetry. The system can be integrated into automated PMCT scanning procedures or used manually together with lifting carts, autopsy tables and examination couches. The VirtoScan is based on a moveable frame that carries 7 digital single-lens reflex cameras. A remote control is attached to each camera and allows the simultaneous triggering of the shutter release of all cameras. Data acquisition in combination with the PMCT scanning procedures took 3:34 min for the 3D surface documentation of one side of the body, compared to 20:20 min of acquisition time when using our in-house standard. A surface model comparison between the high-resolution output from our in-house standard and a high-resolution model from the multi-camera rig showed a mean surface deviation of 0.36 mm for the whole-body scan and 0.13 mm for a second comparison of a detailed section of the scan. The use of the multi-camera rig reduces the acquisition time for whole-body surface documentations in medico-legal examinations and provides a low-cost 3D surface scanning alternative for forensic investigations.
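
    The surface-model comparison reported above boils down to averaging point-to-surface distances. The sketch below shows one common way to approximate such a mean surface deviation using nearest-neighbour distances between two point clouds; the clouds, noise level and units are synthetic assumptions, and this is not the authors' evaluation software.

```python
import numpy as np
from scipy.spatial import cKDTree

# Approximate mean surface deviation: average the nearest-neighbour distance
# from each vertex of the test model to the reference model.

def mean_surface_deviation(test_points, reference_points):
    """Mean nearest-neighbour distance from test_points to reference_points."""
    distances, _ = cKDTree(reference_points).query(test_points)
    return distances.mean()

rng = np.random.default_rng(4)
reference = rng.random((20000, 3)) * 100.0                        # reference scan [mm]
test = reference + rng.normal(scale=0.3, size=reference.shape)    # noisy re-scan
print(f"mean surface deviation: {mean_surface_deviation(test, reference):.2f} mm")
```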

  8. 3D-Printed Millimeter Wave Structures

    Science.gov (United States)

    2016-03-14

    3D printing of styrene-butadiene-styrene (SBS) and styrene ethylene/butylene-styrene (SEBS) is used to demonstrate the feasibility of 3D-printed millimeter wave structures. The resolution of the printer is demonstrated with a 10 micron nozzle, and the loss tangents of SEBS and SBS samples were measured (Figure 2 of the source). Additionally, a dielectric lens is printed which improves the antenna gain of an open-ended WR-28 waveguide from 7 to 8.5 dBi. Keywords: 3D printing

  9. Digital Dentistry — 3D Printing Applications

    OpenAIRE

    Zaharia Cristian; Gabor Alin-Gabriel; Gavrilovici Andrei; Stan Adrian Tudor; Idorasi Laura; Sinescu Cosmin; Negruțiu Meda-Lavinia

    2017-01-01

    Three-dimensional (3D) printing is an additive manufacturing method in which a 3D item is formed by laying down successive layers of material. 3D printers are machines that produce representations of objects either planned with a CAD program or scanned with a 3D scanner. Printing is a method for replicating text and pictures, typically with ink on paper. We can print different dental pieces using different methods such as selective laser sintering (SLS), stereolithography, fused deposition mo...

  10. Detectors in 3D available for assessment

    CERN Document Server

    Re, Valerio

    2014-01-01

    This deliverable reports on 3D devices resulting from the vertical integration of pixel sensors and readout electronics. After 3D integration steps such as etching of through-silicon vias and backside metallization of readout integrated circuits, ASICs and sensors are interconnected to form a 3D pixel detector. Various 3D detectors have been devised in AIDA WP3 and their status and performance is assessed here.

  11. 3D modelling for multipurpose cadastre

    NARCIS (Netherlands)

    Abduhl Rahman, A.; Van Oosterom, P.J.M.; Hua, T.C.; Sharkawi, K.H.; Duncan, E.E.; Azri, N.; Hassan, M.I.

    2012-01-01

    Three-dimensional (3D) modelling of cadastral objects (such as legal spaces around buildings, around utility networks and other spaces) is one of the important aspects for a multipurpose cadastre (MPC). This paper describes the 3D modelling of the objects for MPC and its usage to the knowledge of 3D

  12. Expanding Geometry Understanding with 3D Printing

    Science.gov (United States)

    Cochran, Jill A.; Cochran, Zane; Laney, Kendra; Dean, Mandi

    2016-01-01

    With the rise of personal desktop 3D printing, a wide spectrum of educational opportunities has become available for educators to leverage this technology in their classrooms. Until recently, the ability to create physical 3D models was well beyond the scope, skill, and budget of many schools. However, since desktop 3D printers have become readily…

  13. 3D Characterization of Recrystallization Boundaries

    DEFF Research Database (Denmark)

    Zhang, Yubin; Godfrey, Andrew William; MacDonald, A. Nicole

    2016-01-01

    A three-dimensional (3D) volume containing a recrystallizing grain and a deformed matrix in a partially recrystallized pure aluminum was characterized using the 3D electron backscattering diffraction technique. The 3D shape of a recrystallizing boundary, separating the recrystallizing grain...... on the formation of protrusions/retrusions....

  14. 3D-Printable Antimicrobial Composite Resins

    NARCIS (Netherlands)

    Yue, Jun; Zhao, Pei; Gerasimov, Jennifer Y.; van de Lagemaat, Marieke; Grotenhuis, Arjen; Rustema-Abbing, Minie; van der Mei, Henny C.; Busscher, Henk J.; Herrmann, Andreas; Ren, Yijin

    2015-01-01

    3D printing is seen as a game-changing manufacturing process in many domains, including general medicine and dentistry, but the integration of more complex functions into 3D-printed materials remains lacking. Here, it is expanded on the repertoire of 3D-printable materials to include antimicrobial

  15. 3D Mapping for Urban and Regional Planning

    DEFF Research Database (Denmark)

    Bodum, Lars

    2002-01-01

    The process of mapping in 3D for urban and regional planning purposes is not an uncomplicated matter. It involves both the construction of a new data-model and new routines for the geometric modeling of the physical objects. This is due to the fact that most of the documentation until now has been...... registered and georeferenced to the 2D plan. This paper will outline a new method for 3D mapping where new LIDAR (laser-scanning) technology and additional 2D maps with attributes will be combined to create a 3D map of an urban area. The 3D map will afterwards be used in a real-time simulation system (also...... known as Virtual Reality system) for urban and regional planning purposes. This initiative will be implemented in a specific geographic region (North Jutland County in Denmark) by a new research centre at Aalborg University called Centre for 3D GeoInformation. The key question for this research team...

  16. VPython: Python plus Animations in Stereo 3D

    Science.gov (United States)

    Sherwood, Bruce

    2004-03-01

    Python is a modern object-oriented programming language. VPython (http://vpython.org) is a combination of Python (http://python.org), the Numeric module from LLNL (http://www.pfdubois.com/numpy), and the Visual module created by David Scherer, all of which have been under continuous development as open source projects. VPython makes it easy to write programs that generate real-time, navigable 3D animations. The Visual module includes a set of 3D objects (sphere, cylinder, arrow, etc.), tools for creating other shapes, and support for vector algebra. The 3D renderer runs in a parallel thread, and animations are produced as a side effect of computations, freeing the programmer to concentrate on the physics. Applications include educational and research visualization. In the Fall of 2003 Hugh Fisher at the Australian National University, John Zelle at Wartburg College, and I contributed to a new stereo capability of VPython. By adding a single statement to an existing VPython program, animations can be viewed in true stereo 3D. One can choose several modes: active shutter glasses, passive polarized glasses, or colored glasses (e.g. red-cyan). The talk will demonstrate the new stereo capability and discuss the pros and cons of various schemes for display of stereo 3D for a large audience. Supported in part by NSF grant DUE-0237132.
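
    As a concrete illustration of the kind of program described above, here is a minimal sketch written against the present-day `vpython` package (the successor of the original Visual module); the bouncing-ball scene and its parameters are invented for illustration, and the stereo display modes mentioned in the abstract are not shown.

```python
# Minimal VPython sketch: a navigable, real-time 3D animation produced as a
# side effect of a physics computation (a ball bouncing under gravity).
from vpython import sphere, box, vector, rate, color

floor = box(pos=vector(0, -1, 0), size=vector(10, 0.2, 10), color=color.green)
ball = sphere(pos=vector(0, 5, 0), radius=0.5, color=color.red)

velocity = vector(1, 0, 0)      # initial velocity, m/s (arbitrary)
g = vector(0, -9.8, 0)          # gravitational acceleration, m/s^2
dt = 0.01                       # time step, s

while True:
    rate(100)                   # cap the loop at 100 iterations per second
    velocity = velocity + g * dt
    ball.pos = ball.pos + velocity * dt
    if ball.pos.y < floor.pos.y + ball.radius:   # bounce off the floor
        velocity.y = -velocity.y
```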

  17. Ray-based approach to integrated 3D visual communication

    Science.gov (United States)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media, instead of existing 2D image media. In order to comprehensively deal with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. Then, the following discussions are concentrated on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This is a simple and straightforward approach to the problem of how to represent 3D space, which is an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including some efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of a virtual object surface for the compression of the tremendous amount of data, and a light ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.

  18. Advanced 3D Sensing and Visualization System for Unattended Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Carlson, J.J.; Little, C.Q.; Nelson, C.L.

    1999-01-01

    The purpose of this project was to create a reliable, 3D sensing and visualization system for unattended monitoring. The system provides benefits for several of Sandia's initiatives including nonproliferation, treaty verification, national security and critical infrastructure surety. The robust qualities of the system make it suitable for both interior and exterior monitoring applications. The 3D sensing system combines two existing sensor technologies in a new way to continuously maintain accurate 3D models of both static and dynamic components of monitored areas (e.g., portions of buildings, roads, and secured perimeters in addition to real-time estimates of the shape, location, and motion of humans and moving objects). A key strength of this system is the ability to monitor simultaneous activities on a continuous basis, such as several humans working independently within a controlled workspace, while also detecting unauthorized entry into the workspace. Data from the sensing system is used to identify activities or conditions that can signify potential surety (safety, security, and reliability) threats. The system could alert a security operator of potential threats or could be used to cue other detection, inspection or warning systems. An interactive, Web-based, 3D visualization capability was also developed using the Virtual Reality Modeling Language (VRML). The interface allows remote, interactive inspection of a monitored area (via the Internet or satellite links) using a 3D computer model of the area that is rendered from actual sensor data.

  19. Light-driven micro-robotics with holographic 3D tracking

    DEFF Research Database (Denmark)

    Glückstad, Jesper

    2016-01-01

    We recently pioneered the concept of light-driven micro-robotics including the new and disruptive 3D-printed micro-tools coined Wave-guided Optical Waveguides that can be real-time optically trapped and “remote-controlled” in a volume with six-degrees-of-freedom. Exploring the full potential...... of this new drone-like 3D light robotics approach in challenging microscopic geometries requires a versatile and real-time reconfigurable light coupling that can dynamically track a plurality of “light robots” in 3D to ensure continuous optimal light coupling on the fly. Our latest developments in this new...

  20. View-based 3-D object retrieval

    CERN Document Server

    Gao, Yue

    2014-01-01

    Content-based 3-D object retrieval has attracted extensive attention recently and has applications in a variety of fields, such as computer-aided design, tele-medicine, mobile multimedia, virtual reality, and entertainment. The development of efficient and effective content-based 3-D object retrieval techniques has enabled the use of fast 3-D reconstruction and model design. Recent technical progress, such as the development of camera technologies, has made it possible to capture the views of 3-D objects. As a result, view-based 3-D object retrieval has become an essential but challenging res

  1. Wafer level 3-D ICs process technology

    CERN Document Server

    Tan, Chuan Seng; Reif, L Rafael

    2009-01-01

    This book focuses on foundry-based process technology that enables the fabrication of 3-D ICs. The core of the book discusses the technology platform for pre-packaging wafer-level 3-D ICs. However, this book does not include a detailed discussion of 3-D ICs design and 3-D packaging. This is an edited book based on chapters contributed by various experts in the field of wafer-level 3-D ICs process technology. They are from academia, research labs and industry.

  2. 3D Printing of Fluid Flow Structures

    OpenAIRE

    Taira, Kunihiko; Sun, Yiyang; Canuto, Daniel

    2017-01-01

    We discuss the use of 3D printing to physically visualize (materialize) fluid flow structures. Such 3D models can serve as a refreshing hands-on means to gain deeper physical insights into the formation of complex coherent structures in fluid flows. In this short paper, we present a general procedure for taking 3D flow field data and producing a file format that can be supplied to a 3D printer, with two examples of 3D printed flow structures. A sample code to perform this process is also prov...

  3. The Esri 3D city information model

    International Nuclear Information System (INIS)

    Reitz, T; Schubiger-Banz, S

    2014-01-01

    With residential and commercial space becoming increasingly scarce, cities are going vertical. Managing the urban environments in 3D is an increasingly important and complex undertaking. To help solving this problem, Esri has released the ArcGIS for 3D Cities solution. The ArcGIS for 3D Cities solution provides the information model, tools and apps for creating, analyzing and maintaining a 3D city using the ArcGIS platform. This paper presents an overview of the 3D City Information Model and some sample use cases

  4. Quantitative 3-D imaging topogrammetry for telemedicine applications

    Science.gov (United States)

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to the serious considerations of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival data base of 'normal' shapes. The ability to generate 'topogrames' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D data base, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' to work as a surgeon's tireless assistants becomes imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with

  5. Case study: Beauty and the Beast 3D: benefits of 3D viewing for 2D to 3D conversion

    Science.gov (United States)

    Handy Turner, Tara

    2010-02-01

    From the earliest stages of the Beauty and the Beast 3D conversion project, the advantages of accurate desk-side 3D viewing was evident. While designing and testing the 2D to 3D conversion process, the engineering team at Walt Disney Animation Studios proposed a 3D viewing configuration that not only allowed artists to "compose" stereoscopic 3D but also improved efficiency by allowing artists to instantly detect which image features were essential to the stereoscopic appeal of a shot and which features had minimal or even negative impact. At a time when few commercial 3D monitors were available and few software packages provided 3D desk-side output, the team designed their own prototype devices and collaborated with vendors to create a "3D composing" workstation. This paper outlines the display technologies explored, final choices made for Beauty and the Beast 3D, wish-lists for future development and a few rules of thumb for composing compelling 2D to 3D conversions.

  6. RELAP5-3D User Problems

    Energy Technology Data Exchange (ETDEWEB)

    Riemke, Richard Allan

    2002-09-01

    The Reactor Excursion and Leak Analysis Program with 3D capability (RELAP5-3D) [1] is a reactor system analysis code that has been developed at the Idaho National Engineering and Environmental Laboratory (INEEL) for the U. S. Department of Energy (DOE). The 3D capability in RELAP5-3D includes 3D hydrodynamics [2] and 3D neutron kinetics [3,4]. Assessment, verification, and validation of the 3D capability in RELAP5-3D is discussed in the literature [5-10]. Additional assessment, verification, and validation of the 3D capability of RELAP5-3D will be presented in other papers in this users seminar. As with any software, user problems occur. User problems usually fall into the categories of input processing failure, code execution failure, restart/renodalization failure, unphysical result, and installation. This presentation will discuss some of the more generic user problems that have been reported on RELAP5-3D as well as their resolution.

  7. 3D vision system for intelligent milking robot automation

    Science.gov (United States)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for estimating approximate teat positions. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. the development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information. The obtained results permit the computation of the 3D teat positions. This information is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The obtained results, with both TOF and RGBD cameras, show the good performance of the proposed system. The best performance was obtained with RGBD cameras. This latter technology will be used in future real-life experimental tests.

  8. 3DSEM: A 3D microscopy dataset

    Directory of Open Access Journals (Sweden)

    Ahmad P. Tafti

    2016-03-01

    The Scanning Electron Microscope (SEM), as a 2D imaging instrument, has been widely used in many scientific disciplines including the biological, mechanical, and materials sciences to determine the surface attributes of microscopic objects. However, the SEM micrographs still remain 2D images. To effectively measure and visualize the surface properties, we need to truly restore the 3D shape model from 2D SEM images. Having 3D surfaces would provide the anatomic shape of micro-samples, which allows for quantitative measurements and informative visualization of the specimens being investigated. The 3DSEM is a dataset for 3D microscopy vision which is freely available at [1] for any academic, educational, and research purposes. The dataset includes both 2D images and 3D reconstructed surfaces of several real microscopic samples. Keywords: 3D microscopy dataset, 3D microscopy vision, 3D SEM surface reconstruction, Scanning Electron Microscope (SEM)

  9. Multiview 3D sensing and analysis for high quality point cloud reconstruction

    Science.gov (United States)

    Satnik, Andrej; Izquierdo, Ebroul; Orjesek, Richard

    2018-04-01

    Multiview 3D reconstruction techniques enable digital reconstruction of 3D objects from the real world by fusing different viewpoints of the same object into a single 3D representation. This process is by no means trivial and the acquisition of high quality point cloud representations of dynamic 3D objects is still an open problem. In this paper, an approach for high fidelity 3D point cloud generation using low cost 3D sensing hardware is presented. The proposed approach runs in an efficient low-cost hardware setting based on several Kinect v2 scanners connected to a single PC. It performs autocalibration and runs in real-time exploiting an efficient composition of several filtering methods including Radius Outlier Removal (ROR), Weighted Median filter (WM) and Weighted Inter-Frame Average filtering (WIFA). The performance of the proposed method has been demonstrated through efficient acquisition of dense 3D point clouds of moving objects.
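
    One of the filtering stages named above, Radius Outlier Removal (ROR), is easy to illustrate in isolation. The sketch below is a generic re-implementation for clarity, not the authors' code; the radius and neighbour thresholds are arbitrary assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def radius_outlier_removal(points: np.ndarray, radius: float = 0.05,
                           min_neighbors: int = 5) -> np.ndarray:
    """Keep only points with at least `min_neighbors` other points within
    `radius` (metres). `points` is an (N, 3) array."""
    tree = cKDTree(points)
    neighbor_lists = tree.query_ball_point(points, r=radius)
    counts = np.array([len(lst) for lst in neighbor_lists])
    return points[counts >= min_neighbors + 1]   # +1 because each point finds itself

# Hypothetical usage on a noisy fused cloud:
cloud = np.random.default_rng(1).uniform(0.0, 1.0, size=(10_000, 3))
clean = radius_outlier_removal(cloud, radius=0.05, min_neighbors=5)
print(f"kept {len(clean)} of {len(cloud)} points")
```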

  10. CAD-based intelligent robot system integrated with 3D scanning for shoe roughing and cementing

    Directory of Open Access Journals (Sweden)

    Chiu Cheng-Chang

    2017-01-01

    Roughing and cementing are essential to the process of bonding shoe uppers to the corresponding soles; however, for shoes with complicated designs, such as sport shoes, roughing and cementing have largely relied on manual operation. Recently, the shoe industry has been progressing to 3D design, so 3D models of the shoe upper and sole are created before launching into mass production. Taking advantage of these 3D models, this study developed a plug-in program on the Rhino 3D CAD platform which performs the complicated roughing and cementing route planning, integrates real-time 3D scanning information to compensate the planned route, and then converts it to a working trajectory of a robot arm to carry out roughing and cementing. The proposed 3D CAD-based intelligent robot arm system integrated with 3D scanning for shoe roughing and cementing was realized and proved to be feasible.

  11. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult for some common 3D display software, such as MeshLab, to achieve real-time display of and interaction with large-scale 3D models. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal/external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.
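
    The heart of an out-of-core, view-dependent multi-resolution scheme is a per-node test of whether the node's geometric error, projected to the screen, is already below a pixel tolerance. The sketch below shows only that selection step; the `LODNode` structure and the thresholds are hypothetical and are not taken from the paper.

```python
import math
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class LODNode:
    center: Tuple[float, float, float]   # bounding-sphere centre of the node
    geometric_error: float               # object-space error of its simplified mesh
    children: List["LODNode"] = field(default_factory=list)

def screen_space_error(node: LODNode, eye: Tuple[float, float, float],
                       fov_y: float, viewport_h: int) -> float:
    """Project the node's geometric error to pixels for the current view."""
    dist = math.dist(node.center, eye)
    if dist <= 0.0:
        return float("inf")
    # pixels per world unit at that distance, symmetric perspective projection
    pixels_per_unit = viewport_h / (2.0 * dist * math.tan(fov_y / 2.0))
    return node.geometric_error * pixels_per_unit

def select_lod(node: LODNode, eye, fov_y: float, viewport_h: int,
               tolerance_px: float, out: List[LODNode]) -> None:
    """Collect the coarsest nodes whose projected error is within tolerance."""
    if not node.children or screen_space_error(node, eye, fov_y, viewport_h) <= tolerance_px:
        out.append(node)                 # render this node's geometry as-is
    else:
        for child in node.children:      # otherwise refine into the children
            select_lod(child, eye, fov_y, viewport_h, tolerance_px, out)

# Hypothetical usage: a root node with two finer children.
root = LODNode((0, 0, 0), geometric_error=1.0,
               children=[LODNode((-5, 0, 0), 0.1), LODNode((5, 0, 0), 0.1)])
visible: List[LODNode] = []
select_lod(root, eye=(0, 0, 50), fov_y=math.radians(60), viewport_h=1080,
           tolerance_px=2.0, out=visible)
print(len(visible), "node(s) selected for rendering")
```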

  12. Research on 3D power distribution of PWR reactor core based on RBF neural network

    International Nuclear Information System (INIS)

    Xia Hong; Li Bin; Liu Jianxin

    2014-01-01

    Real-time monitoring of the 3D power distribution is critical to nuclear safety, to the efficient operation of a nuclear power plant, and to control system optimization. A method is proposed to set up a real-time monitoring system for the 3D power distribution using the ex-core neutron detection system and an RBF neural network, improving the timeliness of the monitoring results and reducing the fitting error of the 3D power distribution. A series of experiments was run on a 300 MW PWR simulation system. The results demonstrate that the new monitoring system works well over a certain burnup range of the fuel cycle and reconstructs the real-time 3D distribution of the reactor core power. The accuracy of the model is improved effectively with the help of several methods. (authors)
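
    At its core, the mapping from ex-core detector readings to a 3D power field is an RBF regression. The sketch below shows a generic Gaussian-RBF network fitted by linear least squares; the detector and power arrays are random placeholders, and the setup is only an assumption about how such a monitor could be trained, not the authors' model.

```python
import numpy as np

class GaussianRBFNetwork:
    """Minimal RBF regressor: y ~ sum_j w_j * exp(-||x - c_j||^2 / (2*sigma^2))."""

    def __init__(self, centers: np.ndarray, sigma: float):
        self.centers = centers          # (M, d) RBF centres
        self.sigma = sigma
        self.weights = None             # (M, k) output weights, set by fit()

    def _design(self, x: np.ndarray) -> np.ndarray:
        d2 = ((x[:, None, :] - self.centers[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2.0 * self.sigma ** 2))            # (N, M) design matrix

    def fit(self, x: np.ndarray, y: np.ndarray) -> "GaussianRBFNetwork":
        phi = self._design(x)
        self.weights, *_ = np.linalg.lstsq(phi, y, rcond=None)  # linear least squares
        return self

    def predict(self, x: np.ndarray) -> np.ndarray:
        return self._design(x) @ self.weights

# Hypothetical example: 8 ex-core detector signals -> power on a 10x10x20 node grid.
rng = np.random.default_rng(42)
detectors = rng.uniform(size=(200, 8))                 # simulated detector readings
power = rng.uniform(size=(200, 10 * 10 * 20))          # matching "true" 3D power maps
centers = detectors[rng.choice(len(detectors), 30, replace=False)]
model = GaussianRBFNetwork(centers, sigma=0.5).fit(detectors, power)
reconstruction = model.predict(detectors[:1]).reshape(10, 10, 20)
```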

  13. 3D Structure of Tillage Soils

    Science.gov (United States)

    González-Torre, Iván; Losada, Juan Carlos; Falconer, Ruth; Hapca, Simona; Tarquis, Ana M.

    2015-04-01

    Soil structure may be defined as the spatial arrangement of soil particles, aggregates and pores. The geometry of each one of these elements, as well as their spatial arrangement, has a great influence on the transport of fluids and solutes through the soil. Fractal/multifractal methods have been increasingly applied to quantify soil structure thanks to the advances in computer technology (Tarquis et al., 2003). There is no doubt that computed tomography (CT) has provided an alternative for observing intact soil structure. These CT techniques reduce the physical impact of sampling, providing three-dimensional (3D) information and allowing rapid scanning to study sample dynamics in near real-time (Houston et al., 2013a). However, several authors have dedicated attention to the appropriate pore-solid CT threshold (Elliot and Heck, 2007; Houston et al., 2013b) and the best method for estimating the multifractal parameters (Grau et al., 2006; Tarquis et al., 2009). The aim of the present study is to evaluate the effect of the algorithm applied in the multifractal method (box counting and box gliding) and the cube size on the calculation of generalized fractal dimensions (Dq) in grey images without applying any threshold. To this end, soil samples were extracted from different areas plowed with three tools (moldboard, chisel and plow). Soil samples for each of the tillage treatments were packed into polypropylene cylinders of 8 cm diameter and 10 cm height. These were imaged using an mSIMCT at 155 keV and 25 mA. An aluminium filter (0.25 mm) was applied to reduce beam hardening, and later several corrections were applied during reconstruction. References: Elliot, T.R. and Heck, R.J., 2007. A comparison of 2D and 3D thresholding of CT imagery. Can. J. Soil Sci., 87(4), 405-412. Grau, J., Méndez, V., Tarquis, A.M., Saa, A. and Díaz, M.C., 2006. Comparison of gliding box and box-counting methods in soil image analysis. Geoderma, 134, 349-359. González-Torres, Iván. Theory and
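
    For readers unfamiliar with the generalized dimensions Dq mentioned above, the sketch below estimates them for a 3D grey-level array with the fixed (non-gliding) box-counting algorithm, treating voxel grey values as a measure so that no threshold is needed; it mirrors the idea described in the abstract but is not the code used in the study.

```python
import numpy as np

def generalized_dimensions(volume: np.ndarray, box_sizes, q_values):
    """Estimate D_q of a 3D grey-level array by fixed box counting.

    Grey values are normalised into a probability measure; for each box size s
    the quantity log(sum_i p_i(s)^q) / (q - 1) is computed and D_q is the slope
    of that quantity versus log(s) (q = 1 is handled via the entropy limit).
    """
    measure = volume.astype(float)
    measure /= measure.sum()
    results = {}
    for q in q_values:
        xs, ys = [], []
        for s in box_sizes:
            nz, ny, nx = (dim // s for dim in measure.shape)
            trimmed = measure[:nz * s, :ny * s, :nx * s]
            # sum the measure inside each s x s x s box
            p = trimmed.reshape(nz, s, ny, s, nx, s).sum(axis=(1, 3, 5)).ravel()
            p = p[p > 0]
            if abs(q - 1.0) < 1e-9:
                ys.append((p * np.log(p)).sum())          # entropy form for q = 1
            else:
                ys.append(np.log((p ** q).sum()) / (q - 1.0))
            xs.append(np.log(s))
        slope, _ = np.polyfit(xs, ys, 1)                  # D_q is the fitted slope
        results[q] = slope
    return results

# Hypothetical usage on a random 64^3 "soil" volume:
vol = np.random.default_rng(7).random((64, 64, 64))
print(generalized_dimensions(vol, box_sizes=[2, 4, 8, 16], q_values=[0, 1, 2]))
```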

  14. GPU-accelerated 3-D model-based tracking

    International Nuclear Information System (INIS)

    Brown, J Anthony; Capson, David W

    2010-01-01

    Model-based approaches to tracking the pose of a 3-D object in video are effective but computationally demanding. While statistical estimation techniques, such as the particle filter, are often employed to minimize the search space, real-time performance remains unachievable on current generation CPUs. Recent advances in graphics processing units (GPUs) have brought massively parallel computational power to the desktop environment and powerful developer tools, such as NVIDIA Compute Unified Device Architecture (CUDA), have provided programmers with a mechanism to exploit it. NVIDIA GPUs' single-instruction multiple-thread (SIMT) programming model is well-suited to many computer vision tasks, particularly model-based tracking, which requires several hundred 3-D model poses to be dynamically configured, rendered, and evaluated against each frame in the video sequence. Using 6 degree-of-freedom (DOF) rigid hand tracking as an example application, this work harnesses consumer-grade GPUs to achieve real-time, 3-D model-based, markerless object tracking in monocular video.

  15. 3-D OBJECT RECOGNITION FROM POINT CLOUD DATA

    Directory of Open Access Journals (Sweden)

    W. Smith

    2012-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex

  16. 3-D Object Recognition from Point Cloud Data

    Science.gov (United States)

    Smith, W.; Walker, A. S.; Zhang, B.

    2011-09-01

    The market for real-time 3-D mapping includes not only traditional geospatial applications but also navigation of unmanned autonomous vehicles (UAVs). Massively parallel processes such as graphics processing unit (GPU) computing make real-time 3-D object recognition and mapping achievable. Geospatial technologies such as digital photogrammetry and GIS offer advanced capabilities to produce 2-D and 3-D static maps using UAV data. The goal is to develop real-time UAV navigation through increased automation. It is challenging for a computer to identify a 3-D object such as a car, a tree or a house, yet automatic 3-D object recognition is essential to increasing the productivity of geospatial data such as 3-D city site models. In the past three decades, researchers have used radiometric properties to identify objects in digital imagery with limited success, because these properties vary considerably from image to image. Consequently, our team has developed software that recognizes certain types of 3-D objects within 3-D point clouds. Although our software is developed for modeling, simulation and visualization, it has the potential to be valuable in robotics and UAV applications. The locations and shapes of 3-D objects such as buildings and trees are easily recognizable by a human from a brief glance at a representation of a point cloud such as terrain-shaded relief. The algorithms to extract these objects have been developed and require only the point cloud and minimal human inputs such as a set of limits on building size and a request to turn on a squaring option. The algorithms use both digital surface model (DSM) and digital elevation model (DEM), so software has also been developed to derive the latter from the former. The process continues through the following steps: identify and group 3-D object points into regions; separate buildings and houses from trees; trace region boundaries; regularize and simplify boundary polygons; construct complex roofs. Several case
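
    A common first step in the pipeline sketched above is separating above-ground objects from bare earth by thresholding the normalized surface model nDSM = DSM - DEM. The snippet below shows that step only, with hypothetical grids and height limits; it is not the authors' software.

```python
import numpy as np

def candidate_object_mask(dsm: np.ndarray, dem: np.ndarray,
                          min_height: float = 2.0, max_height: float = 40.0) -> np.ndarray:
    """Return a boolean mask of raster cells likely to belong to 3-D objects
    (buildings, trees) rather than bare terrain.

    dsm, dem: 2-D elevation arrays on the same grid (metres).
    min_height / max_height: crude limits on plausible object heights.
    """
    ndsm = dsm - dem                       # normalized surface model: height above ground
    return (ndsm >= min_height) & (ndsm <= max_height)

# Hypothetical usage: flat terrain with one 10 m tall "building" block.
dem = np.zeros((100, 100))
dsm = dem.copy()
dsm[40:60, 40:60] = 10.0
mask = candidate_object_mask(dsm, dem)
print(f"{mask.sum()} candidate cells out of {mask.size}")
```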

  17. Identification of the transition arrays 3d⁷4s-3d⁷4p in Br X and 3d⁶4s-3d⁶4p in Br XI

    International Nuclear Information System (INIS)

    Zeng, X.T.; Jupen, C.; Bengtsson, P.; Engstroem, L.; Westerlind, M.; Martinson, I.

    1991-01-01

    We report a beam-foil study of multiply ionized bromine in the region 400-1300 Å, performed with 6 and 8 MeV Br ions from a tandem accelerator. At these energies transitions belonging to Fe-like Br X and Mn-like Br XI are expected to be prominent. We have identified 31 lines as 3d⁷4s-3d⁷4p transitions in Br X, from which 16 levels of the previously unknown 3d⁷4s configuration could be established. We have also added 6 new 3d⁷4p levels to the 99 previously known. For Br XI we have classified 9 lines as 3d⁶4s-3d⁶4p combinations. The line identifications have been corroborated by isoelectronic comparisons and theoretical calculations using the superposition-of-configurations technique. (orig.)

  18. 3D PHOTOGRAPHS IN CULTURAL HERITAGE

    Directory of Open Access Journals (Sweden)

    W. Schuhr

    2013-07-01

    This paper on providing "oo-information" (objective object-information) on cultural monuments and sites, based on 3D photographs, is also a contribution of CIPA task group 3 to the 2013 CIPA Symposium in Strasbourg. To stimulate interest in 3D photography among scientists as well as amateurs, 3D masterpieces are presented. It is shown by example that, due to their high documentary value ("near reality"), 3D photographs support, e.g., the recording, visualization, interpretation, preservation and restoration of architectural and archaeological objects. This also includes samples of excavation documentation, 3D coordinate calculation, and 3D photographs applied for virtual museum purposes and as educational tools, as well as for spatial structure enhancement, which particularly holds for inscriptions and rock art. This paper is also an invitation to participate in a systematic survey of existing international archives of 3D photographs; in this respect, first results towards defining an optimum digitization rate for analog stereo views are also reported. It is more than overdue that, in addition to access to international archives of 3D photography, the available 3D photography data should appear in a global GIS (cloud) system, such as, e.g., Google Earth. This contribution also deals with exposing new 3D photographs to document monuments of importance for cultural heritage, including the use of 3D and single-lens cameras on a 10 m telescopic staff for extremely low, earth-based "airborne" 3D photography, as well as for "underwater staff photography". The use of captive balloon and drone platforms for 3D photography in cultural heritage is also reported. It should be emphasized that the still underestimated 3D effect on real objects even allows, e.g., the spatial perception of extremely small scratches as well as of nuances in

  19. 3D Systems” ‘Stuck in the Middle’ of the 3D Printer Boom?

    NARCIS (Netherlands)

    A. Hoffmann (Alan)

    2014-01-01

    3D Systems, the pioneer of 3D printing, predicted a future where "kids from 8 to 80" could design and print their ideas at home. By 2013, 9 years after the creation of the first working 3D printer, there were more than 30 major 3D printing companies competing for market share. 3DS and

  20. 3D Scientific Visualization with Blender

    Science.gov (United States)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender (an open source visualization suite widely used in the entertainment and gaming industries) for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  1. Remote Collaborative 3D Printing - Process Investigation

    Science.gov (United States)

    2016-04-01

    Report by Cody M. Reese, PE. Approved for public release; distribution is unlimited. The Remote Collaborative 3D Printing project is a collaboration between

  2. Microfabricating 3D Structures by Laser Origami

    Science.gov (United States)

    2011-11-09

    DOI: 10.1117/2.1201111.003952. Alberto Piqué, Scott Mathews, Andrew Birnbaum, and Nicholas Charipar. A new...folding known as origami allows the transformation of flat patterns into 3D shapes. A similar approach can be used to generate 3D structures com... geometries. The overarching challenge is to move away from traditional planar semiconductor photolithographic techniques, which severely limit the type of

  3. 3D Scientific Visualization with Blender

    Science.gov (United States)

    Kent, Brian R.

    2015-03-01

    This is the first book written on using Blender for scientific visualization. It is a practical and interesting introduction to Blender for understanding key parts of 3D rendering and animation that pertain to the sciences via step-by-step guided tutorials. 3D Scientific Visualization with Blender takes you through an understanding of 3D graphics and modelling for different visualization scenarios in the physical sciences.

  4. 3D images and expert system

    International Nuclear Information System (INIS)

    Hasegawa, Jun-ichi

    1998-01-01

    This paper presents an expert system called 3D-IMPRESS for supporting applications of three dimensional (3D) image processing. This system can automatically construct a 3D image processing procedure based on a pictorial example of the goal given by a user. In the paper, to evaluate the performance of the system, it was applied to construction of procedures for extracting specific component figures from practical chest X-ray CT images. (author)

  5. ERP system for 3D printing industry

    Directory of Open Access Journals (Sweden)

    Deaky Bogdan

    2017-01-01

    GOCREATE is an original cloud-based production management and optimization service which helps 3D printing service providers use their resources better. The proposed Enterprise Resource Planning system can significantly increase income through improved productivity. With GOCREATE, 3D printing service providers get much higher production efficiency at a much lower licensing cost, increasing their competitiveness in the fast-growing 3D printing market.

  6. Perspectives on Materials Science in 3D

    DEFF Research Database (Denmark)

    Juul Jensen, Dorte

    2012-01-01

    Materials characterization in 3D has opened a new era in materials science, which is discussed in this paper. The original motivations and visions behind the development of one of the new 3D techniques, namely the three dimensional x-ray diffraction (3DXRD) method, are presented and the route...... to its implementation is described. The present status of materials science in 3D is illustrated by examples related to recrystallization. Finally, challenges and suggestions for the future success for 3D Materials Science relating to hardware evolution, data analysis, data exchange and modeling...

  7. Getting started in 3D with Maya

    CERN Document Server

    Watkins, Adam

    2012-01-01

    Deliver professional-level 3D content in no time with this comprehensive guide to 3D animation with Maya. With over 12 years of training experience, plus several award winning students under his belt, author Adam Watkins is the ideal mentor to get you up to speed with 3D in Maya. Using a structured and pragmatic approach Getting Started in 3D with Maya begins with basic theory of fundamental techniques, then builds on this knowledge using practical examples and projects to put your new skills to the test. Prepared so that you can learn in an organic fashion, each chapter builds on the know

  8. Illustrating Mathematics using 3D Printers

    OpenAIRE

    Knill, Oliver; Slavkovsky, Elizabeth

    2013-01-01

    3D printing technology can help to visualize proofs in mathematics. In this document we aim to illustrate how 3D printing can help to visualize concepts and mathematical proofs. As already known to educators in ancient Greece, models help bring mathematics closer to the public. The new 3D printing technology makes the realization of such tools more accessible than ever. This is an updated version of a paper included in the book Low-Cost 3D Printing for Science, Education and Sustainable Devel...

  9. A 3d game in python

    OpenAIRE

    Xu, Minghui

    2014-01-01

    3D games have been widely accepted and loved by many game players, and more and more kinds of 3D games are being developed to meet players' needs. Nowadays the most common programming language for 3D game development is C++. Python is a high-level scripting language; it is simple and clear, and its concise syntax can speed up the development cycle. This project was to develop a 3D game using only Python. The game is about how a cat lives in the street. In order to live, the player need...

  10. Dimensional accuracy of 3D printed vertebra

    Science.gov (United States)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer-grade 3D printer using an additive print process with PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.

  11. A Sketching Interface for Freeform 3D Modeling

    Science.gov (United States)

    Igarashi, Takeo

    This chapter introduces Teddy, a sketch-based modeling system to quickly and easily design freeform models such as stuffed animals and other rotund objects. The user draws several 2D freeform strokes interactively on the screen and the system automatically constructs plausible 3D polygonal surfaces. Our system supports several modeling operations, including the operation to construct a 3D polygonal surface from a 2D silhouette drawn by the user: it inflates the region surrounded by the silhouette making a wide area fat, and a narrow area thin. Teddy, our prototype system, is implemented as a Java program, and the mesh construction is done in real-time on a standard PC. Our informal user study showed that a first-time user masters the operations within 10 minutes, and can construct interesting 3D models within minutes. We also report the result of a case study where a high school teacher taught various 3D concepts in geography using the system.

  12. 3-D Velocity Estimation for Two Planes in vivo

    DEFF Research Database (Denmark)

    Holbek, Simon; Pihl, Michael Johannes; Ewertsen, Caroline

    2014-01-01

    3-D velocity vectors can provide additional flow information applicable for diagnosing cardiovascular diseases, e.g. by estimating the out-of-plane velocity component. A 3-D version of the Transverse Oscillation (TO) method has previously been used to obtain this information in a carotid flow ... and stored on the experimental scanner SARUS. The full 3-D velocity profile can be created and examined at peak-systole and end-diastole without ECG gating in two planes. Maximum out-of-plane velocities for the three peak-systoles and end-diastoles were 68.5 ± 5.1 cm/s and 26.3 ± 3.3 cm/s, respectively. ... In the longitudinal plane, the average maximum peak velocity in the flow direction was 65.2 ± 14.0 cm/s at peak-systole and 33.6 ± 4.3 cm/s at end-diastole. A commercial BK Medical ProFocus UltraView scanner using a spectral estimator gave 79.3 cm/s and 14.6 cm/s for the same volunteer. This demonstrates that real-time 3-D...

  13. 3D exploitation of large urban photo archives

    Science.gov (United States)

    Cho, Peter; Snavely, Noah; Anderson, Ross

    2010-04-01

    Recent work in computer vision has demonstrated the potential to automatically recover camera and scene geometry from large collections of uncooperatively-collected photos. At the same time, aerial ladar and Geographic Information System (GIS) data are becoming more readily accessible. In this paper, we present a system for fusing these data sources in order to transfer 3D and GIS information into outdoor urban imagery. Applying this system to 1000+ pictures shot of the lower Manhattan skyline and the Statue of Liberty, we present two proof-of-concept examples of geometry-based photo enhancement which are difficult to perform via conventional image processing: feature annotation and image-based querying. In these examples, high-level knowledge projects from 3D world-space into georegistered 2D image planes and/or propagates between different photos. Such automatic capabilities lay the groundwork for future real-time labeling of imagery shot in complex city environments by mobile smart phones.
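
    The projection "from 3D world-space into georegistered 2D image planes" that enables this kind of annotation transfer is the standard pinhole camera model. A minimal sketch follows, with an invented intrinsic matrix and pose rather than the system's actual calibration.

```python
import numpy as np

def project_points(points_world: np.ndarray, K: np.ndarray,
                   R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Project (N, 3) world points into pixel coordinates.

    K: 3x3 intrinsic matrix; R, t: world-to-camera rotation and translation,
    so that X_cam = R @ X_world + t.
    """
    cam = (R @ points_world.T).T + t         # points in the camera frame
    uvw = (K @ cam.T).T                      # homogeneous image coordinates
    return uvw[:, :2] / uvw[:, 2:3]          # perspective division -> pixels

# Hypothetical camera: 1000 px focal length, 1920x1080 image, looking down +Z.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
landmark = np.array([[1.0, -0.5, 10.0]])     # a georegistered 3D feature
print(project_points(landmark, K, R, t))     # pixel location for the annotation
```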

  14. Gesture Interaction Browser-Based 3D Molecular Viewer.

    Science.gov (United States)

    Virag, Ioan; Stoicu-Tivadar, Lăcrămioara; Crişan-Vida, Mihaela

    2016-01-01

    The paper presents an open source system that allows the user to interact with a 3D molecular viewer using associated hand gestures for rotating, scaling and panning the rendered model. The novelty of this approach is that the entire application is browser-based and doesn't require installation of third party plug-ins or additional software components in order to visualize the supported chemical file formats. This kind of solution is suitable for instruction of users in less IT oriented environments, like medicine or chemistry. For rendering various molecular geometries our team used GLmol (a molecular viewer written in JavaScript). The interaction with the 3D models is made with Leap Motion controller that allows real-time tracking of the user's hand gestures. The first results confirmed that the resulting application leads to a better way of understanding various types of translational bioinformatics related problems in both biomedical research and education.

  15. Lagrangian 3D tracking of fluorescent microscopic objects in motion

    OpenAIRE

    Darnige, T.; Figueroa-Morales, N.; Bohec, P.; Lindner, A.; Clément, E.

    2016-01-01

    We describe the development of a tracking device, mounted on an epi-fluorescent inverted microscope, suited to obtaining time-resolved 3D Lagrangian tracks of fluorescent passive or active micro-objects in micro-fluidic devices. The system is based on real-time image processing, determining the displacement of an x,y mechanical stage to keep the chosen object at a fixed position in the observation frame. The z displacement is based on the refocusing of the fluorescent object determining the displ...

  16. 3-D computer graphics based on integral photography.

    Science.gov (United States)

    Naemura, T; Yoshida, T; Harashima, H

    2001-02-12

    Integral photography (IP), which is one of the ideal 3-D photographic technologies, can be regarded as a method of capturing and displaying light rays passing through a plane. The NHK Science and Technical Research Laboratories have developed a real-time IP system using an HDTV camera and an optical fiber array. In this paper, the authors propose a method of synthesizing arbitrary views from IP images captured by the HDTV camera. This is a kind of image-based rendering system, founded on the 4-D data-space representation of light rays. Experimental results show the potential to improve the quality of images rendered by computer graphics techniques.

  17. A Lightweight Surface Reconstruction Method for Online 3D Scanning Point Cloud Data Oriented toward 3D Printing

    Directory of Open Access Journals (Sweden)

    Buyun Sheng

    2018-01-01

    The existing surface reconstruction algorithms currently reconstruct large amounts of mesh data. Consequently, many of these algorithms cannot meet the efficiency requirements of real-time data transmission in a web environment. This paper proposes a lightweight surface reconstruction method for online 3D scanned point cloud data oriented toward 3D printing. The proposed online lightweight surface reconstruction algorithm is composed of a point cloud update algorithm (PCU), a rapid iterative closest point algorithm (RICP), and an improved Poisson surface reconstruction algorithm (IPSR). The generated lightweight point cloud data are pretreated using an updating and rapid registration method. The Poisson surface reconstruction is also accomplished by a pretreatment to recompute the point cloud normal vectors; this approach is based on a least squares method, and the postprocessing of the PDE patch generation was based on biharmonic-like fourth-order PDEs, which effectively reduces the amount of reconstructed mesh data and improves the efficiency of the algorithm. This method was verified using an online personalized customization system that was developed with WebGL and oriented toward 3D printing. The experimental results indicate that this method can generate a lightweight 3D scanning mesh rapidly and efficiently in a web environment.
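
    Of the three stages, the rapid iterative closest point (RICP) registration is the easiest to illustrate in isolation. Below is a textbook point-to-point ICP iteration using SVD-based rigid alignment; it assumes roughly pre-aligned clouds and is not the authors' accelerated variant.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # guard against reflections
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

def icp(source: np.ndarray, target: np.ndarray, iterations: int = 20) -> np.ndarray:
    """Simple point-to-point ICP: returns the source cloud aligned to the target."""
    tree = cKDTree(target)
    current = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(current, k=1)    # closest target point for each source point
        R, t = best_rigid_transform(current, target[idx])
        current = current @ R.T + t
    return current

# Hypothetical usage: register a slightly rotated, shifted copy back onto the original.
rng = np.random.default_rng(3)
target = rng.uniform(size=(2000, 3))
angle = np.deg2rad(5.0)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0],
               [np.sin(angle), np.cos(angle), 0],
               [0, 0, 1]])
source = target @ Rz.T + np.array([0.01, -0.02, 0.03])
aligned = icp(source, target)
print("mean residual:", np.linalg.norm(aligned - target, axis=1).mean())
```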

  18. Designing stereoscopic information visualization for 3D-TV: What can we learn from S3D gaming?

    Science.gov (United States)

    Schild, Jonas; Masuch, Maic

    2012-03-01

    This paper explores graphical design and spatial alignment of visual information and graphical elements into stereoscopically filmed content, e.g. captions, subtitles, and especially more complex elements in 3D-TV productions. The method used is a descriptive analysis of existing computer- and video games that have been adapted for stereoscopic display using semi-automatic rendering techniques (e.g. Nvidia 3D Vision) or games which have been specifically designed for stereoscopic vision. Digital games often feature compelling visual interfaces that combine high usability with creative visual design. We explore selected examples of game interfaces in stereoscopic vision regarding their stereoscopic characteristics, how they draw attention, how we judge effect and comfort and where the interfaces fail. As a result, we propose a list of five aspects which should be considered when designing stereoscopic visual information: explicit information, implicit information, spatial reference, drawing attention, and vertical alignment. We discuss possible consequences, opportunities and challenges for integrating visual information elements into 3D-TV content. This work shall further help to improve current editing systems and identifies a need for future editing systems for 3DTV, e.g., live editing and real-time alignment of visual information into 3D footage.

  19. A novel single-step procedure for the calibration of the mounting parameters of a multi-camera terrestrial mobile mapping system

    Science.gov (United States)

    Habib, A.; Kersting, P.; Bang, K.; Rau, J.

    2011-12-01

    Mobile Mapping Systems (MMS) can be defined as moving platforms which integrate a set of imaging sensors and a position and orientation system (POS) for the collection of geo-spatial information. In order to fully explore the potential accuracy of such systems and guarantee accurate multi-sensor integration, a careful system calibration must be carried out. System calibration involves individual sensor calibration as well as the estimation of the inter-sensor geometric relationship. This paper tackles a specific component of the system calibration process of a multi-camera MMS - the estimation of the relative orientation parameters among the cameras, i.e., the inter-camera geometric relationship (lever-arm offsets and boresight angles among the cameras). For that purpose, a novel single-step procedure, which is easy to implement and not computationally intensive, will be introduced. The proposed method is implemented in such a way that it can also be used for the estimation of the mounting parameters between the cameras and the IMU body frame, in the case of directly georeferenced systems. The performance of the proposed method is evaluated through experimental results using simulated data. A comparative analysis between the proposed single-step procedure and the two-step procedure, which makes use of the traditional bundle adjustment, is also presented.
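
    To make the estimated quantities concrete: for two rigidly mounted cameras with known absolute poses, the lever-arm offset and boresight rotation between them follow from composing the two poses. The sketch below shows that relationship (i.e., what the calibration estimates), not the proposed estimation procedure itself; the example poses are invented.

```python
import numpy as np

def relative_orientation(R_a: np.ndarray, t_a: np.ndarray,
                         R_b: np.ndarray, t_b: np.ndarray):
    """Boresight rotation and lever-arm offset of camera B with respect to camera A.

    Each pose is given as world-to-camera (R, t), i.e. X_cam = R @ X_world + t.
    Returns (R_ab, lever_arm) such that a world direction v maps as v_b = R_ab @ v_a,
    and lever_arm is B's projection centre expressed in A's camera frame.
    """
    C_a = -R_a.T @ t_a                # projection centres in world coordinates
    C_b = -R_b.T @ t_b
    R_ab = R_b @ R_a.T                # boresight rotation from frame A to frame B
    lever_arm = R_a @ (C_b - C_a)     # offset of B's centre in A's camera frame
    return R_ab, lever_arm

# Invented example: camera B sits 0.5 m to the right of A and is yawed by 10 degrees.
yaw = np.deg2rad(10.0)
R_a, C_a = np.eye(3), np.array([0.0, 0.0, 0.0])
R_b = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                [0, 1, 0],
                [-np.sin(yaw), 0, np.cos(yaw)]])
C_b = np.array([0.5, 0.0, 0.0])
t_a, t_b = -R_a @ C_a, -R_b @ C_b
R_ab, lever = relative_orientation(R_a, t_a, R_b, t_b)
print(np.round(lever, 3))             # -> [0.5, 0.0, 0.0]
```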

  20. Towards sustainable and clean 3D Geoinformation

    NARCIS (Netherlands)

    Stoter, J.E.; Ledoux, H.; Zlatanova, S.; Biljecki, F.; Kolbe, T.H.; Bill, R.; Donaubauer, A.

    2016-01-01

    This paper summarises the on going research activities of the 3D Geoinformation Group at the Delft University of Technology. The main challenge underpinning the research of this group is providing clean and appropriate 3D data about our environment in order to serve a wide variety of applications.

  1. Pattern recognition: invariants in 3D

    International Nuclear Information System (INIS)

    Proriol, J.

    1992-01-01

    In e⁺e⁻ events, the jets have a spherical 3D symmetry. A set of invariants is defined for 3D objects with a spherical symmetry. These new invariants are used to tag the number of jets in e⁺e⁻ events. (K.A.) 3 refs

  2. 3D Printing: What Are the Hazards?

    Science.gov (United States)

    Randolph, Susan A

    2018-03-01

    As the popularity of three-dimensional (3D) printers increases, more research will be conducted to evaluate the benefits and risks of this technology. Occupational health professionals should stay abreast of new recommendations to protect workers from exposure to 3D printer emissions.

  3. Illustrating the disassembly of 3D models

    KAUST Repository

    Guo, Jianwei; Yan, Dongming; Li, Er; Dong, Weiming; Wonka, Peter; Zhang, Xiaopeng

    2013-01-01

    We present a framework for the automatic disassembly of 3D man-made models and the illustration of the disassembly process. Given an assembled 3D model, we first analyze the individual parts using sharp edge loops and extract the contact faces

  4. 3D, or Not to Be?

    Science.gov (United States)

    Norbury, Keith

    2012-01-01

    It may be too soon for students to be showing up for class with popcorn and gummy bears, but technology similar to that behind the 3D blockbuster movie "Avatar" is slowly finding its way into college classrooms. 3D classroom projectors are taking students on fantastic voyages inside the human body, to the ruins of ancient Greece--even to faraway…

  5. Embedding complex objects with 3d printing

    KAUST Repository

    Hussain, Muhammad Mustafa; Diaz, Cordero Marlon Steven

    2017-01-01

    A CMOS technology-compatible fabrication process for flexible CMOS electronics embedded during additive manufacturing (i.e. 3D printing). A method for such a process may include printing a first portion of a 3D structure; pausing the step

  6. 3D Printing of Molecular Models

    Science.gov (United States)

    Gardner, Adam; Olson, Arthur

    2016-01-01

    Physical molecular models have played a valuable role in our understanding of the invisible nano-scale world. We discuss 3D printing and its use in producing models of the molecules of life. Complex biomolecular models, produced from 3D printed parts, can demonstrate characteristics of molecular structure and function, such as viral self-assembly,…

  7. 3D printing of functional structures

    NARCIS (Netherlands)

    Krijnen, Gijsbertus J.M.

    The technology colloquially known as ‘3D printing’ has developed into such a diversity of printing technologies and application fields that it meanwhile seems anything is possible. However, clearly the ideal 3D printer, with high resolution, multi-material capability, fast printing, etc. is yet to be

  8. 3D Printing. What's the Harm?

    Science.gov (United States)

    Love, Tyler S.; Roy, Ken

    2016-01-01

    Health concerns from 3D printing were first documented by Stephens, Azimi, Orch, and Ramos (2013), who found that commercially available 3D printers were producing hazardous levels of ultrafine particles (UFPs) and volatile organic compounds (VOCs) when plastic materials were melted through the extruder. UFPs are particles less than 100 nanometers…

  9. 3D Printed Block Copolymer Nanostructures

    Science.gov (United States)

    Scalfani, Vincent F.; Turner, C. Heath; Rupar, Paul A.; Jenkins, Alexander H.; Bara, Jason E.

    2015-01-01

    The emergence of 3D printing has dramatically advanced the availability of tangible molecular and extended solid models. Interestingly, there are few nanostructure models available both commercially and through other do-it-yourself approaches such as 3D printing. This is unfortunate given the importance of nanotechnology in science today. In this…

  10. 3D-printed cereal foods

    NARCIS (Netherlands)

    Noort, M.; Bommel, K. van; Renzetti, S.

    2017-01-01

    Additive manufacturing, also known as 3D printing, is an up-and-coming production technology based on layer-by-layer deposition of material to reproduce a computer-generated 3D design. Additive manufacturing is a collective term used for a variety of technologies, such as fused deposition modeling

  11. A Framework for 3d Printing

    DEFF Research Database (Denmark)

    Pilkington, Alan; Frandsen, Thomas; Kapetaniou, Chrystalla

    3D printing technologies and processes offer such a radical range of options for firms that we currently lack a structured way of recording possible impact and recommending actions for managers. The changes arising from 3D printing include more than just new options for product design, but also...

  12. The 3D-city model

    DEFF Research Database (Denmark)

    Holmgren, Steen; Rüdiger, Bjarne; Tournay, Bruno

    2001-01-01

    We have worked with the construction and use of 3D city models for about ten years. This work has given us valuable experience concerning model methodology. In addition to this collection of knowledge, our perception of the concept of city models has changed radically. In order to explain...... of 3D city models....

  13. 3D Programmable Micro Self Assembly

    National Research Council Canada - National Science Library

    Bohringer, Karl F; Parviz, Babak A; Klavins, Eric

    2005-01-01

    .... We have developed a "self assembly tool box" consisting of a range of methods for micro-scale self-assembly in 2D and 3D We have shown physical demonstrations of simple 3D self-assemblies which lead...

  14. Wow! 3D Content Awakens the Classroom

    Science.gov (United States)

    Gordon, Dan

    2010-01-01

    From her first encounter with stereoscopic 3D technology designed for classroom instruction, Megan Timme, principal at Hamilton Park Pacesetter Magnet School in Dallas, sensed it could be transformative. Last spring, when she began pilot-testing 3D content in her third-, fourth- and fifth-grade classrooms, Timme wasn't disappointed. Students…

  15. Digital Dentistry — 3D Printing Applications

    Directory of Open Access Journals (Sweden)

    Zaharia Cristian

    2017-03-01

    Full Text Available Three-dimensional (3D) printing is an additive manufacturing method in which a 3D item is formed by laying down successive layers of material. 3D printers are machines that produce representations of objects either planned with a CAD program or scanned with a 3D scanner. Printing is a method for replicating text and pictures, typically with ink on paper. We can print different dental pieces using different methods such as selective laser sintering (SLS), stereolithography, fused deposition modeling, and laminated object manufacturing. The materials are certified for printing individual impression trays, orthodontic models, gingiva mask, and different prosthetic objects. The material can reach a flexural strength of more than 80 MPa. 3D printing takes the effectiveness of digital projects to the production phase. Dental laboratories are able to produce crowns, bridges, stone models, and various orthodontic appliances by methods that combine oral scanning, 3D printing, and CAD/CAM design. Modern 3D printing has been used for the development of prototypes for several years, and it has begun to find its use in the world of manufacturing. Digital technology and 3D printing have significantly elevated the rate of success in dental implantology using custom surgical guides and improving the quality and accuracy of dental work.

  16. Case study of 3D fingerprints applications.

    Directory of Open Access Journals (Sweden)

    Feng Liu

    Full Text Available Human fingers are 3D objects. More information will be provided if three-dimensional (3D) fingerprints are available compared with two-dimensional (2D) fingerprints. Thus, this paper first collects 3D finger point cloud data by a structured-light illumination method. Additional features from 3D fingerprint images are then studied and extracted. The applications of these features are finally discussed. A series of experiments is conducted to demonstrate the helpfulness of 3D information to fingerprint recognition. Results show that a quick alignment can be easily implemented under the guidance of the 3D finger shape feature, even though this feature does not work for fingerprint recognition directly. The newly defined distinctive 3D shape ridge feature can be used for personal authentication with an Equal Error Rate (EER) of ~8.3%. It is also helpful for removing false core points. Furthermore, a promising EER of ~1.3% is achieved by combining this feature with 2D features for fingerprint recognition, which indicates the prospect of 3D fingerprint recognition.

  17. Immersive 3D Geovisualization in Higher Education

    Science.gov (United States)

    Philips, Andrea; Walz, Ariane; Bergner, Andreas; Graeff, Thomas; Heistermann, Maik; Kienzler, Sarah; Korup, Oliver; Lipp, Torsten; Schwanghart, Wolfgang; Zeilinger, Gerold

    2015-01-01

    In this study, we investigate how immersive 3D geovisualization can be used in higher education. Based on MacEachren and Kraak's geovisualization cube, we examine the usage of immersive 3D geovisualization and its usefulness in a research-based learning module on flood risk, called GEOSimulator. Results of a survey among participating students…

  18. a Quadtree Organization Construction and Scheduling Method for Urban 3d Model Based on Weight

    Science.gov (United States)

    Yao, C.; Peng, G.; Song, Y.; Duan, M.

    2017-09-01

    The increase in urban 3D model precision and data volume places higher demands on the real-time rendering of digital city models. Improving the organization, management and scheduling of 3D model data in a 3D digital city can improve rendering effectiveness and efficiency. Taking the complexity of urban models into account, this paper proposes a weight-based quadtree construction and scheduled-rendering method for urban 3D models. The urban 3D model is divided into different rendering weights according to certain rules, and quadtree construction and render scheduling are performed according to these weights. An algorithm is also proposed for extracting bounding boxes from model drawing primitives to generate LOD models automatically. Using the proposed algorithm, a 3D urban planning and management software package was developed; in practice the algorithm has proved efficient and feasible, with the render frame rate for both large and small scenes stable at around 25 frames per second.
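
    As an informal illustration of weight-based quadtree organization and scheduling (not the paper's actual implementation), the sketch below subdivides a city extent more finely where an assumed rendering weight is high, then schedules leaf tiles so heavier tiles are rendered first; the weight rule, thresholds and extents are invented for the example.

        from dataclasses import dataclass, field
        from typing import List, Tuple

        @dataclass
        class QuadNode:
            bounds: Tuple[float, float, float, float]   # (x_min, y_min, x_max, y_max)
            weight: float                               # rendering weight of the tile
            children: List["QuadNode"] = field(default_factory=list)

        def demo_weight(bounds):
            # Hypothetical rule: tiles further east hold denser building models.
            return (bounds[0] + bounds[2]) / 2.0 / 400.0

        def build_quadtree(bounds, weight_fn, depth=0, max_depth=3, split_at=1.0):
            # Subdivide heavy tiles further so detailed areas get finer scheduling units.
            node = QuadNode(bounds, weight_fn(bounds))
            if depth < max_depth and node.weight > split_at:
                x0, y0, x1, y1 = bounds
                xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
                for b in ((x0, y0, xm, ym), (xm, y0, x1, ym),
                          (x0, ym, xm, y1), (xm, ym, x1, y1)):
                    node.children.append(build_quadtree(b, weight_fn, depth + 1, max_depth, split_at))
            return node

        def collect_leaves(node, out):
            if not node.children:
                out.append(node)
            for child in node.children:
                collect_leaves(child, out)
            return out

        root = build_quadtree((0.0, 0.0, 1000.0, 1000.0), demo_weight)
        # Schedule rendering: high-weight tiles are drawn first in each frame.
        render_queue = sorted(collect_leaves(root, []), key=lambda n: n.weight, reverse=True)
        print([round(n.weight, 2) for n in render_queue[:5]])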

  19. A QUADTREE ORGANIZATION CONSTRUCTION AND SCHEDULING METHOD FOR URBAN 3D MODEL BASED ON WEIGHT

    Directory of Open Access Journals (Sweden)

    C. Yao

    2017-09-01

    Full Text Available The increase in urban 3D model precision and data volume places higher demands on the real-time rendering of digital city models. Improving the organization, management and scheduling of 3D model data in a 3D digital city can improve rendering effectiveness and efficiency. Taking the complexity of urban models into account, this paper proposes a weight-based quadtree construction and scheduled-rendering method for urban 3D models. The urban 3D model is divided into different rendering weights according to certain rules, and quadtree construction and render scheduling are performed according to these weights. An algorithm is also proposed for extracting bounding boxes from model drawing primitives to generate LOD models automatically. Using the proposed algorithm, a 3D urban planning and management software package was developed; in practice the algorithm has proved efficient and feasible, with the render frame rate for both large and small scenes stable at around 25 frames per second.

  20. Serum induced degradation of 3D DNA box origami observed by high speed atomic force microscope

    DEFF Research Database (Denmark)

    Jiang, Zaixing; Zhang, Shuai; Yang, Chuanxu

    2015-01-01

    3D DNA origami holds tremendous potential to encapsulate and selectively release therapeutic drugs. Observations of the real-time performance of 3D DNA origami structures in a physiological environment will contribute much to their further application. Here, we investigate the degradation kinetics of 3D...... DNA box origami in serum using a high-speed atomic force microscope optimized for imaging 3D DNA origami in real time. The time resolution allows characterizing the stages of serum effects on individual 3D DNA box origami with nanometer resolution. Our results indicate that the whole digestion process...... is a combination of a rapid collapse phase and a slow degradation phase. The damage to the box origami mainly occurs in the collapse phase. Thus, the structural stability of 3D DNA box origami should be further improved, especially in the collapse phase, before clinical applications...

  1. 3D Human cartilage surface characterization by optical coherence tomography

    International Nuclear Information System (INIS)

    Brill, Nicolai; Riedel, Jörn; Schmitt, Robert; Tingart, Markus; Jahr, Holger; Nebelung, Sven; Truhn, Daniel; Pufe, Thomas

    2015-01-01

    Early diagnosis and treatment of cartilage degeneration is of high clinical interest. Loss of surface integrity is considered one of the earliest and most reliable signs of degeneration, but cannot currently be evaluated objectively. Optical Coherence Tomography (OCT) is an arthroscopically available light-based non-destructive real-time imaging technology that allows imaging at micrometre resolutions to millimetre depths. As OCT-based surface evaluation standards remain to be defined, the present study investigated the diagnostic potential of 3D surface profile parameters in the comprehensive evaluation of cartilage degeneration. To this end, 45 cartilage samples of different degenerative grades were obtained from total knee replacements (2 males, 10 females; mean age 63.8 years), cut to standard size and imaged using a spectral-domain OCT device (Thorlabs, Germany). 3D OCT datasets of 8 × 8, 4 × 4 and 1 × 1 mm (width × length) were obtained and pre-processed (image adjustments, morphological filtering). Subsequent automated surface identification algorithms were used to obtain the 3D primary profiles, which were then filtered and processed using established algorithms employing ISO standards. The 3D surface profile thus obtained was used to calculate a set of 21 3D surface profile parameters, i.e. height (e.g. Sa), functional (e.g. Sk), hybrid (e.g. Sdq) and segmentation-related parameters (e.g. Spd). Samples underwent reference histological assessment according to the Degenerative Joint Disease classification. Statistical analyses included calculation of Spearman’s rho and assessment of inter-group differences using the Kruskal-Wallis test. Overall, the majority of 3D surface profile parameters revealed significant degeneration-dependent differences and correlations with the exception of severe end-stage degeneration and were of distinct diagnostic value in the assessment of surface integrity. None of the 3D
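
    As a rough worked example of the kind of 3D height and hybrid parameters mentioned above (Sa, Sq, Sdq), the sketch below computes them from a levelled height map using their standard ISO 25178-style definitions; the synthetic surface, the assumed pixel spacing and the absence of proper form removal and filtering are simplifications, not the study's processing chain.

        import numpy as np

        # Hypothetical primary profile: a 2D height map z(x, y) in micrometres after
        # surface identification and filtering (synthetic values for illustration).
        rng = np.random.default_rng(0)
        z = rng.normal(0.0, 2.0, size=(256, 256))
        dx = dy = 15.6                                      # assumed lateral spacing in micrometres

        zc = z - z.mean()                                   # remove the mean plane (levelling)
        Sa = np.mean(np.abs(zc))                            # arithmetic mean height
        Sq = np.sqrt(np.mean(zc ** 2))                      # root-mean-square height
        gz_y, gz_x = np.gradient(zc, dy, dx)
        Sdq = np.sqrt(np.mean(gz_x ** 2 + gz_y ** 2))       # root-mean-square surface gradient
        print(f"Sa = {Sa:.2f} um, Sq = {Sq:.2f} um, Sdq = {Sdq:.4f}")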

  2. Inclined nanoimprinting lithography for 3D nanopatterning

    International Nuclear Information System (INIS)

    Liu Zhan; Bucknall, David G; Allen, Mark G

    2011-01-01

    We report a non-conventional shear-force-driven nanofabrication approach, inclined nanoimprint lithography (INIL), for producing 3D nanostructures of varying heights on planar substrates in a single imprinting step. Such 3D nanostructures are fabricated by exploiting polymer anisotropic dewetting where the degree of anisotropy can be controlled by the magnitude of the inclination angle. The feature size is reduced from the micron scale of the template to a resultant nanoscale pattern. The underlying INIL mechanism is investigated both experimentally and theoretically. The results indicate that the shear force generated at a non-zero inclination angle induced by the INIL apparatus essentially leads to asymmetry in the polymer flow direction, ultimately resulting in 3D nanopatterns with different heights. INIL removes the requirements in conventional nanolithography of either utilizing 3D templates or using multiple lithographic steps. This technique enables various 3D nanoscale devices including angle-resolved photonic and plasmonic crystals to be fabricated.

  3. Density-Based 3D Shape Descriptors

    Directory of Open Access Journals (Sweden)

    Schmitt Francis

    2007-01-01

    Full Text Available We propose a novel probabilistic framework for the extraction of density-based 3D shape descriptors using kernel density estimation. Our descriptors are derived from the probability density functions (pdf) of local surface features characterizing the 3D object geometry. Assuming that the shape of the 3D object is represented as a mesh consisting of triangles with arbitrary size and shape, we provide efficient means to approximate the moments of geometric features on a triangle basis. Our framework produces a number of 3D shape descriptors that prove to be quite discriminative in retrieval applications. We test our descriptors and compare them with several other histogram-based methods on two 3D model databases, Princeton Shape Benchmark and Sculpteur, which are fundamentally different in semantic content and mesh quality. Experimental results show that our methodology not only improves the performance of existing descriptors, but also provides a rigorous framework to advance and to test new ones.
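
    As a loose illustration of the general idea (a kernel density estimate of a local surface feature, area-weighted over the triangles of a mesh), the sketch below builds a simple radial-distance descriptor; the chosen feature, bandwidth, grid and normalisation are assumptions and do not reproduce the paper's framework.

        import numpy as np

        def radial_distance_descriptor(vertices, faces, bins=64, bandwidth=0.05):
            # Density-based descriptor sketch: kernel density estimate of the radial distance
            # of triangle centroids from the object's centre, weighted by triangle area.
            v = np.asarray(vertices, dtype=float)
            f = np.asarray(faces, dtype=int)
            tri = v[f]                                   # (n_faces, 3, 3)
            centroids = tri.mean(axis=1)
            # Triangle areas from the cross product, so large and small triangles
            # contribute in proportion to the surface they cover.
            areas = 0.5 * np.linalg.norm(
                np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0]), axis=1)
            centre = (centroids * areas[:, None]).sum(axis=0) / areas.sum()
            r = np.linalg.norm(centroids - centre, axis=1)
            r = r / r.max()                              # scale-normalise the feature
            # Gaussian kernel density estimate of the feature, sampled on a fixed grid.
            grid = np.linspace(0.0, 1.0, bins)
            diff = (grid[None, :] - r[:, None]) / bandwidth
            pdf = (areas[:, None] * np.exp(-0.5 * diff ** 2)).sum(axis=0)
            return pdf / (pdf.sum() + 1e-12)             # normalised descriptor vector

        # Toy usage on a unit tetrahedron (hypothetical input mesh).
        verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
        faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
        print(radial_distance_descriptor(verts, faces)[:8])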

  4. 3D-grafiikka ja pelimoottorit

    OpenAIRE

    Sillanpää, Otto

    2014-01-01

    This thesis examines how 3D models can be brought into a form in which they can be used in different game engines. The aim of the study is to determine how 3D models are created for game engines, and how 3D modelling programs and game engines differ from one another when handling 3D models. The game engines used in this work were Valve's Source and Epic Games' Unreal Engine 3; the 3D modelling programs used were Autodesk's 3ds Max 2014 and the Blender Foundation's Blender 2.7...

  5. BEAMS3D Neutral Beam Injection Model

    Energy Technology Data Exchange (ETDEWEB)

    Lazerson, Samuel

    2014-04-14

    With the advent of applied 3D fields in Tokamaks and modern high performance stellarators, a need has arisen to address non-axisymmetric effects on neutral beam heating and fueling. We report on the development of a fully 3D neutral beam injection (NBI) model, BEAMS3D, which addresses this need by coupling 3D equilibria to a guiding center code capable of modeling neutral and charged particle trajectories across the separatrix and into the plasma core. Ionization, neutralization, charge-exchange, viscous velocity reduction, and pitch angle scattering are modeled with the ADAS atomic physics database [1]. Benchmark calculations are presented to validate the collisionless particle orbits, neutral beam injection model, frictional drag, and pitch angle scattering effects. A calculation of neutral beam heating in the NCSX device is performed, highlighting the capability of the code to handle 3D magnetic fields.

  6. Fabrication of 3D Silicon Sensors

    Energy Technology Data Exchange (ETDEWEB)

    Kok, A.; Hansen, T.E.; Hansen, T.A.; Lietaer, N.; Summanwar, A.; /SINTEF, Oslo; Kenney, C.; Hasi, J.; /SLAC; Da Via, C.; /Manchester U.; Parker, S.I.; /Hawaii U.

    2012-06-06

    Silicon sensors with a three-dimensional (3-D) architecture, in which the n and p electrodes penetrate through the entire substrate, have many advantages over planar silicon sensors including radiation hardness, fast time response, active edge and dual readout capabilities. The fabrication of 3D sensors is however rather complex. In recent years, there have been worldwide activities on 3D fabrication. SINTEF in collaboration with Stanford Nanofabrication Facility have successfully fabricated the original (single sided double column type) 3D detectors in two prototype runs and the third run is now on-going. This paper reports the status of this fabrication work and the resulting yield. The work of other groups such as the development of double sided 3D detectors is also briefly reported.

  7. Maintaining and troubleshooting your 3D printer

    CERN Document Server

    Bell, Charles

    2014-01-01

    Maintaining and Troubleshooting Your 3D Printer by Charles Bell is your guide to keeping your 3D printer running through preventive maintenance, repair, and diagnosing and solving problems in 3D printing. If you've bought or built a 3D printer such as a MakerBot only to be confounded by jagged edges, corner lift, top layers that aren't solid, or any of a myriad of other problems that plague 3D printer enthusiasts, then here is the book to help you get past all that and recapture the joy of creative fabrication. The book also includes valuable tips for builders and those who want to modify the

  8. The psychology of the 3D experience

    Science.gov (United States)

    Janicke, Sophie H.; Ellis, Andrew

    2013-03-01

    With 3D televisions expected to reach 50% home saturation as early as 2016, understanding the psychological mechanisms underlying the user response to 3D technology is critical for content providers, educators and academics. Unfortunately, research examining the effects of 3D technology has not kept pace with the technology's rapid adoption, resulting in large-scale use of a technology about which very little is actually known. Recognizing this need for new research, we conducted a series of studies measuring and comparing many of the variables and processes underlying both 2D and 3D media experiences. In our first study, we found narratives within primetime dramas had the power to shift viewer attitudes in both 2D and 3D settings. However, we found no difference in persuasive power between 2D and 3D content. We contend this lack of effect was the result of poor conversion quality and the unique demands of 3D production. In our second study, we found 3D technology significantly increased enjoyment when viewing sports content, yet offered no added enjoyment when viewing a movie trailer. The enhanced enjoyment of the sports content was shown to be the result of heightened emotional arousal and attention in the 3D condition. We believe the lack of effect found for the movie trailer may be genre-related. In our final study, we found 3D technology significantly enhanced enjoyment of two video games from different genres. The added enjoyment was found to be the result of an increased sense of presence.

  9. 3D Visualization Development of SIUE Campus

    Science.gov (United States)

    Nellutla, Shravya

    Geographic Information Systems (GIS) has progressed from the traditional map-making to the modern technology where the information can be created, edited, managed and analyzed. Like any other models, maps are simplified representations of real world. Hence visualization plays an essential role in the applications of GIS. The use of sophisticated visualization tools and methods, especially three dimensional (3D) modeling, has been rising considerably due to the advancement of technology. There are currently many off-the-shelf technologies available in the market to build 3D GIS models. One of the objectives of this research was to examine the available ArcGIS and its extensions for 3D modeling and visualization and use them to depict a real world scenario. Furthermore, with the advent of the web, a platform for accessing and sharing spatial information on the Internet, it is possible to generate interactive online maps. Integrating Internet capacity with GIS functionality redefines the process of sharing and processing the spatial information. Enabling a 3D map online requires off-the-shelf GIS software, 3D model builders, web server, web applications and client server technologies. Such environments are either complicated or expensive because of the amount of hardware and software involved. Therefore, the second objective of this research was to investigate and develop simpler yet cost-effective 3D modeling approach that uses available ArcGIS suite products and the free 3D computer graphics software for designing 3D world scenes. Both ArcGIS Explorer and ArcGIS Online will be used to demonstrate the way of sharing and distributing 3D geographic information on the Internet. A case study of the development of 3D campus for the Southern Illinois University Edwardsville is demonstrated.

  10. Pathways for Learning from 3D Technology

    Science.gov (United States)

    Carrier, L. Mark; Rab, Saira S.; Rosen, Larry D.; Vasquez, Ludivina; Cheever, Nancy A.

    2016-01-01

    The purpose of this study was to find out if 3D stereoscopic presentation of information in a movie format changes a viewer's experience of the movie content. Four possible pathways from 3D presentation to memory and learning were considered: a direct connection based on cognitive neuroscience research; a connection through "immersion" in that 3D presentations could provide additional sensorial cues (e.g., depth cues) that lead to a higher sense of being surrounded by the stimulus; a connection through general interest such that 3D presentation increases a viewer’s interest that leads to greater attention paid to the stimulus (e.g., "involvement"); and a connection through discomfort, with the 3D goggles causing discomfort that interferes with involvement and thus with memory. The memories of 396 participants who viewed two-dimensional (2D) or 3D movies at movie theaters in Southern California were tested. Within three days of viewing a movie, participants filled out an online anonymous questionnaire that queried them about their movie content memories, subjective movie-going experiences (including emotional reactions and "presence") and demographic backgrounds. The responses to the questionnaire were subjected to path analyses in which several different links between 3D presentation to memory (and other variables) were explored. The results showed there were no effects of 3D presentation, either directly or indirectly, upon memory. However, the largest effects of 3D presentation were on emotions and immersion, with 3D presentation leading to reduced positive emotions, increased negative emotions and lowered immersion, compared to 2D presentations. PMID:28078331

  11. 2D to 3D conversion implemented in different hardware

    Science.gov (United States)

    Ramos-Diaz, Eduardo; Gonzalez-Huitron, Victor; Ponomaryov, Volodymyr I.; Hernandez-Fragoso, Araceli

    2015-02-01

    Conversion of available 2D data for release as 3D content is a hot topic for providers and, in general, for the success of 3D applications. It relies entirely on virtual view synthesis of a second view given the original 2D video. Disparity map (DM) estimation is a central task in 3D generation, but rendering novel images precisely remains a very difficult problem. Different approaches to DM reconstruction exist, among them manual and semi-automatic methods that can produce high-quality DMs but are time consuming and computationally expensive. In this paper, several hardware implementations of a designed framework for automatic 3D color video generation based on real 2D video sequences are proposed. The novel framework includes simultaneous processing of stereo pairs using the following blocks: CIE L*a*b* color space conversion, stereo matching via a pyramidal scheme, color segmentation by k-means on the a*b* color plane with adaptive post-filtering, DM estimation using stereo matching between left and right images (or neighboring frames in a video) followed by adaptive post-filtering, and finally the anaglyph 3D scene generation. The technique has been implemented on a DSP TMS320DM648, in Matlab's Simulink module on a PC running Windows 7, and on a graphics card (NVIDIA Quadro K2000), demonstrating that the proposed approach can be applied in real-time processing mode. The processing times, mean Structural Similarity Index Measure (SSIM) and Bad Matching Pixels (B) values for the different hardware implementations (GPU, single CPU, and DSP) are reported in this paper.
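
    As a minimal sketch of two of the blocks named above (disparity estimation by stereo matching and anaglyph generation), the following OpenCV snippet processes a synthetic rectified stereo pair; the block-matching parameters are arbitrary, the median filter merely stands in for the adaptive post-filtering, and the pyramidal matching and k-means colour-segmentation stages are omitted.

        import cv2
        import numpy as np

        # Synthetic rectified stereo pair (stand-ins for two neighbouring video frames):
        # a bright square shifted horizontally between the two views.
        h, w = 240, 320
        left = np.full((h, w, 3), 40, dtype=np.uint8)
        right = left.copy()
        left[80:160, 140:200] = 200
        right[80:160, 128:188] = 200             # 12-pixel disparity for the square

        left_gray = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
        right_gray = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

        # Block-matching disparity map; OpenCV returns fixed-point values scaled by 16.
        stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0

        # Simple smoothing stand-in for the adaptive post-filtering stage.
        disparity = cv2.medianBlur(disparity, 5)

        # Red-cyan anaglyph: red channel from the left view, green/blue from the right.
        anaglyph = right.copy()
        anaglyph[:, :, 2] = left[:, :, 2]        # OpenCV images are BGR; index 2 is red
        cv2.imwrite("anaglyph_demo.png", anaglyph)
        print("disparity range:", float(disparity.min()), float(disparity.max()))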

  12. Medical 3D Printing for the Radiologist

    Science.gov (United States)

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A.; Cai, Tianrun; Kumamaru, Kanako K.; George, Elizabeth; Wake, Nicole; Caterson, Edward J.; Pomahac, Bohdan; Ho, Vincent B.; Grant, Gerald T.

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article. ©RSNA, 2015 PMID:26562233

  13. 3D bioprinting of tissues and organs.

    Science.gov (United States)

    Murphy, Sean V; Atala, Anthony

    2014-08-01

    Additive manufacturing, otherwise known as three-dimensional (3D) printing, is driving major innovations in many areas, such as engineering, manufacturing, art, education and medicine. Recent advances have enabled 3D printing of biocompatible materials, cells and supporting components into complex 3D functional living tissues. 3D bioprinting is being applied to regenerative medicine to address the need for tissues and organs suitable for transplantation. Compared with non-biological printing, 3D bioprinting involves additional complexities, such as the choice of materials, cell types, growth and differentiation factors, and technical challenges related to the sensitivities of living cells and the construction of tissues. Addressing these complexities requires the integration of technologies from the fields of engineering, biomaterials science, cell biology, physics and medicine. 3D bioprinting has already been used for the generation and transplantation of several tissues, including multilayered skin, bone, vascular grafts, tracheal splints, heart tissue and cartilaginous structures. Other applications include developing high-throughput 3D-bioprinted tissue models for research, drug discovery and toxicology.

  14. Medical 3D Printing for the Radiologist.

    Science.gov (United States)

    Mitsouras, Dimitris; Liacouras, Peter; Imanzadeh, Amir; Giannopoulos, Andreas A; Cai, Tianrun; Kumamaru, Kanako K; George, Elizabeth; Wake, Nicole; Caterson, Edward J; Pomahac, Bohdan; Ho, Vincent B; Grant, Gerald T; Rybicki, Frank J

    2015-01-01

    While use of advanced visualization in radiology is instrumental in diagnosis and communication with referring clinicians, there is an unmet need to render Digital Imaging and Communications in Medicine (DICOM) images as three-dimensional (3D) printed models capable of providing both tactile feedback and tangible depth information about anatomic and pathologic states. Three-dimensional printed models, already entrenched in the nonmedical sciences, are rapidly being embraced in medicine as well as in the lay community. Incorporating 3D printing from images generated and interpreted by radiologists presents particular challenges, including training, materials and equipment, and guidelines. The overall costs of a 3D printing laboratory must be balanced by the clinical benefits. It is expected that the number of 3D-printed models generated from DICOM images for planning interventions and fabricating implants will grow exponentially. Radiologists should at a minimum be familiar with 3D printing as it relates to their field, including types of 3D printing technologies and materials used to create 3D-printed anatomic models, published applications of models to date, and clinical benefits in radiology. Online supplemental material is available for this article. (©)RSNA, 2015.

  15. Extra Dimensions: 3D in PDF Documentation

    International Nuclear Information System (INIS)

    Graf, Norman A

    2012-01-01

    Experimental science is replete with multi-dimensional information which is often poorly represented by the two dimensions of presentation slides and print media. Past efforts to disseminate such information to a wider audience have failed for a number of reasons, including a lack of standards which are easy to implement and have broad support. Adobe's Portable Document Format (PDF) has in recent years become the de facto standard for secure, dependable electronic information exchange. It has done so by creating an open format, providing support for multiple platforms and being reliable and extensible. By providing support for the ECMA standard Universal 3D (U3D) and the ISO PRC file format in its free Adobe Reader software, Adobe has made it easy to distribute and interact with 3D content. Until recently, Adobe's Acrobat software was also capable of incorporating 3D content into PDF files from a variety of 3D file formats, including proprietary CAD formats. However, this functionality is no longer available in Acrobat X, having been spun off to a separate company. Incorporating 3D content now requires the additional purchase of a separate plug-in. In this talk we present alternatives based on open source libraries which allow the programmatic creation of 3D content in PDF format. While not providing the same level of access to CAD files as the commercial software, it does provide physicists with an alternative path to incorporate 3D content into PDF files from such disparate applications as detector geometries from Geant4, 3D data sets, mathematical surfaces or tessellated volumes.

  16. Advanced 3D Printers for Cellular Solids

    Science.gov (United States)

    2016-06-30

    Final Report: Advanced 3D Printers for Cellular Solids (DURIP grant W911NF; reporting period 1-Aug-2014 to 31-Dec-2015; sponsor/monitor acronym: ARO). Keywords: 3D printing, cellular solids.

  17. Pharmacophore definition and 3D searches.

    Science.gov (United States)

    Langer, T; Wolber, G

    2004-12-01

    The most common pharmacophore building concepts based on either 3D structure of the target or ligand information are discussed together with the application of such models as queries for 3D database search. An overview of the key techniques available on the market is given and differences with respect to algorithms used and performance obtained are highlighted. Pharmacophore modelling and 3D database search are shown to be successful tools for enriching screening experiments aimed at the discovery of novel bio-active compounds. © 2004 Elsevier Ltd. All rights reserved.

  18. 3D radiative transfer in stellar atmospheres

    International Nuclear Information System (INIS)

    Carlsson, M

    2008-01-01

    Three-dimensional (3D) radiative transfer in stellar atmospheres is reviewed with special emphasis on the atmospheres of cool stars and applications. A short review of methods in 3D radiative transfer shows that mature methods exist, both for taking into account radiation as an energy transport mechanism in 3D (magneto-) hydrodynamical simulations of stellar atmospheres and for the diagnostic problem of calculating the emergent spectrum in more detail from such models, both assuming local thermodynamic equilibrium (LTE) and in non-LTE. Such methods have been implemented in several codes, and examples of applications are given.

  19. Nonperturbative summation over 3D discrete topologies

    International Nuclear Information System (INIS)

    Freidel, Laurent; Louapre, David

    2003-01-01

    The group field theories realizing the sum over all triangulations of all topologies of 3D discrete gravity amplitudes are known to be nonuniquely Borel summable. We modify these models to construct a new group field theory which is proved to be uniquely Borel summable, defining in an unambiguous way a nonperturbative sum over topologies in the context of 3D dynamical triangulations and spin foam models. Moreover, we give some arguments to support the fact that, despite our modification, this new model is similar to the original one, and therefore could be taken as a definition of the sum over topologies of 3D quantum gravity amplitudes

  20. 3D background aerodynamics using CFD

    Energy Technology Data Exchange (ETDEWEB)

    Soerensen, N.N.

    2002-11-01

    3D rotor computations for the Greek Geovilogiki (GEO) 44 meter rotor equipped with 19 meter blades are performed. The lift and drag polars are extracted at five spanwise locations r/R = (.37, .55, .71, .82, .93) based on identification of stagnation points between 2D and 3D computations. The innermost sections show clear evidence of 3D radial pumping, with increased lift compared to 2D values. In contrast to earlier investigated airfoils, a very limited impact on the drag values is observed. (au)

  1. 3D Printing the ATLAS' barrel toroid

    CERN Document Server

    Goncalves, Tiago Barreiro

    2016-01-01

    The present report summarizes my work as part of the Summer Student Programme 2016 in the CERN IR-ECO-TSP department (International Relations – Education, Communication & Outreach – Teacher and Student Programmes). Particularly, I worked closely with the S’Cool LAB team on a science education project. This project included the 3D designing, 3D printing, and assembling of a model of the ATLAS’ barrel toroid. A detailed description of the project’s development is presented and a short manual on how to use 3D printing software and hardware is attached.

  2. [3D planning in maxillofacial surgery].

    Science.gov (United States)

    Hoarau, R; Zweifel, D; Lanthemann, E; Zrounba, H; Broome, M

    2014-10-01

    The development of new technologies such as three-dimensional (3D) planning has changed the everyday practice in maxillofacial surgery. Rapid prototyping associated with the 3D planning has also enabled the creation of patient specific surgical tools, such as cutting guides. As with all new technologies, uses, practicalities, cost effectiveness and especially benefits for the patients have to be carefully evaluated. In this paper, several examples of 3D planning that have been used in our institution are presented. The advantages such as the accuracy of the reconstructive surgery and decreased operating time, as well as the difficulties have also been addressed.

  3. Participation and 3D Visualization Tools

    DEFF Research Database (Denmark)

    Mullins, Michael; Jensen, Mikkel Holm; Henriksen, Sune

    2004-01-01

    With a departure point in a workshop held at the VR Media Lab at Aalborg University, this paper deals with aspects of public participation and the use of 3D visualisation tools. The workshop grew from a desire to involve a broad collaboration between the many actors in the city through using new...... perceptions of architectural representation in urban design where 3D visualisation techniques are used. It is the authors’ general finding that, while 3D visualisation media have the potential to increase understanding of virtual space for the lay public, as well as for professionals, the lay public require...

  4. 3D Bio-Printing Review

    Science.gov (United States)

    Du, Xianbin

    2018-01-01

    The ultimate goal of tissue engineering is to replace pathological or necrotic body tissues or organs with artificial tissues or organs, and tissue engineering is a very promising research field. 3D bio-printing is an emerging technology and a branch of tissue engineering. It has made significant progress in the past decade. 3D bio-printing can realize tissue and organ construction in vitro and has wide application in basic research and pharmacy. This paper analyzes and reviews 3D bio-printing from the perspectives of bioinks, printing technologies and applications.

  5. 3D printed magnetic polymer composite transformers

    Science.gov (United States)

    Bollig, Lindsey M.; Hilpisch, Peter J.; Mowry, Greg S.; Nelson-Cheeseman, Brittany B.

    2017-11-01

    The possibility of 3D printing a transformer core using fused deposition modeling methods is explored. With the use of additive manufacturing, ideal transformer core geometries can be achieved in order to produce a more efficient transformer. In this work, different 3D printing settings and toroidal geometries are tested using a custom integrated magnetic circuit capable of measuring the hysteresis loop of a transformer. These different properties are then characterized, and it was determined that the most effective 3D-printed transformer core requires a high fill factor along with a high concentration of magnetic particulate.
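
    As a small worked example related to the hysteresis-loop measurement described above, the sketch below integrates a synthetic B-H loop to obtain the hysteresis loss per cycle, w = ∮ H dB, and scales it by an assumed core volume and excitation frequency; all numbers are illustrative, not measured values from the paper.

        import numpy as np

        # Hypothetical measured hysteresis loop of a printed toroidal core: applied field H (A/m)
        # and flux density B (T) over one full excitation cycle (synthetic values).
        t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
        H = 800.0 * np.sin(t)
        B = 0.25 * np.sin(t - 0.3)               # phase lag creates the enclosed loop area

        # Close the loop so the integral covers exactly one cycle.
        Hc = np.append(H, H[0])
        Bc = np.append(B, B[0])

        # Hysteresis loss per cycle and unit volume is the enclosed B-H area (J/m^3).
        w_loop = abs(np.trapz(Hc, Bc))
        core_volume = 2.1e-6                     # assumed toroid volume in m^3
        frequency = 50.0                         # assumed excitation frequency in Hz
        print(f"loss per cycle: {w_loop:.1f} J/m^3, "
              f"core loss: {w_loop * core_volume * frequency:.3f} W")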

  6. An Improved Version of TOPAZ 3D

    International Nuclear Information System (INIS)

    Krasnykh, Anatoly

    2003-01-01

    An improved version of the TOPAZ 3D gun code is presented as a powerful tool for beam optics simulation. In contrast to the previous version of TOPAZ 3D, the geometry of the device under test is introduced into TOPAZ 3D directly from a CAD program, such as Solid Edge or AutoCAD. In order to have this new feature, an interface was developed, using the GiD software package as a meshing code. The article describes this method with two models to illustrate the results

  7. 3D face modeling, analysis and recognition

    CERN Document Server

    Daoudi, Mohamed; Veltkamp, Remco

    2013-01-01

    3D Face Modeling, Analysis and Recognition presents methodologies for analyzing shapes of facial surfaces, develops computational tools for analyzing 3D face data, and illustrates them using state-of-the-art applications. The methodologies chosen are based on efficient representations, metrics, comparisons, and classifications of features that are especially relevant in the context of 3D measurements of human faces. These frameworks have a long-term utility in face analysis, taking into account the anticipated improvements in data collection, data storage, processing speeds, and applications

  8. 3D background aerodynamics using CFD

    DEFF Research Database (Denmark)

    Sørensen, Niels N.

    2002-01-01

    3D rotor computations for the Greek Geovilogiki (GEO) 44 meter rotor equipped with 19 meter blades are performed. The lift and drag polars are extracted at five spanwise locations r/R = (.37, .55, .71, .82, .93) based on identification of stagnation points between 2D and 3D computations. The innermost sections show clear evidence of 3D radial pumping, with increased lift compared to 2D values. In contrast to earlier investigated airfoils, a very limited impact on the drag values is observed....

  9. FUN3D Manual: 13.3

    Science.gov (United States)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2018-01-01

    This manual describes the installation and execution of FUN3D version 13.3, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  10. FUN3D Manual: 12.8

    Science.gov (United States)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.8, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  11. FUN3D Manual: 13.1

    Science.gov (United States)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2017-01-01

    This manual describes the installation and execution of FUN3D version 13.1, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  12. FUN3D Manual: 13.2

    Science.gov (United States)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2017-01-01

    This manual describes the installation and execution of FUN3D version 13.2, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  13. FUN3D Manual: 12.9

    Science.gov (United States)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 12.9, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  14. FUN3D Manual: 13.0

    Science.gov (United States)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bill; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2016-01-01

    This manual describes the installation and execution of FUN3D version 13.0, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  15. FUN3D Manual: 12.7

    Science.gov (United States)

    Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.

    2015-01-01

    This manual describes the installation and execution of FUN3D version 12.7, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.

  16. Determination of the 3d³4d and 3d³5s configurations of Fe V

    International Nuclear Information System (INIS)

    Azarov, V.I.

    2001-01-01

    The analysis of the spectrum of four times ionized iron, Fe V, has led to the determination of the 3d³4d and 3d³5s configurations. From 975 classified lines in the region 645-1190 Å we have established 123 of 168 theoretically possible 3d³4d levels and 26 of 38 possible 3d³5s levels. The estimated accuracy of values of energy levels of these two configurations is about 0.7 cm⁻¹ and 1.0 cm⁻¹, respectively. The level structure of the system of the 3d⁴, 3d³4s, 3d³4d and 3d³5s configurations has been theoretically interpreted and the energy parameters have been determined by a least squares fit to the observed levels. A comparison of parameters in Cr III and Fe V ions is given. (orig.)

  17. 3D object-oriented image analysis in 3D geophysical modelling

    DEFF Research Database (Denmark)

    Fadel, I.; van der Meijde, M.; Kerle, N.

    2015-01-01

    Non-uniqueness of satellite gravity interpretation has traditionally been reduced by using a priori information from seismic tomography models. This reduction in the non-uniqueness has been based on velocity-density conversion formulas or user interpretation of the 3D subsurface structures (objects......) based on the seismic tomography models and then forward modelling these objects. However, this form of object-based approach has been done without a standardized methodology on how to extract the subsurface structures from the 3D models. In this research, a 3D object-oriented image analysis (3D OOA......) approach was implemented to extract the 3D subsurface structures from geophysical data. The approach was applied on a 3D shear wave seismic tomography model of the central part of the East African Rift System. Subsequently, the extracted 3D objects from the tomography model were reconstructed in the 3D...

  18. Hardware-accelerated autostereogram rendering for interactive 3D visualization

    Science.gov (United States)

    Petz, Christoph; Goldluecke, Bastian; Magnor, Marcus

    2003-05-01

    Single Image Random Dot Stereograms (SIRDS) are an attractive way of depicting three-dimensional objects using conventional display technology. Once trained in decoupling the eyes' convergence and focusing, autostereograms of this kind are able to convey the three-dimensional impression of a scene. We present in this work an algorithm that generates SIRDS at interactive frame rates on a conventional PC. The presented system allows rotating a 3D geometry model and observing the object from arbitrary positions in real-time. Subjective tests show that the perception of a moving or rotating 3D scene presents no problem: The gaze remains focused onto the object. In contrast to conventional SIRDS algorithms, we render multiple pixels in a single step using a texture-based approach, exploiting the parallel-processing architecture of modern graphics hardware. A vertex program determines the parallax for each vertex of the geometry model, and the graphics hardware's texture unit is used to render the dot pattern. No data has to be transferred between main memory and the graphics card for generating the autostereograms, leaving CPU capacity available for other tasks. Frame rates of 25 fps are attained at a resolution of 1024x512 pixels on a standard PC using a consumer-grade nVidia GeForce4 graphics card, demonstrating the real-time capability of the system.
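
    For reference, a plain CPU version of the classic random-dot stereogram algorithm (not the texture-based GPU approach of the paper, and without hidden-surface removal) can be sketched as follows; the depth map, eye separation and depth-of-field factor are made-up example values.

        import numpy as np

        def sirds(depth, eye_sep=80, mu=1.0 / 3.0, seed=0):
            # Simplified single-image random-dot stereogram generator.
            rng = np.random.default_rng(seed)
            h, w = depth.shape
            out = np.empty((h, w), dtype=np.uint8)
            for y in range(h):
                same = np.arange(w)                   # each pixel initially linked to itself
                for x in range(w):
                    z = float(depth[y, x])            # depth in [0, 1], 1 = nearest
                    sep = int(round(eye_sep * (1.0 - mu * z) / (2.0 - mu * z)))
                    left, right = x - sep // 2, x - sep // 2 + sep
                    if left >= 0 and right < w:
                        same[right] = left            # constrain the pair to share a colour
                row = rng.integers(0, 2, w, dtype=np.uint8) * 255
                for x in range(w):
                    if same[x] != x:
                        row[x] = row[same[x]]         # propagate colours left to right
                out[y] = row
            return out

        # Toy depth map: a raised square floating above a flat background.
        d = np.zeros((200, 400))
        d[60:140, 160:240] = 0.8
        img = sirds(d)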

  19. Do-It-Yourself: 3D Models of Hydrogenic Orbitals through 3D Printing

    Science.gov (United States)

    Griffith, Kaitlyn M.; de Cataldo, Riccardo; Fogarty, Keir H.

    2016-01-01

    Introductory chemistry students often have difficulty visualizing the 3-dimensional shapes of the hydrogenic electron orbitals without the aid of physical 3D models. Unfortunately, commercially available models can be quite expensive. 3D printing offers a solution for producing models of hydrogenic orbitals. 3D printing technology is widely…

  20. 3D-modeling and 3D-printing explorations on Japanese tea ceremony utensils

    NARCIS (Netherlands)

    Levy, P.D.; Yamada, Shigeru

    2017-01-01

    In this paper, we inquire into aesthetic aspects of the Japanese tea ceremony, described as the aesthetics of imperfection, based on novel fabrication technologies: 3D-modeling and 3D-printing. To do so, 3D-printed utensils (chashaku and chasen) were iteratively designed for the ceremony and were