WorldWideScience

Sample records for multi-camera realtime 3d

  1. REAL-TIME CAMERA GUIDANCE FOR 3D SCENE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    F. Schindler

    2012-07-01

Full Text Available We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and the 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points, we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to results that are accurate compared to ground truth after only a few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms such as mobile phones, autonomously navigating robots, and online flight planning of unmanned aerial vehicles.
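
As an illustration of the sliding-bundle-adjustment uncertainty estimate the abstract refers to, the sketch below approximates per-point 3×3 covariances from the Jacobian of the reprojection residuals at the optimum. This is not the authors' implementation; the function, the assumed parameter ordering (points in the trailing columns), and the eigenvalue-based guidance rule in the closing comment are all illustrative assumptions.

```python
import numpy as np

def point_covariances(J, residuals, n_points):
    """Approximate per-point 3x3 covariances from a bundle-adjustment Jacobian.

    J         : (n_residuals, n_params) Jacobian at the optimum
    residuals : (n_residuals,) reprojection residuals at the optimum
    Assumes the 3D point parameters occupy the trailing 3*n_points columns.
    """
    dof = max(J.shape[0] - J.shape[1], 1)
    sigma2 = (residuals @ residuals) / dof           # unit-weight variance
    cov_all = sigma2 * np.linalg.pinv(J.T @ J)       # full parameter covariance
    covs = []
    for i in range(n_points):
        s = cov_all.shape[0] - 3 * (n_points - i)
        covs.append(cov_all[s:s + 3, s:s + 3])
    return covs

# A guidance loop could then suggest a new viewpoint wherever the largest
# eigenvalue of a point covariance (its worst-case positional uncertainty)
# exceeds a chosen threshold.
```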

  2. High-precision real-time 3D shape measurement based on a quad-camera system

    Science.gov (United States)

    Tao, Tianyang; Chen, Qian; Feng, Shijie; Hu, Yan; Zhang, Minliang; Zuo, Chao

    2018-01-01

Phase-shifting profilometry (PSP) based 3D shape measurement is well established in various applications due to its high accuracy, simple implementation, and robustness to environmental illumination and surface texture. In PSP, higher depth resolution generally requires a higher fringe density in the projected patterns, which, in turn, leads to severe phase ambiguities that must be resolved with additional information from phase coding and/or geometric constraints. However, in order to guarantee the reliability of phase unwrapping, available techniques usually require an increased number of patterns, a reduced fringe amplitude, and complicated post-processing algorithms. In this work, we demonstrate that by using a quad-camera multi-view fringe projection system and carefully arranging the relative spatial positions between the cameras and the projector, it becomes possible to completely eliminate the phase ambiguities in conventional three-step PSP patterns with high fringe density, without projecting any additional patterns or embedding any auxiliary signals. Benefiting from the position-optimized quad-camera system, stereo phase unwrapping can be performed efficiently and reliably through flexible phase consistency checks. Moreover, the redundant information from multiple phase consistency checks is fully exploited through a weighted phase difference scheme to further enhance the reliability of phase unwrapping. This paper explains the 3D measurement principle and the basic design of the quad-camera system, and finally demonstrates that, in a large measurement volume of 200 mm × 200 mm × 400 mm, the resulting dynamic 3D sensing system can realize real-time 3D reconstruction at 60 frames per second with a depth precision of 50 μm.
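
The "conventional three-step PSP patterns" mentioned above admit a closed-form wrapped phase. A minimal sketch, assuming three fringe images with phase shifts of −2π/3, 0 and +2π/3 (the array names are illustrative):

```python
import numpy as np

def wrapped_phase(I1, I2, I3):
    """Wrapped phase in (-pi, pi] from three fringe images taken with
    phase shifts of -2*pi/3, 0 and +2*pi/3."""
    I1, I2, I3 = (np.asarray(I, dtype=float) for I in (I1, I2, I3))
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)
```

What remains ambiguous is the integer fringe order; that is exactly what the position-optimized quad-camera geometry resolves through phase consistency checks between camera pairs.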

  3. A multi-frequency electrical impedance tomography system for real-time 2D and 3D imaging

    Science.gov (United States)

    Yang, Yunjie; Jia, Jiabin

    2017-08-01

This paper presents the design and evaluation of a configurable, fast multi-frequency Electrical Impedance Tomography (mfEIT) system for real-time 2D and 3D imaging, particularly for biomedical imaging. The system integrates 32 electrode interfaces, and the current frequency ranges from 10 kHz to 1 MHz. The system incorporates the following novel features. First, a fully adjustable multi-frequency current source with a current monitoring function is designed. Second, a flexible switching scheme is developed for arbitrary sensing configurations, and a semi-parallel data acquisition architecture is implemented for high-frame-rate data acquisition. Furthermore, multi-frequency digital quadrature demodulation is accomplished in a high-capacity Field Programmable Gate Array. Finally, 3D imaging software, Visual Tomography, is developed for real-time 2D and 3D image reconstruction, data analysis, and visualization. The mfEIT system is systematically tested and evaluated in terms of signal-to-noise ratio (SNR), frame rate, and 2D and 3D multi-frequency phantom imaging. The highest SNR is 82.82 dB on a 16-electrode sensor. The frame rate is up to 546 fps in serial mode and 1014 fps in semi-parallel mode. The evaluation results indicate that the presented mfEIT system is a powerful tool for real-time 2D and 3D imaging.
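
Digital quadrature demodulation, which the paper performs inside the FPGA, can be sketched in software as follows. The signal model, frequency and sampling values are illustrative, and a simple average over an integer number of cycles stands in for the low-pass filter.

```python
import numpy as np

def quadrature_demodulate(signal, f, fs):
    """Amplitude and phase of the component at frequency f (Hz) in a signal
    sampled at fs (Hz), via multiplication with quadrature references followed
    by averaging (ideally over an integer number of cycles)."""
    t = np.arange(len(signal)) / fs
    i = 2.0 * np.mean(signal * np.cos(2 * np.pi * f * t))    # in-phase
    q = -2.0 * np.mean(signal * np.sin(2 * np.pi * f * t))   # quadrature
    return np.hypot(i, q), np.arctan2(q, i)

fs, f = 10e6, 100e3                       # illustrative sampling and excitation
t = np.arange(1000) / fs                  # 10 full cycles of the 100 kHz tone
s = 0.5 * np.cos(2 * np.pi * f * t + 0.3)
print(quadrature_demodulate(s, f, fs))    # ~ (0.5, 0.3)
```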

  4. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    Science.gov (United States)

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single-lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are then applied to images taken with the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant property, the whole 3D modeling pipeline can be performed fully automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. Objects imaged include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in museums.

  5. Real-time multi-camera video acquisition and processing platform for ADAS

    Science.gov (United States)

    Saponara, Sergio

    2016-04-01

The paper presents the design of a real-time and low-cost embedded system for image acquisition and processing in Advanced Driver Assistance Systems (ADAS). The system adopts a multi-camera architecture to provide a panoramic view of the objects surrounding the vehicle. Fish-eye lenses are used to achieve a large Field of View (FOV). Since they introduce radial distortion of the images projected on the sensors, a real-time algorithm for its correction is also implemented in a pre-processor. An FPGA-based hardware implementation, re-using IP macrocells for several ADAS algorithms, allows for real-time processing of input streams from VGA automotive CMOS cameras.
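
The fish-eye correction stage can be emulated in software with OpenCV's fisheye model; the paper's version runs in FPGA hardware as a pre-processor. In this sketch the intrinsic matrix, distortion coefficients and input frame are placeholders, not values from the paper.

```python
import cv2
import numpy as np

K = np.array([[320.0, 0.0, 640.0],
              [0.0, 320.0, 360.0],
              [0.0, 0.0, 1.0]])              # assumed fish-eye intrinsics
D = np.array([0.1, -0.05, 0.01, -0.002])     # assumed distortion coefficients

frame = np.zeros((720, 1280, 3), np.uint8)   # stand-in for a camera frame
h, w = frame.shape[:2]

# Precompute the undistortion maps once, then remap every incoming frame.
map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), K, (w, h),
                                                 cv2.CV_16SC2)
undistorted = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
```

Precomputing the maps and applying only `cv2.remap` per frame mirrors how a hardware pre-processor would stream pixels through a fixed lookup.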

  6. PRIMAS: a real-time 3D motion-analysis system

    Science.gov (United States)

    Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans

    1994-03-01

    The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in the fields of sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new feature of real-time 3D opens an even broader perspective of application areas; animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching of the images and subsequent 3D reconstruction of marker positions. When using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Limitations in the performance are determined by the visibility of the markers, which could be improved by adding a third camera.
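
The final step of such a marker tracker, turning a matched 2D detection pair into a 3D position, can be sketched with OpenCV's linear triangulation. The projection matrices and coordinates below are placeholders (identity intrinsics, 20 cm baseline), not PRIMAS calibration data.

```python
import cv2
import numpy as np

# Two calibrated views in normalized image coordinates (placeholder values).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])  # 20 cm baseline

# One matched retro-reflective marker centroid per camera.
pt1 = np.array([[0.10], [0.12]])
pt2 = np.array([[0.08], [0.12]])

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)  # homogeneous 4x1 result
X = (X_h[:3] / X_h[3]).ravel()                 # 3D position, here ~(1.0, 1.2, 10.0)
```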

  7. Integration of real-time 3D capture, reconstruction, and light-field display

    Science.gov (United States)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality, and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the 3D model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse camera array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of the proposed integrated 3D visualization system.

  8. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    Science.gov (United States)

    Thoeni, K.; Giacomini, A.; Murtagh, R.; Kniest, E.

    2014-06-01

This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements needed to obtain results comparable to those of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from distances of approximately 5 to 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS, the latter being taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate its applicability to geotechnical problems.
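
The CloudCompare comparison amounts to nearest-neighbour distances from each camera-derived point to the TLS cloud taken as ground truth. A sketch with SciPy, using synthetic stand-ins for the two clouds:

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)
tls_cloud = rng.random((10000, 3)) * [20.0, 6.0, 0.5]          # stand-in TLS scan
cam_cloud = tls_cloud[:5000] + rng.normal(0, 0.01, (5000, 3))  # noisy photo cloud

dists, _ = cKDTree(tls_cloud).query(cam_cloud)   # cloud-to-cloud distances (m)
print(f"mean deviation {dists.mean() * 1000:.1f} mm, "
      f"95th percentile {np.percentile(dists, 95) * 1000:.1f} mm")
```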

  9. A comparison of multi-view 3D reconstruction of a rock wall using several cameras and a laser scanner

    Directory of Open Access Journals (Sweden)

    K. Thoeni

    2014-06-01

Full Text Available This work presents a comparative study between multi-view 3D reconstruction using various digital cameras and a terrestrial laser scanner (TLS). Five different digital cameras were used in order to estimate the limits related to the camera type and to establish the minimum camera requirements needed to obtain results comparable to those of the TLS. The cameras used for this study range from commercial grade to professional grade and included a GoPro Hero 1080 (5 Mp), iPhone 4S (8 Mp), Panasonic Lumix LX5 (9.5 Mp), Panasonic Lumix ZS20 (14.1 Mp) and Canon EOS 7D (18 Mp). The TLS used for this work was a FARO Focus 3D laser scanner with a range accuracy of ±2 mm. The study area is a small rock wall of about 6 m height and 20 m length. The wall is partly smooth with some evident geological features, such as non-persistent joints and sharp edges. Eight control points were placed on the wall and their coordinates were measured by using a total station. These coordinates were then used to georeference all models. A similar number of images was acquired from distances of approximately 5 to 10 m, depending on the field of view of each camera. The commercial software package PhotoScan was used to process the images, georeference and scale the models, and generate the dense point clouds. Finally, the open-source package CloudCompare was used to assess the accuracy of the multi-view results. Each point cloud obtained from a specific camera was compared to the point cloud obtained with the TLS, the latter being taken as ground truth. The result is a coloured point cloud for each camera showing the deviation in relation to the TLS data. The main goal of this study is to quantify the quality of the multi-view 3D reconstruction results obtained with various cameras as objectively as possible and to evaluate its applicability to geotechnical problems.

  10. A multi-camera system for real-time pose estimation

    Science.gov (United States)

    Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin

    2007-04-01

This paper presents a multi-camera system that performs face detection and pose estimation in real time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color, and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates about the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.

  11. MonoSLAM: real-time single camera SLAM.

    Science.gov (United States)

    Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier

    2007-06-01

    We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.

  12. CamOn: A Real-Time Autonomous Camera Control System

    DEFF Research Database (Denmark)

    Burelli, Paolo; Jhala, Arnav Harish

    2009-01-01

This demonstration presents CamOn, an autonomous camera control system for real-time 3D games. CamOn employs multiple Artificial Potential Fields (APFs), a robot motion planning technique, to control both the location and orientation of the camera. Scene geometry from the 3D environment contributes to the potential field that is used to determine position and movement of the camera. Composition constraints for the camera are modelled as potential fields for controlling the view target of the camera. CamOn combines the compositional benefits of constraint-based camera systems, and improves…

  13. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    Science.gov (United States)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.

  14. A real-time 3D scanning system for pavement distortion inspection

    International Nuclear Information System (INIS)

    Li, Qingguang; Yao, Ming; Yao, Xun; Xu, Bugao

    2010-01-01

Pavement distortions, such as rutting and shoving, are common pavement distress problems that need to be inspected and repaired in a timely manner to ensure ride quality and traffic safety. This paper introduces a real-time, low-cost inspection system devoted to detecting these distress features using high-speed 3D transverse scanning techniques. The detection principle is the dynamic generation and characterization of the 3D pavement profile based on structured-light triangulation. To improve the accuracy of the system, a multi-view coplanar scheme is employed in the calibration procedure so that more feature points can be used and distributed across the field of view of the camera. A sub-pixel line extraction method is applied for laser stripe location, which includes filtering, edge detection and spline interpolation. The pavement transverse profile is then generated from the laser stripe curve and approximated by line segments. The second-order derivatives at the segment endpoints are used to identify the feature points of possible distortions. The system can output real-time measurements and 3D visualizations of rutting and shoving distress in a scanned pavement.
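
Sub-pixel laser stripe localisation of the kind described can be sketched as an intensity-weighted centroid around the brightest pixel of each image column; the paper's pipeline additionally filters, detects edges and interpolates with splines.

```python
import numpy as np

def stripe_rows(img, half=3):
    """One sub-pixel stripe row per image column of a grayscale array,
    from an intensity-weighted centroid around the per-column maximum."""
    img = np.asarray(img, dtype=float)
    rows = np.zeros(img.shape[1])
    for c in range(img.shape[1]):
        col = img[:, c]
        r0 = int(np.argmax(col))                      # coarse stripe location
        lo, hi = max(r0 - half, 0), min(r0 + half + 1, img.shape[0])
        w = col[lo:hi]
        rows[c] = (np.arange(lo, hi) * w).sum() / max(w.sum(), 1e-9)
    return rows
```

Each sub-pixel row, together with the calibrated laser-plane/camera geometry, then yields one 3D profile point per column by triangulation.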

  15. Towards next generation 3D cameras

    Science.gov (United States)

    Gupta, Mohit

    2017-03-01

We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), the presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that address these long-standing problems. This includes designing 'all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover the shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed 3D shape under such conditions, enabling applications such as robotic inspection and assembly systems.

  16. Automatic multi-camera calibration for deployable positioning systems

    Science.gov (United States)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
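
The pairwise step of the automated method, estimating the essential matrix with the 5-point method and decomposing it into a relative pose, can be sketched with OpenCV. The intrinsics and the matched points below are stand-ins, not data from the field trial.

```python
import cv2
import numpy as np

K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])                  # assumed (known) intrinsics

rng = np.random.default_rng(1)
pts1 = rng.uniform([0, 0], [1920, 1080], (50, 2))   # stand-in feature matches
pts2 = pts1 + [40.0, 0.0]

E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                  method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)   # relative rotation, unit translation
```

Note that `t` is recovered only up to scale, which is why such systems still need one known distance (or a calibration object) to metrically scale the camera network.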

  17. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    Directory of Open Access Journals (Sweden)

    Yu Zhang

    2016-06-01

Full Text Available For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have, therefore, developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R2 = 0.98) and 0.57 mm (R2 = 0.99), respectively. The accuracy of the 3D model reconstruction of leaf and stem by a 28-mm lens at the first and third camera positions was the highest, and the number of reconstructed fine-scale 3D surface features of leaf and stem was the greatest. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system is well suited to capturing high-resolution 3D images of nursery plants with high efficiency.

  18. Estimating 3D Leaf and Stem Shape of Nursery Paprika Plants by a Novel Multi-Camera Photography System

    Science.gov (United States)

    Zhang, Yu; Teng, Poching; Shimizu, Yo; Hosoi, Fumiki; Omasa, Kenji

    2016-01-01

For plant breeding and growth monitoring, accurate measurements of plant structure parameters are crucial. We have, therefore, developed a high-efficiency Multi-Camera Photography (MCP) system combining Multi-View Stereovision (MVS) with the Structure from Motion (SfM) algorithm. In this paper, we measured six variables of nursery paprika plants and investigated the accuracy of 3D models reconstructed from photos taken by four lens types at four different positions. The results demonstrated that the error between estimated and measured values was small, and the root-mean-square errors (RMSE) for leaf width/length and stem height/diameter were 1.65 mm (R2 = 0.98) and 0.57 mm (R2 = 0.99), respectively. The accuracy of the 3D model reconstruction of leaf and stem by a 28-mm lens at the first and third camera positions was the highest, and the number of reconstructed fine-scale 3D surface features of leaf and stem was the greatest. The results confirmed the practicability of our new method for the reconstruction of fine-scale plant models and accurate estimation of plant parameters. They also showed that our system is well suited to capturing high-resolution 3D images of nursery plants with high efficiency. PMID:27314348

  19. Development of compact Compton camera for 3D image reconstruction of radioactive contamination

    Science.gov (United States)

    Sato, Y.; Terasaka, Y.; Ozawa, S.; Nakamura Miyamura, H.; Kaburagi, M.; Tanifuji, Y.; Kawabata, K.; Torii, T.

    2017-11-01

The Fukushima Daiichi Nuclear Power Station (FDNPS), operated by Tokyo Electric Power Company Holdings, Inc., went into meltdown after the large tsunami caused by the Great East Japan Earthquake of March 11, 2011. Very large amounts of radionuclides were released from the damaged plant. Radiation distribution measurements inside the FDNPS buildings are indispensable for executing decommissioning tasks in the reactor buildings. We have developed a compact Compton camera to measure the distribution of radioactive contamination inside the FDNPS buildings three-dimensionally (3D). The total weight of the Compton camera is less than 1.0 kg. The gamma-ray sensor of the Compton camera employs Ce-doped GAGG (Gd3Al2Ga3O12) scintillators coupled with a multi-pixel photon counter. Angular correction of the detection efficiency of the Compton camera was conducted. Moreover, we developed a 3D back-projection method using the multi-angle data measured with the Compton camera. We successfully obtained 3D radiation images of the two 137Cs radioactive sources, with the image of the 9.2 MBq source appearing stronger than that of the 2.7 MBq source.

  20. Robust Curb Detection with Fusion of 3D-Lidar and Camera Data

    Directory of Open Access Journals (Sweden)

    Jun Tan

    2014-05-01

Full Text Available Curb detection is an essential component of Autonomous Land Vehicles (ALVs), and is especially important for safe driving in urban environments. In this paper, we propose a fusion-based curb detection method exploiting 3D-Lidar and camera data. More specifically, we first fuse the sparse 3D-Lidar points and high-resolution camera images to recover a dense depth image of the captured scene. Based on the recovered dense depth image, we propose a filter-based method to estimate the normal direction within the image. Then, using multi-scale normal patterns derived from the curb's geometric properties, curb point features fitting the patterns are detected in the normal image row by row. After that, we construct a Markov Chain to model the consistency of curb points, which exploits the continuity of the curb, so that the optimal curb path linking the curb points together can be efficiently estimated by dynamic programming. Finally, we perform post-processing operations to filter outliers, parameterize the curbs and assign confidence scores to the detected curbs. Extensive evaluations clearly show that our proposed method can detect curbs robustly at real-time speed for both static and dynamic scenes.

  1. Real-time vehicle matching for multi-camera tunnel surveillance

    Science.gov (United States)

    Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried

    2011-03-01

Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras observing dozens of vehicles each, computational efficiency is essential for real-time performance. In this paper, we propose a low-complexity yet highly accurate method for vehicle matching using vehicle signatures composed of Radon-transform-like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm in the camera software itself and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
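
The signature idea in miniature: row and column sums of the vehicle image (its Radon projections at 0° and 90°) form a very compact descriptor that can be matched by normalised correlation. A sketch with illustrative helper names:

```python
import numpy as np

def signature(img):
    """Normalised horizontal and vertical projection profiles of a
    grayscale vehicle image."""
    img = np.asarray(img, dtype=float)
    profiles = [img.sum(axis=1), img.sum(axis=0)]
    return [(p - p.mean()) / (p.std() + 1e-9) for p in profiles]

def match_score(sig_a, sig_b):
    """Mean normalised correlation over both profiles; assumes the two
    vehicle crops were resampled to a common size."""
    return float(np.mean([np.mean(a * b) for a, b in zip(sig_a, sig_b)]))
```

Two profiles per vehicle are a few hundred numbers instead of a whole image, which is the data reduction that relaxes the link capacity requirements.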

  2. Autonomous and 3D real-time multi-beam manipulation in a microfluidic environment

    DEFF Research Database (Denmark)

    Perch-Nielsen, I.; Rodrigo, P.J.; Alonzo, C.A.

    2006-01-01

The Generalized Phase Contrast (GPC) method of optical 3D manipulation has previously been used for controlled spatial manipulation of live biological specimens in real time. These biological experiments were carried out over a time-span of several hours while an operator intermittently optimized the optical system. Here we present GPC-based optical micromanipulation in a microfluidic system where trapping experiments are computer-automated and thereby capable of running with only limited supervision. The system is able to dynamically detect living yeast cells using a computer-interfaced CCD camera, and to respond by instantly creating traps at the positions of the spotted cells streaming at flow velocities that would be difficult for a human operator to handle. With the added ability to control flow rates, experiments were also carried out to confirm the theoretically predicted axially dependent…

  3. Collaborative real-time scheduling of multiple PTZ cameras for multiple object tracking in video surveillance

    Science.gov (United States)

    Liu, Yu-Che; Huang, Chung-Lin

    2013-03-01

This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) imagery of the human object's face for biometric purposes, (2) optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on expected capture conditions such as the camera-subject distance, pan-tilt angles of capture, face visibility and others. This objective function serves to effectively balance the number of captures per subject and the quality of the captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.

  4. Real-Time 3D Reconstruction from Images Taken from an UAV

    Science.gov (United States)

    Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.

    2015-08-01

We designed a method for creating 3D models of objects and areas from two aerial images acquired from an UAV. The models are generated automatically and in real time, and consist of dense, true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing accuracy under the real-time constraint. A test was performed in monitoring a construction yard, with very promising results: highly realistic and easy-to-interpret 3D models of the objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. Given these characteristics, the designed method is suitable for video surveillance, remote sensing and monitoring, especially in applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.
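
The dense two-view reconstruction step can be sketched with OpenCV's semi-global matcher on a rectified pair; the paper's own algorithm is tuned to the real-time constraint and is not reproduced here. The images below are random stand-ins.

```python
import cv2
import numpy as np

# Stand-ins for two overlapping aerial frames after stereo rectification.
rng = np.random.default_rng(2)
left = rng.integers(0, 256, (480, 640), dtype=np.uint8)
right = rng.integers(0, 256, (480, 640), dtype=np.uint8)

sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = sgbm.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> px
# Metric depth then follows as focal_length_px * baseline_m / disparity
# wherever disparity > 0.
```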

  5. Single Camera Calibration in 3D Vision

    Directory of Open Access Journals (Sweden)

    Caius SULIMAN

    2009-12-01

Full Text Available Camera calibration is a necessary step in 3D vision in order to extract metric information from 2D images. A camera is considered to be calibrated when its parameters are known (i.e., principal distance, lens distortion, focal length, etc.). In this paper we deal with a single-camera calibration method, with the help of which we find the intrinsic and extrinsic camera parameters. The method was implemented with success in the Matlab programming and simulation environment.
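
The chessboard procedure behind such a calibration, sketched with OpenCV: detect inner corners in several views, then solve simultaneously for the intrinsics (principal distance/focal length, distortion) and the per-view extrinsics. Filenames and board dimensions are placeholders.

```python
import cv2
import numpy as np

pattern = (9, 6)                                  # inner corners of the board
square = 25.0                                     # square size in mm (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts, size = [], [], None
for fname in ["cal01.png", "cal02.png", "cal03.png"]:   # placeholder images
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

# K holds the intrinsics, dist the lens distortion; rvecs/tvecs are the
# extrinsic pose of the board in each accepted view.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size,
                                                 None, None)
```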

  6. Real-time tracking for virtual environments using SCAAT Kalman filtering and unsynchronised cameras

    DEFF Research Database (Denmark)

    Rasmussen, Niels Tjørnly; Störring, Morritz; Moeslund, Thomas B.

    2006-01-01

    This paper presents a real-time outside-in camera-based tracking system for wireless 3D pose tracking of a user’s head and hand in a virtual environment. The system uses four unsynchronised cameras as sensors and passive retroreflective markers arranged in rigid bodies as targets. In order to ach...

  7. Synchronized 2D/3D optical mapping for interactive exploration and real-time visualization of multi-function neurological images.

    Science.gov (United States)

    Zhang, Qi; Alexander, Murray; Ryner, Lawrence

    2013-01-01

Efficient software with the ability to display multiple neurological image datasets simultaneously with full real-time interactivity is critical for brain disease diagnosis and image-guided planning. In this paper, we describe the creation and function of a new comprehensive software platform that integrates novel algorithms and functions for multiple medical image visualization, processing, and manipulation. We implement an opacity-adjustment algorithm to build 2D lookup tables for multiple slice-image display and fusion, which achieves a better visual result than those obtained using VTK-based methods. We also develop a new real-time 2D and 3D data synchronization scheme for multi-function MR volume and slice-image optical mapping and rendering simultaneously, driven by the same adjustment operation. All these methodologies are integrated into our software framework to provide users with an efficient tool for flexibly, intuitively, and rapidly exploring and analyzing functional and anatomical MR neurological data. Finally, we validate our new techniques and software platform with visual analysis and task-specific user studies. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. A Quality Evaluation of Single and Multiple Camera Calibration Approaches for an Indoor Multi Camera Tracking System

    Directory of Open Access Journals (Sweden)

    M. Adduci

    2014-06-01

Full Text Available Human detection and tracking has been a prominent research area for many scientists around the globe. State-of-the-art algorithms have been implemented, refined and accelerated to significantly improve the detection rate and eliminate false positives. While 2D approaches are well investigated, 3D human detection and tracking remains a largely unexplored research field. In both the 2D and 3D cases, introducing a multi-camera system can vastly improve the accuracy and confidence of the tracking process. Within this work, a quality evaluation is performed on a multi-RGB-D-camera indoor tracking system, examining how camera calibration and pose affect the quality of human tracks in the scene, independently of the detection and tracking approach used. After performing a calibration step on every Kinect sensor, state-of-the-art single-camera pose estimators were evaluated to check how well the poses are estimated using planar objects such as an ordinary chessboard. With this information, a bundle block adjustment and ICP were performed to verify the accuracy of the single pose estimators in a multi-camera configuration. Results have shown that single-camera estimators provide high-accuracy results of less than half a pixel, forcing the bundle to converge after very few iterations. In relation to ICP, relative information between cloud pairs is more or less preserved, giving a low fitting score between concatenated pairs. Finally, sensor calibration proved to be an essential step for achieving maximum accuracy in the generated point clouds, and therefore in the accuracy of the 3D trajectories produced from each sensor.
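
Single-camera pose estimation against a planar chessboard, the quantity being evaluated in the paper, reduces to a PnP solve once the corners are detected. The intrinsics below are commonly quoted Kinect RGB defaults and, like the input image, are assumptions for illustration.

```python
import cv2
import numpy as np

pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 0.025  # 25 mm

K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])        # assumed Kinect RGB intrinsics
dist = np.zeros(5)                     # distortion neglected for the sketch

gray = cv2.imread("kinect_view.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
found, corners = cv2.findChessboardCorners(gray, pattern)
if found:
    ok, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)   # board-to-camera pose
```

Comparing the poses that several sensors report for the same board is what seeds the bundle block adjustment and the ICP verification described above.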

  9. Collaborative 3D Target Tracking in Distributed Smart Camera Networks for Wide-Area Surveillance

    Directory of Open Access Journals (Sweden)

    Xenofon Koutsoukos

    2013-05-01

Full Text Available With the evolution and fusion of wireless sensor network and embedded camera technologies, distributed smart camera networks have emerged as a new class of systems for wide-area surveillance applications. Wireless networks, however, introduce a number of constraints to the system that need to be considered, notably the communication bandwidth constraints. Existing approaches for target tracking using a camera network typically utilize target handover mechanisms between cameras, or combine results from 2D trackers in each camera into 3D target estimation. Such approaches suffer from scale selection, target rotation, and occlusion, drawbacks typically associated with 2D tracking. In this paper, we present an approach for tracking multiple targets directly in 3D space using a network of smart cameras. The approach employs multi-view histograms to characterize targets in 3D space using color and texture as the visual features. The visual features from each camera along with the target models are used in a probabilistic tracker to estimate the target state. We introduce four variations of our base tracker that incur different computational and communication costs on each node and result in different tracking accuracy. We demonstrate the effectiveness of our proposed trackers by comparing their performance to a 3D tracker that fuses the results of independent 2D trackers. We also present performance analysis of the base tracker along Quality-of-Service (QoS) and Quality-of-Information (QoI) metrics, and study QoS vs. QoI trade-offs between the proposed tracker variations. Finally, we demonstrate our tracker in a real-life scenario using a camera network deployed in a building.

  10. Multi-camera and structured-light vision system (MSVS) for dynamic high-accuracy 3D measurements of railway tunnels.

    Science.gov (United States)

    Zhan, Dong; Yu, Long; Xiao, Jian; Chen, Tanglong

    2015-04-14

    Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  11. Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.

    Science.gov (United States)

    Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L

    2015-06-01

Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts including the opening, approach, and closing of the surgical site. In addition, many other procedures, including complex spine, trauma, and intensive care unit procedures, are rarely recorded at all. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system used to obtain stereoscopic 3D recordings of these seldom-recorded aspects of neurosurgery. The strengths and limitations of using the GoPro® 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded more than 50 cranial and spinal surgeries in stereoscopic 3D and created a library for educational purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset that supplements 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.

  12. An active robot vision system for real-time 3-D structure recovery

    Energy Technology Data Exchange (ETDEWEB)

Juvin, D. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. d'Electronique et d'Instrumentation Nucleaire]; Boukir, S.; Chaumette, F.; Bouthemy, P. [Rennes-1 Univ., 35 (France)]

    1993-10-01

This paper presents an active approach to the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is achieved by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders; therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board, and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.

  13. An active robot vision system for real-time 3-D structure recovery

    International Nuclear Information System (INIS)

    Juvin, D.

    1993-01-01

This paper presents an active approach to the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is achieved by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders; therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board, and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.

  14. Multi-Camera and Structured-Light Vision System (MSVS) for Dynamic High-Accuracy 3D Measurements of Railway Tunnels

    Directory of Open Access Journals (Sweden)

    Dong Zhan

    2015-04-01

Full Text Available Railway tunnel 3D clearance inspection is critical to guaranteeing railway operation safety. However, it is a challenge to inspect railway tunnel 3D clearance using a vision system, because both the spatial range and field of view (FOV) of such measurements are quite large. This paper summarizes our work on dynamic railway tunnel 3D clearance inspection based on a multi-camera and structured-light vision system (MSVS). First, the configuration of the MSVS is described. Then, the global calibration for the MSVS is discussed in detail. The onboard vision system is mounted on a dedicated vehicle and is expected to suffer from multiple degrees of freedom vibrations brought about by the running vehicle. Any small vibration can result in substantial measurement errors. In order to overcome this problem, a vehicle motion deviation rectifying method is investigated. Experiments using the vision inspection system are conducted with satisfactory online measurement results.

  15. Design and implementation of real-time multi-sensor vision systems

    CERN Document Server

    Popovic, Vladan; Cogal, Ömer; Akin, Abdulkadir; Leblebici, Yusuf

    2017-01-01

This book discusses the design of multi-camera systems and their application to fields such as virtual reality, gaming, the film industry, medicine, the automotive industry, and drones. The authors cover the basics of image formation, algorithms for stitching a panoramic image from multiple cameras, and multiple real-time hardware system architectures for producing panoramic videos. Several specific applications of multi-camera systems are presented, such as depth estimation, high-dynamic-range imaging, and medical imaging.

  16. Novel, full 3D scintillation dosimetry using a static plenoptic camera

    Science.gov (United States)

    Goulet, Mathieu; Rilling, Madison; Gingras, Luc; Beddar, Sam; Beaulieu, Luc; Archambault, Louis

    2014-01-01

Purpose: Patient-specific quality assurance (QA) of dynamic radiotherapy delivery would benefit from being performed with a 3D dosimeter. However, 3D dosimeters, such as gels, have many disadvantages that limit their use for quality assurance, such as tedious read-out procedures and poor reproducibility. The purpose of this work is to develop and validate a novel type of high-resolution 3D dosimeter based on real-time light acquisition from a plastic scintillator volume using a plenoptic camera. This dosimeter would allow the QA of dynamic radiation therapy techniques such as intensity-modulated radiation therapy (IMRT) or volumetric-modulated arc therapy (VMAT). Methods: A Raytrix R5 plenoptic camera was used to image a 10 × 10 × 10 cm3 EJ-260 plastic scintillator embedded inside an acrylic phantom at a rate of one acquisition per second. The scintillator volume was irradiated with both an IMRT and a VMAT treatment plan on a Clinac iX linear accelerator. The 3D light distribution emitted by the scintillator volume was reconstructed at a 2 mm resolution in all dimensions by back-projecting the light collected by each pixel of the light-field camera using an iterative reconstruction algorithm. The latter was constrained by a beam's-eye-view projection of the incident dose acquired using the portal imager integrated with the linac and by physical considerations of the dose behavior as a function of depth in the phantom. Results: The absolute dose difference between the reconstructed 3D dose and the expected dose calculated using the treatment planning software Pinnacle3 was on average below 1.5% of the maximum dose for both integrated IMRT and VMAT deliveries, and below 3% for individual IMRT incidences. Dose agreement between the reconstructed 3D dose and a radiochromic film acquisition in the same experimental phantom was on average within 2.1% and 1.2% of the maximum recorded dose for the IMRT and VMAT deliveries, respectively. Conclusions: Using a plenoptic camera…

  17. Real-Time 3D Face Acquisition Using Reconfigurable Hybrid Architecture

    Directory of Open Access Journals (Sweden)

    Mitéran Johel

    2007-01-01

Full Text Available Acquiring 3D data of the human face is a general problem with applications in face recognition, virtual reality, and many other areas. It can be solved using stereovision, a technique that acquires three-dimensional data from two cameras. The aim is to implement an algorithmic chain which makes it possible to obtain a three-dimensional space from two two-dimensional spaces: two images coming from the two cameras. Several implementations have already been considered. We propose a new, simple real-time implementation based on a hybrid FPGA-DSP architecture, allowing for embedded and reconfigurable processing. We then show that our method provides a depth map of the face that is dense and reliable and can be implemented on an embedded architecture. A study of various architectures led us to a judicious choice that yields the desired result. The real-time data processing is implemented in an embedded architecture. We obtain a dense face disparity map, precise enough for the considered applications (multimedia, virtual worlds, biometrics), using a reliable method.

  18. 3D for the people: multi-camera motion capture in the field with consumer-grade cameras and open source software

    Directory of Open Access Journals (Sweden)

    Brandon E. Jackson

    2016-09-01

    Full Text Available Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts.

  19. Real-time 3D-surface-guided head refixation useful for fractionated stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Li Shidong; Liu Dezhi; Yin Gongjie; Zhuang Ping; Geng, Jason

    2006-01-01

Accurate and precise head refixation in fractionated stereotactic radiotherapy has been achieved through alignment of real-time 3D surface images with a reference surface image. The reference surface image is either a 3D optical surface image taken at simulation in the desired treatment position, or a CT/MRI surface rendering from the treatment plan with corrections for patient motion during CT/MRI scans and partial-volume effects. The real-time 3D surface images are rapidly captured using a 3D video camera mounted on the ceiling of the treatment vault. Any facial expression, such as mouth opening, that affects surface shape and location can be avoided using a new facial monitoring technique. Image artifacts on the real-time surface can generally be removed by setting a threshold on jumps at neighboring points while preserving detailed features of the surface of interest. Such a real-time surface image, registered in the treatment machine coordinate system, provides a reliable representation of the patient's head position during treatment. A fast automatic alignment between the real-time surface and the reference surface using a modified iterative-closest-point method leads to efficient and robust surface-guided target refixation. Experimental and clinical results demonstrate excellent efficacy: <2 min set-up time, the desired accuracy and precision of <1 mm in isocenter shifts, and <1° in rotation.
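
A compact point-to-point ICP iteration of the kind used to align the real-time surface with the reference surface; the paper uses a modified variant, so this is the textbook form for illustration only.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, ref, iters=20):
    """Rigidly align src (Nx3) to ref (Mx3); returns rotation R and
    translation t such that src @ R.T + t approximates ref."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(ref)
    for _ in range(iters):
        moved = src @ R.T + t
        nn = ref[tree.query(moved)[1]]                  # closest reference points
        mu_s, mu_r = moved.mean(0), nn.mean(0)
        # Kabsch: best rotation between the centred correspondence sets.
        U, _, Vt = np.linalg.svd((moved - mu_s).T @ (nn - mu_r))
        dR = Vt.T @ np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)]) @ U.T
        R, t = dR @ R, dR @ (t - mu_s) + mu_r
    return R, t
```

The translation and rotation of the resulting transform map directly to the isocenter shifts and rotation corrections reported above.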

  20. Global Calibration of Multi-Cameras Based on Refractive Projection and Ray Tracing

    Directory of Open Access Journals (Sweden)

    Mingchi Feng

    2017-10-01

Full Text Available Multi-camera systems are widely applied in three-dimensional (3D) computer vision, especially when multiple cameras are distributed on both sides of the measured object. The calibration methods of multi-camera systems are critical to the accuracy of vision measurement, and the key is to find an appropriate calibration target. In this paper, a high-precision camera calibration method for multi-camera systems based on transparent glass checkerboards and ray tracing is described, and is used to calibrate multiple cameras distributed on both sides of the glass checkerboard. Firstly, the intrinsic parameters of each camera are obtained by Zhang's calibration method. Then, multiple cameras capture several images from the front and back of the glass checkerboard with different orientations, and all images contain distinct grid corners. As the cameras on one side are not affected by the refraction of the glass checkerboard, their extrinsic parameters can be directly calculated. However, the cameras on the other side are influenced by the refraction of the glass checkerboard, and the direct use of the projection model will produce a calibration error. A multi-camera calibration method using a refractive projection model and ray tracing is developed to eliminate this error. Furthermore, both synthetic and real data are employed to validate the proposed approach. The experimental results of the refractive calibration show that the error of the 3D reconstruction is smaller than 0.2 mm, the relative errors of both rotation and translation are less than 0.014%, and the mean and standard deviation of the reprojection error of the four-camera system are 0.00007 and 0.4543 pixels, respectively. The proposed method is flexible, highly accurate, and simple to carry out.
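
The ray-tracing ingredient of the refractive projection model is Snell's law in vector form, applied once entering and once leaving the glass. A minimal sketch (the indices of refraction are typical values, not from the paper):

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at an interface with unit normal n (pointing
    against the incoming ray), from medium n1 into n2. Returns None on total
    internal reflection."""
    eta = n1 / n2
    cos_i = -float(np.dot(n, d))
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

d = np.array([0.0, np.sin(0.3), -np.cos(0.3)])   # ray hitting the glass front
n = np.array([0.0, 0.0, 1.0])                    # outward surface normal
inside = refract(d, n, 1.0, 1.5)                 # air -> glass (n ~ 1.5)
```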

  1. Realistic 3D Terrain Roaming and Real-Time Flight Simulation

    Science.gov (United States)

    Que, Xiang; Liu, Gang; He, Zhenwen; Qi, Guang

    2014-12-01

    This paper presents an integrated method, which provides access to the current flight status and a dynamic visualization of the scanned topography, to enhance interactivity during terrain roaming and real-time flight simulation. An algorithm integrating digital elevation model and digital orthophoto map data is proposed as the basis of our approach to build a realistic 3D terrain scene. A new technique based on render-to-texture and a head-up display is used to generate the navigation pane. In the flight simulation, in order to eliminate flying "jumps", we employ a multidimensional linear interpolation method to adjust the camera parameters dynamically and smoothly. Meanwhile, based on the principle of scanning laser imaging, we draw pseudo-color figures by scanning the topography in different directions according to the real-time flight status. Simulation results demonstrate that the proposed algorithm is promising for applications and that the method can improve the rendering effect and enhance dynamic interaction during real-time flight.
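
    A minimal sketch of multidimensional linear interpolation used to smooth camera parameters between flight-status updates follows; the parameter layout (position plus yaw/pitch/roll) is an assumption, and angle wrap-around is ignored for brevity.

      # Interpolate all camera parameters between two keyframes to avoid "jumps".
      import numpy as np

      def interp_camera(t, t0, t1, cam0, cam1):
          """Linearly interpolate each camera parameter between two keyframes."""
          a = np.clip((t - t0) / (t1 - t0), 0.0, 1.0)
          return (1.0 - a) * cam0 + a * cam1   # element-wise over all dimensions

      cam0 = np.array([100.0, 200.0, 50.0, 0.0, -5.0, 0.0])   # x, y, z, yaw, pitch, roll (assumed)
      cam1 = np.array([140.0, 230.0, 55.0, 10.0, -4.0, 0.0])

      for t in np.linspace(0.0, 1.0, 5):       # render frames between two status updates
          print(interp_camera(t, 0.0, 1.0, cam0, cam1))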

  2. Real-time 3D human capture system for mixed-reality art and entertainment.

    Science.gov (United States)

    Nguyen, Ta Huynh Duy; Qui, Tran Cong Thien; Xu, Ke; Cheok, Adrian David; Teo, Sze Lee; Zhou, ZhiYing; Mallawaarachchi, Asitha; Lee, Shang Ping; Liu, Wei; Teo, Hui Siang; Thang, Le Nam; Li, Yu; Kato, Hirokazu

    2005-01-01

    A real-time system for capturing humans in 3D and placing them into a mixed reality environment is presented in this paper. The subject is captured by nine cameras surrounding her. Looking through a head-mounted display with a camera in front pointing at a marker, the user can see the 3D image of this subject overlaid onto a mixed reality scene. The 3D images of the subject viewed from this viewpoint are constructed using a robust and fast shape-from-silhouette algorithm. The paper also presents several techniques to improve the output quality and speed up the whole system. The frame rate of our system is around 25 fps using only standard Intel processor-based personal computers. Besides a remote live 3D conferencing and collaboration system, we also describe an application of the system in art and entertainment, named Magic Land, a mixed reality environment where captured human avatars and 3D computer-generated virtual animations can form an interactive story and play with each other. This system demonstrates many technologies in human-computer interaction: mixed reality, tangible interaction, and 3D communication. The result of the user study not only emphasizes the benefits, but also addresses some issues of these technologies.
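
    The core shape-from-silhouette test can be sketched compactly: a voxel survives only if it projects inside the silhouette in every camera view. The projection matrices and silhouette masks are assumed given, and the sketch is far simpler than the authors' optimized real-time implementation.

      # Visual-hull carving: keep voxels whose projections fall inside all silhouettes.
      import numpy as np

      def visual_hull(voxels, projections, silhouettes):
          """voxels: (N,3); projections: list of 3x4 P; silhouettes: list of binary masks.
          Assumes all voxels lie in front of every camera (positive depth)."""
          keep = np.ones(len(voxels), dtype=bool)
          homog = np.hstack([voxels, np.ones((len(voxels), 1))])
          for P, mask in zip(projections, silhouettes):
              uvw = homog @ P.T                          # project all voxels at once
              u = (uvw[:, 0] / uvw[:, 2]).astype(int)
              v = (uvw[:, 1] / uvw[:, 2]).astype(int)
              h, w = mask.shape
              inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)
              keep &= inside                              # outside any view -> carved away
              keep[inside] &= mask[v[inside], u[inside]] > 0
          return voxels[keep]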

  3. 3D Modeling from Multi-views Images for Cultural Heritage in Wat-Pho, Thailand

    Science.gov (United States)

    Soontranon, N.; Srestasathiern, P.; Lawawirojwong, S.

    2015-08-01

    In Thailand, there are several types of (tangible) cultural heritage. This work focuses on 3D modeling of heritage objects from multi-view images. The images are acquired using a DSLR camera which costs around 1,500 (camera and lens). Compared with a 3D laser scanner, the camera is cheaper and lighter. Hence, the camera is accessible to public users and convenient for reaching narrow areas. The acquired images cover various sculptures and architectures in Wat-Pho, a Buddhist temple located behind the Grand Palace (Bangkok, Thailand). Wat-Pho is known as the temple of the reclining Buddha and the birthplace of traditional Thai massage. To compute the 3D models, the workflow is separated into the following steps: data acquisition, image matching, image calibration and orientation, dense matching, and point cloud processing. For this initial work, small heritage objects less than 3 meters in height are considered for the experimental results. A set of multi-view images of an object of interest is used as input data for 3D modeling. In our experiments, 3D models are obtained with the MICMAC (open source) software developed by IGN, France. The output 3D models are represented using standard formats for 3D point clouds and triangulated surfaces such as .ply, .off, .obj, etc. To obtain efficient 3D models, post-processing techniques are required for the final results, e.g. noise reduction, surface simplification and reconstruction. The reconstructed 3D models can be provided for public access via websites, DVDs, and printed materials. The highly accurate 3D models can also be used as reference data for heritage objects that must be restored after deterioration over their lifetime, natural disasters, etc.
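
    To make the point cloud post-processing step concrete, here is a hedged sketch of a typical cleanup chain (noise reduction, simplification, surface reconstruction) applied to an exported dense-matching result using the Open3D library; the file names and parameter values are placeholder assumptions.

      # Clean, simplify and mesh a dense-matching point cloud.
      import open3d as o3d

      pcd = o3d.io.read_point_cloud("watpho_sculpture.ply")   # placeholder export

      # noise reduction: drop points far from their local neighbourhood
      pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

      # simplification: regular voxel downsampling (units follow the model scale)
      pcd = pcd.voxel_down_sample(voxel_size=0.005)

      # surface reconstruction: Poisson meshing needs oriented normals
      pcd.estimate_normals()
      mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(pcd, depth=9)
      o3d.io.write_triangle_mesh("watpho_sculpture_mesh.ply", mesh)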

  4. 4D ANIMATION RECONSTRUCTION FROM MULTI-CAMERA COORDINATES TRANSFORMATION

    Directory of Open Access Journals (Sweden)

    J. P. Jhan

    2016-06-01

    Reservoir dredging is important to extend the life of a reservoir. The most effective and cost-efficient way is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to hold back the water, build the tunnel intake inside, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect the desilting tunnel without building a cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e. launching, dragging, water injection, and sinking. To increase construction safety, photogrammetric techniques are adopted to record images during the simulation, compute the transformation parameters for dynamic analysis, and reconstruct 4D animations. In this study, several Australis© coded targets are fixed on the surface of the ETSP for automatic recognition and measurement. The camera orientations are computed by space resection, whereby the 3D coordinates of the coded targets are measured. Two approaches for computing the motion parameters are proposed: performing a 3D conformal transformation from the coordinates of the cameras, and relative orientation computation from the orientation of a single camera. Experimental results show that the 3D conformal transformation can achieve sub-mm simulation results, and that the relative orientation computation offers flexibility for dynamic motion analysis, being easier and more efficient.
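
    For reference, a minimal sketch of a least-squares 3D conformal (seven-parameter similarity) transformation between matched target coordinates, in the spirit of the first approach above; this is the standard Umeyama solution and not necessarily the authors' exact formulation.

      # Estimate scale s, rotation R and translation t with dst ~ s * R @ src + t.
      import numpy as np

      def conformal_3d(src, dst):
          """src, dst: (N,3) matched 3D coordinates, N >= 3 and non-degenerate."""
          mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
          A, B = src - mu_s, dst - mu_d
          U, S, Vt = np.linalg.svd(B.T @ A / len(src))
          D = np.eye(3)
          if np.linalg.det(U @ Vt) < 0:      # guard against reflections
              D[2, 2] = -1.0
          R = U @ D @ Vt
          s = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()
          t = mu_d - s * R @ mu_s
          return s, R, t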

  5. SU-D-201-05: On the Automatic Recognition of Patient Safety Hazards in a Radiotherapy Setup Using a Novel 3D Camera System and a Deep Learning Framework

    Energy Technology Data Exchange (ETDEWEB)

    Santhanam, A; Min, Y; Beron, P; Agazaryan, N; Kupelian, P; Low, D [UCLA, Los Angeles, CA (United States)

    2016-06-15

    Purpose: Patient safety hazards such as a wrong patient/site getting treated can lead to catastrophic results. The purpose of this project is to automatically detect potential patient safety hazards during the radiotherapy setup and alert the therapist before the treatment is initiated. Methods: We employed a set of co-located and co-registered 3D cameras placed inside the treatment room. Each camera provided a point cloud of fraxels (fragment pixels with 3D depth information). Each camera was calibrated using a custom-built calibration target to provide 3D information with less than 2 mm error in the 500 mm neighborhood around the isocenter. To identify potential patient safety hazards, the treatment room components and the patient's body needed to be identified and tracked in real time. For feature recognition purposes, we used graph-cut based feature recognition with principal component analysis (PCA) based feature-to-object correlation to segment the objects in real time. Changes in each object's position were tracked using the CamShift algorithm. The 3D object information was then stored for each classified object (e.g. gantry, couch). A deep learning framework was then used to analyze all the classified objects in both 2D and 3D and to fine-tune a convolutional network for object recognition. The number of network layers was optimized to identify the tracked objects with >95% accuracy. Results: Our systematic analyses showed that the system was able to effectively recognize wrong patient setups and wrong patient accessories. The combined usage of 2D camera information (color + depth) enabled a topology-preserving approach to verify patient safety hazards automatically, even in scenarios where the depth information is only partially available. Conclusion: By utilizing the 3D cameras inside the treatment room and deep learning based image classification, potential patient safety hazards can be effectively avoided.
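
    The CamShift tracking step mentioned in the Methods can be sketched with OpenCV's implementation; the video source, the initial window and the hue-histogram back-projection are illustrative assumptions.

      # Track a colored object window across frames with CamShift.
      import cv2
      import numpy as np

      cap = cv2.VideoCapture("treatment_room.avi")      # placeholder video source
      ok, frame = cap.read()
      track_window = (300, 200, 120, 90)                # assumed initial x, y, w, h

      x, y, w, h = track_window
      roi_hsv = cv2.cvtColor(frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
      hist = cv2.calcHist([roi_hsv], [0], None, [180], [0, 180])
      cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)

      criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
          backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
          rot_box, track_window = cv2.CamShift(backproj, track_window, criteria)
          print("object window:", track_window)         # position fed to later stages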

  6. SU-D-201-05: On the Automatic Recognition of Patient Safety Hazards in a Radiotherapy Setup Using a Novel 3D Camera System and a Deep Learning Framework

    International Nuclear Information System (INIS)

    Santhanam, A; Min, Y; Beron, P; Agazaryan, N; Kupelian, P; Low, D

    2016-01-01

    Purpose: Patient safety hazards such as a wrong patient/site getting treated can lead to catastrophic results. The purpose of this project is to automatically detect potential patient safety hazards during the radiotherapy setup and alert the therapist before the treatment is initiated. Methods: We employed a set of co-located and co-registered 3D cameras placed inside the treatment room. Each camera provided a point cloud of fraxels (fragment pixels with 3D depth information). Each camera was calibrated using a custom-built calibration target to provide 3D information with less than 2 mm error in the 500 mm neighborhood around the isocenter. To identify potential patient safety hazards, the treatment room components and the patient's body needed to be identified and tracked in real time. For feature recognition purposes, we used graph-cut based feature recognition with principal component analysis (PCA) based feature-to-object correlation to segment the objects in real time. Changes in each object's position were tracked using the CamShift algorithm. The 3D object information was then stored for each classified object (e.g. gantry, couch). A deep learning framework was then used to analyze all the classified objects in both 2D and 3D and to fine-tune a convolutional network for object recognition. The number of network layers was optimized to identify the tracked objects with >95% accuracy. Results: Our systematic analyses showed that the system was able to effectively recognize wrong patient setups and wrong patient accessories. The combined usage of 2D camera information (color + depth) enabled a topology-preserving approach to verify patient safety hazards automatically, even in scenarios where the depth information is only partially available. Conclusion: By utilizing the 3D cameras inside the treatment room and deep learning based image classification, potential patient safety hazards can be effectively avoided.

  7. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    Science.gov (United States)

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-03-25

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and the video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.
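
    The "simple analytic geometry" idea can be sketched as a ray-plane intersection: a clicked pixel is back-projected through the calibrated camera and intersected with the floor. The intrinsics and camera height below are illustrative assumptions.

      # Back-project a clicked pixel and intersect the ray with the floor plane.
      import numpy as np

      K = np.array([[800.0, 0.0, 320.0],
                    [0.0, 800.0, 240.0],
                    [0.0, 0.0, 1.0]])          # assumed calibrated intrinsics

      def click_to_floor(u, v, cam_height=0.30):
          """Camera frame: x right, y down, z forward; level camera cam_height (m)
          above a floor parallel to the optical axis (plane y = cam_height)."""
          ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
          if ray[1] <= 0:
              return None                       # pixel at or above the horizon
          s = cam_height / ray[1]               # where the ray reaches floor level
          return s * ray                        # 3D target point in the camera frame

      print(click_to_floor(400, 400))           # e.g. goal position for the robot arm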

  8. Novel real-time 3D radiological mapping solution for ALARA maximization, D and D assessments and radiological management

    Energy Technology Data Exchange (ETDEWEB)

    Dubart, Philippe; Hautot, Felix [AREVA Group, 1 route de la Noue, Gif sur Yvette (France); Morichi, Massimo; Abou-Khalil, Roger [AREVA Group, Tour AREVA-1, place Jean Millier, Paris (France)

    2015-07-01

    Good management of dismantling and decontamination (D and D) operations and activities requires safety, time savings and thorough radiological knowledge of the contaminated environment, as well as optimization of personnel dose and minimization of waste volume. At the same time, the Fukushima accident has stretched the nuclear measurement operational approach, requiring in such emergency situations fast deployment and intervention, quick analysis and fast scenario definition. AREVA, drawing on the experience from its activities carried out at Fukushima and at D and D sites, has developed a novel multi-sensor solution as part of its D and D research, approach and method: a system providing real-time 3D photo-realistic spatial radiation distribution cartography of contaminated premises. The system may be hand-held or mounted on a mobile device (e.g., robot or drone). In this paper, we present our current development based on SLAM (Simultaneous Localization And Mapping) technology and integrated sensors and detectors allowing simultaneous topographic and radiological (dose rate and/or spectroscopy) data acquisition. This enabling technology permits 3D gamma activity cartography in real time. (authors)
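
    One ingredient of such a system can be illustrated in a few lines: time-aligning dose-rate samples with the SLAM trajectory so that each reading can be placed in the 3D map. The timestamps, poses and readings below are invented for illustration only.

      # Tag dose-rate readings with interpolated SLAM positions.
      import numpy as np

      pose_t = np.array([0.0, 0.5, 1.0, 1.5])                 # SLAM pose timestamps (s)
      pose_xyz = np.array([[0.0, 0.0, 0.0],
                           [0.4, 0.1, 0.0],
                           [0.9, 0.1, 0.0],
                           [1.3, 0.3, 0.0]])                  # positions along the path (m)

      dose_t = np.array([0.2, 0.7, 1.2])                      # dose-rate sample times (s)
      dose_uSv_h = np.array([3.1, 5.8, 12.4])

      # linear interpolation of the trajectory at each measurement time
      tagged = np.column_stack([np.interp(dose_t, pose_t, pose_xyz[:, k]) for k in range(3)])
      for p, d in zip(tagged, dose_uSv_h):
          print(f"{d:6.1f} uSv/h at {p}")                     # point for the 3D cartography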

  9. Novel real-time 3D radiological mapping solution for ALARA maximization, D and D assessments and radiological management

    International Nuclear Information System (INIS)

    Dubart, Philippe; Hautot, Felix; Morichi, Massimo; Abou-Khalil, Roger

    2015-01-01

    Good management of dismantling and decontamination (D and D) operations and activities requires safety, time savings and thorough radiological knowledge of the contaminated environment, as well as optimization of personnel dose and minimization of waste volume. At the same time, the Fukushima accident has stretched the nuclear measurement operational approach, requiring in such emergency situations fast deployment and intervention, quick analysis and fast scenario definition. AREVA, drawing on the experience from its activities carried out at Fukushima and at D and D sites, has developed a novel multi-sensor solution as part of its D and D research, approach and method: a system providing real-time 3D photo-realistic spatial radiation distribution cartography of contaminated premises. The system may be hand-held or mounted on a mobile device (e.g., robot or drone). In this paper, we present our current development based on SLAM (Simultaneous Localization And Mapping) technology and integrated sensors and detectors allowing simultaneous topographic and radiological (dose rate and/or spectroscopy) data acquisition. This enabling technology permits 3D gamma activity cartography in real time. (authors)

  10. Multiple-aperture optical design for micro-level cameras using 3D-printing method

    Science.gov (United States)

    Peng, Wei-Jei; Hsu, Wei-Yao; Cheng, Yuan-Chieh; Lin, Wen-Lung; Yu, Zong-Ru; Chou, Hsiao-Yu; Chen, Fong-Zhi; Fu, Chien-Chung; Wu, Chong-Syuan; Huang, Chao-Tsung

    2018-02-01

    The design of an ultra-miniaturized camera using 3D-printing technology, printed directly onto the complementary metal-oxide semiconductor (CMOS) imaging sensor, is presented in this paper. The 3D-printed micro-optics are manufactured using femtosecond two-photon direct laser writing, and the figure error, which can achieve submicron accuracy, is suitable for the optical system. Because the size of the micro-level camera is approximately several hundred micrometers, the resolution is greatly reduced and strongly limited by the Nyquist frequency of the pixel pitch. To improve the reduced resolution, a single lens can be replaced by multiple-aperture lenses with dissimilar fields of view (FOV); stitching sub-images with different FOVs can then achieve high resolution within the central region of the image. The reason is that the angular resolution of a lens with a smaller FOV is higher than that of one with a larger FOV, so after stitching, the angular resolution of the central area can be several times that of the outer area. For the same image circle, the image quality of the central area of the multi-lens system is significantly superior to that of a single lens. Foveated imaging using stitched FOVs breaks the resolution limitation of ultra-miniaturized imaging systems, enabling applications such as biomedical endoscopy, optical sensing, and machine vision. In this study, the ultra-miniaturized camera with multi-aperture optics is designed and simulated for optimum optical performance.
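
    The resolution argument can be made concrete with a back-of-the-envelope calculation: for a fixed pixel count across the image, halving the FOV roughly doubles the angular resolution. All numbers below are illustrative.

      # Angular resolution vs. field of view for a fixed pixel count.
      pixels_across = 200                       # assumed pixels spanning the image circle

      for fov_deg in (60.0, 30.0):              # wide lens vs. narrow (central) lens
          ang_res = fov_deg / pixels_across     # degrees per pixel (smaller is sharper)
          nyquist = pixels_across / (2 * fov_deg)   # resolvable cycles per degree
          print(f"FOV {fov_deg:5.1f} deg -> {ang_res:.3f} deg/pixel, "
                f"{nyquist:.2f} cycles/deg")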

  11. Real-Time Hand Posture Recognition Using a Range Camera

    Science.gov (United States)

    Lahamy, Herve

    The basic goal of human-computer interaction is to improve the interaction between users and computers by making computers more usable and receptive to the user's needs. Within this context, the use of hand postures to replace traditional devices such as keyboards, mice and joysticks is being explored by many researchers. The goal is to interpret human postures via mathematical algorithms. Hand posture recognition has gained popularity in recent years, and could become the future tool for humans to interact with computers or virtual environments. An exhaustive description of the methods for hand posture recognition frequently used in the literature is provided. It focuses on the different types of sensors and data used, the segmentation and tracking methods, the features used to represent the hand postures, as well as the classifiers considered in the recognition process. Those methods are usually presented as highly robust, with recognition rates close to 100%. However, a couple of critical points necessary for a successful real-time hand posture recognition system require major improvement. Those points include the features used to represent the hand segment, the number of postures simultaneously recognizable, the invariance of the features with respect to rotation, translation and scale, and also the behavior of the classifiers on imperfect hand segments, for example segments including part of the arm or missing part of the palm. A 3D time-of-flight camera named SR4000 has been chosen to develop a new methodology because of its capability to provide, in real time and at a high frame rate, 3D information on the imaged scene. This sensor has been described and evaluated for its capability to capture a moving hand in real time. A new recognition method that uses the 3D information provided by the range camera to recognize hand postures has been proposed. The different steps of this methodology including the segmentation, the tracking, the hand

  12. A ToF-Camera as a 3D Vision Sensor for Autonomous Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Sobers Lourdu Xavier Francis

    2015-11-01

    The aim of this paper is to deploy a time-of-flight (ToF) based photonic mixer device (PMD) camera on an Autonomous Ground Vehicle (AGV) whose overall target is to traverse from one point to another in hazardous and hostile environments, employing obstacle avoidance without human intervention. Applying a ToF camera on an AGV is a suitable approach to autonomous robotics because the ToF camera can provide three-dimensional (3D) information at a low computational cost; it is utilized to extract information about obstacles after calibration and ground testing, and is mounted on and integrated with the Pioneer mobile robot. The workspace is a two-dimensional (2D) world map divided into a grid of cells, where the collision-free path defined by the graph-search algorithm is a sequence of cells the AGV can traverse to reach the target. PMD depth data are used to populate traversable areas and obstacles in a grid of cells of suitable size. These camera data are converted into Cartesian coordinates for entry into the workspace grid map. A more optimal camera mounting angle is needed; it is adopted by analysing the camera's performance discrepancies, such as pixel detection, the detection rate, the maximum perceived distances, and infrared (IR) scattering with respect to the ground surface. This mounting angle is recommended to be half the vertical field of view (FoV) of the PMD camera. A series of still and moving tests are conducted on the AGV to verify correct sensor operation, and these show that the postulated application of the ToF camera on the AGV is not straightforward. Finally, to stabilize the moving PMD camera and to detect obstacles, a tracking feature detection algorithm and the scene-flow technique are implemented in a real-time experiment.

  13. A real-time 3D end-to-end augmented reality system (and its representation transformations)

    Science.gov (United States)

    Tytgat, Donny; Aerts, Maarten; De Busser, Jeroen; Lievens, Sammy; Rondao Alface, Patrice; Macq, Jean-Francois

    2016-09-01

    The new generation of HMDs coming to the market is expected to enable many new applications that allow free viewpoint experiences with captured video objects. Current applications usually rely on 3D content that is manually created or captured in an offline manner. In contrast, this paper focuses on augmented reality applications that use live captured 3D objects while maintaining free viewpoint interaction. We present a system that allows live dynamic 3D objects (e.g. a person who is talking) to be captured in real-time. Real-time performance is achieved by traversing a number of representation formats and exploiting their specific benefits. For instance, depth images are maintained for fast neighborhood retrieval and occlusion determination, while implicit surfaces are used to facilitate multi-source aggregation for both geometry and texture. The result is a 3D reconstruction system that outputs multi-textured triangle meshes at real-time rates. An end-to-end system is presented that captures and reconstructs live 3D data and allows for this data to be used on a networked (AR) device. For allocating the different functional blocks onto the available physical devices, a number of alternatives are proposed considering the available computational power and bandwidth for each of the components. As we will show, the representation format can play an important role in this functional allocation and allows for a flexible system that can support a highly heterogeneous infrastructure.

  14. 3D Hand Gesture Analysis through a Real-Time Gesture Search Engine

    Directory of Open Access Journals (Sweden)

    Shahrouz Yousefi

    2015-06-01

    3D gesture recognition and tracking are highly desired features of interaction design in future mobile and smart environments. Specifically, in virtual/augmented reality applications, intuitive interaction with the physical space seems unavoidable, and 3D gestural interaction might be the most effective alternative to current input facilities such as touchscreens. In this paper, we introduce a novel solution for real-time 3D gesture-based interaction by finding the best match from an extremely large gesture database. This database includes images of various articulated hand gestures with annotated 3D position/orientation parameters of the hand joints. Our matching algorithm is based on hierarchical scoring of low-level edge-orientation features between the query frames and the database, retrieving the best match. Once the best match is found in the database at each moment, the pre-recorded 3D motion parameters can instantly be used for natural interaction. The proposed bare-hand interaction technology performs in real time with high accuracy using an ordinary camera.

  15. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    Science.gov (United States)

    Qian, Shuo; Sheng, Yang

    2011-11-01

    Photogrammetry has become an effective method for determining electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study presents a novel photogrammetry system that can acquire multi-angle head images simultaneously from a single camera position. With two planar mirrors aligned at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. The elapsed time of the whole localization procedure is about 3 min, and the camera calibration computation takes about 1 min after the measurement of the calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.
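
    Once an electrode has been matched across two of the (mirror) views with known projection matrices, its 3D position follows from linear triangulation; the sketch below shows the standard DLT-style solution, with the projection matrices and pixel coordinates as placeholders.

      # Linear triangulation of one electrode seen in two views.
      import numpy as np

      def triangulate(P1, P2, uv1, uv2):
          """P1, P2: 3x4 projection matrices; uv1, uv2: pixel coordinates (u, v)."""
          A = np.vstack([uv1[0] * P1[2] - P1[0],
                         uv1[1] * P1[2] - P1[1],
                         uv2[0] * P2[2] - P2[0],
                         uv2[1] * P2[2] - P2[1]])
          _, _, Vt = np.linalg.svd(A)
          X = Vt[-1]
          return X[:3] / X[3]                   # inhomogeneous 3D electrode position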

  16. Surveillance of a 2D Plane Area with 3D Deployed Cameras

    Directory of Open Access Journals (Sweden)

    Yi-Ge Fu

    2014-01-01

    As the use of camera networks has expanded, camera placement to satisfy quality assurance parameters (such as a good coverage ratio, acceptable resolution constraints, and a cost as low as possible) has become an important problem. The discrete camera deployment problem is NP-hard and many heuristic methods have been proposed to solve it, most of which make very simple assumptions. In this paper, we propose a probability-inspired binary Particle Swarm Optimization (PI-BPSO) algorithm to solve a homogeneous camera network placement problem. We model the problem under some more realistic assumptions: (1) deploy the cameras in 3D space while the surveillance area is restricted to a 2D ground plane; (2) deploy the minimal number of cameras to get maximum visual coverage under additional constraints, such as the field of view (FOV) of the cameras and minimum resolution constraints. We can simultaneously optimize the number and the configuration of the cameras through the introduction of a regularization term in the cost function. The simulation results show the effectiveness of the proposed PI-BPSO algorithm.
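
    The toy sketch below illustrates binary-PSO machinery on a camera-selection problem of this kind; the random visibility matrix, the sigmoid bit-flip update and the penalty weight are illustrative assumptions rather than the paper's PI-BPSO.

      # Binary PSO: each bit selects one candidate camera pose.
      import numpy as np

      rng = np.random.default_rng(0)
      n_cams, n_points = 12, 200
      visible = rng.random((n_cams, n_points)) < 0.25    # visible[i, j]: camera i sees point j

      def fitness(bits):
          coverage = visible[bits.astype(bool)].any(axis=0).mean()
          return coverage - 0.02 * bits.sum()            # regularization: fewer cameras

      n_particles, iters = 30, 100
      X = rng.integers(0, 2, (n_particles, n_cams)).astype(float)
      V = np.zeros_like(X)
      pbest, pbest_f = X.copy(), np.array([fitness(x) for x in X])
      gbest = pbest[pbest_f.argmax()].copy()

      for _ in range(iters):
          r1, r2 = rng.random(X.shape), rng.random(X.shape)
          V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
          prob = 1.0 / (1.0 + np.exp(-V))                # sigmoid -> bit probability
          X = (rng.random(X.shape) < prob).astype(float)
          f = np.array([fitness(x) for x in X])
          better = f > pbest_f
          pbest[better], pbest_f[better] = X[better], f[better]
          gbest = pbest[pbest_f.argmax()].copy()

      print("cameras selected:", gbest.astype(int), "fitness:", pbest_f.max())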

  17. Multi-camera synchronization core implemented on USB3 based FPGA platform

    Science.gov (United States)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor, based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables them to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of 3D stereo vision equipment smaller than 3 mm in diameter in the medical endoscopic context, such as endoscopic surgical robotics or micro-invasive surgery.
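
    Conceptually, the regulation loop can be reduced to a proportional controller: compare each slave's measured line period with the master's and nudge the supply voltage to close the gap. The gains, voltage limits and toy camera model below are illustrative assumptions.

      # One-step proportional regulation of a self-timed camera's supply voltage.
      def regulate_step(v_now, period_meas, period_target,
                        gain=0.002, v_min=1.6, v_max=2.0):
          """Return the new supply voltage (volts) after one control iteration."""
          error = period_meas - period_target        # slow camera -> positive error
          v_new = v_now + gain * error               # raise voltage to speed it up
          return min(max(v_new, v_min), v_max)       # respect sensor supply limits

      # example: slave runs a 102 us line period against a 100 us master
      v = 1.8
      for _ in range(5):
          period = 100.0 + (1.9 - v) * 20.0          # toy model: higher V -> shorter period
          v = regulate_step(v, period, 100.0)
          print(f"V = {v:.4f} V, line period = {period:.2f} us")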

  18. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    Directory of Open Access Journals (Sweden)

    Chun-Tang Chao

    2016-03-01

    In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and the video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.

  19. SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras

    Energy Technology Data Exchange (ETDEWEB)

    Darne, C; Robertson, D; Alsanea, F; Beddar, S [UT MD Anderson Cancer Center, Houston, TX (United States)

    2016-06-15

    Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately and in near real time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm³) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect the optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect the scintillation light to the cameras capturing the top and right views. The selection of fixed focal length objective lenses for these cameras was based on their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for image corrections arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 ms integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. A master-slave camera configuration enables the slaves to initiate image acquisition instantly (within 2 µs) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.
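
    The cross-hair-based perspective correction hinted at above can be sketched as a homography fit: known tank-face positions of four cross-hairs and their pixel locations rectify each camera view. All coordinates below are placeholders.

      # Rectify a camera view of the tank face with a four-point homography.
      import cv2
      import numpy as np

      # pixel positions of four cross-hairs and their known tank-face positions (mm)
      px = np.float32([[412, 310], [1620, 330], [1605, 1530], [430, 1512]])
      mm = np.float32([[0, 0], [200, 0], [200, 200], [0, 200]])

      H = cv2.getPerspectiveTransform(px, mm)           # 3x3 homography

      def to_tank_mm(u, v):
          """Map a scintillation-image pixel into tank-face millimetres."""
          p = H @ np.array([u, v, 1.0])
          return p[:2] / p[2]

      print(to_tank_mm(1000, 900))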

  20. SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras

    International Nuclear Information System (INIS)

    Darne, C; Robertson, D; Alsanea, F; Beddar, S

    2016-01-01

    Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately and in near real time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm³) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect the optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect the scintillation light to the cameras capturing the top and right views. The selection of fixed focal length objective lenses for these cameras was based on their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for image corrections arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 ms integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. A master-slave camera configuration enables the slaves to initiate image acquisition instantly (within 2 µs) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.

  1. User-assisted visual search and tracking across distributed multi-camera networks

    Science.gov (United States)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.

  2. Design of an Embedded Multi-Camera Vision System—A Case Study in Mobile Robotics

    Directory of Open Access Journals (Sweden)

    Valter Costa

    2018-02-01

    The purpose of this work is to explore the design principles for a real-time robotic multi-camera vision system, in a case study involving a real-world autonomous driving competition. Design practices from the vision and real-time research areas are applied to a real-time robotic vision application, exemplifying good algorithm design practices, the advantages of employing the "zero copy one pass" methodology, and the associated trade-offs leading to the selection of a controller platform. The vision tasks under study are: (i) recognition of a "flat" signal; and (ii) track following, requiring 3D reconstruction. This research first improves the algorithms used for these tasks and then selects the controller hardware. Optimization of the presented algorithms yielded improvements from 1.5 times to 190 times, always with acceptable quality for the target application, with algorithm optimization being more important on lower computing power platforms. Results also include 3-cm and five-degree accuracy for lane tracking and 100% accuracy for signalling panel recognition, which are better than most results found in the literature for this application. Clear results comparing different PC platforms for the mentioned robotic vision tasks are also shown, demonstrating trade-offs between accuracy and computing power, leading to the proper choice of control platform. The presented design principles are portable to other applications where real-time constraints exist.

  3. TractorEYE: Vision-based Real-time Detection for Autonomous Vehicles in Agriculture

    DEFF Research Database (Denmark)

    Christiansen, Peter

    ) using a smaller memory footprint and 7.3-times faster processing. Low memory footprint and fast processing make DeepAnomaly suitable for real-time applications running on an embedded GPU. FieldSAFE is a multi-modal dataset for detection of static and moving obstacles in agriculture. The dataset...... (four for RGB camera, one for thermal camera and one for a Multi-beam lidar) and fuse detection information in a common format using either 3D positions or Inverse Sensor Models. A GPU powered computational platform is able to run detection algorithms online. For the RGB camera, a deep learning...... algorithm is proposed DeepAnomaly to perform real-time anomaly detection of distant, heavily occluded and unknown obstacles in agriculture. DeepAnomaly is - compared to a state-of-the-art object detector Faster R-CNN - for an agricultural use-case able to detect humans better and at longer ranges (45-90m...

  4. 3D camera assisted fully automated calibration of scanning laser Doppler vibrometers

    International Nuclear Information System (INIS)

    Sels, Seppe; Ribbens, Bart; Mertens, Luc; Vanlanduit, Steve

    2016-01-01

    Scanning laser Doppler vibrometers (LDV) are used to measure full-field vibration shapes of products and structures. In most commercially available scanning laser Doppler vibrometer systems the user manually draws a grid of measurement locations on a 2D camera image of the product. Determining the correct physical measurement locations can be a time-consuming and difficult task. In this paper we present a new methodology for product testing and quality control that integrates 3D imaging techniques with vibration measurements. This procedure allows prototypes to be tested in a shorter period because the physical measurement locations are located automatically. The proposed methodology uses a 3D time-of-flight camera to measure the location and orientation of the test object. The 3D image of the time-of-flight camera is then matched with the 3D CAD model of the object, in which measurement locations are pre-defined. A time-of-flight camera operates strictly in the near-infrared spectrum. To improve the signal-to-noise ratio in the time-of-flight measurement, a time-of-flight camera uses a band filter. As a result of this filter, the laser spot of most laser vibrometers is invisible in the time-of-flight image. Therefore a 2D RGB camera is used to find the laser spot of the vibrometer. The laser spot is matched to the 3D image obtained by the time-of-flight camera. Next, an automatic calibration procedure is used to aim the laser at the (pre)defined locations. Another benefit of this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. Secondly, the orientation of the CAD model is known with respect to the laser beam. This information can be used to find the direction of the measured vibration relative to the surface of the object. With this direction, the vibration measurements can be compared more precisely with numerical

  5. 3D camera assisted fully automated calibration of scanning laser Doppler vibrometers

    Energy Technology Data Exchange (ETDEWEB)

    Sels, Seppe, E-mail: Seppe.Sels@uantwerpen.be; Ribbens, Bart; Mertens, Luc; Vanlanduit, Steve [Op3Mech Research Group, University of Antwerp, Salesianenlaan 90, 2660 Antwerp (Belgium)

    2016-06-28

    Scanning laser Doppler vibrometers (LDV) are used to measure full-field vibration shapes of products and structures. In most commercially available scanning laser Doppler vibrometer systems the user manually draws a grid of measurement locations on a 2D camera image of the product. Determining the correct physical measurement locations can be a time-consuming and difficult task. In this paper we present a new methodology for product testing and quality control that integrates 3D imaging techniques with vibration measurements. This procedure allows prototypes to be tested in a shorter period because the physical measurement locations are located automatically. The proposed methodology uses a 3D time-of-flight camera to measure the location and orientation of the test object. The 3D image of the time-of-flight camera is then matched with the 3D CAD model of the object, in which measurement locations are pre-defined. A time-of-flight camera operates strictly in the near-infrared spectrum. To improve the signal-to-noise ratio in the time-of-flight measurement, a time-of-flight camera uses a band filter. As a result of this filter, the laser spot of most laser vibrometers is invisible in the time-of-flight image. Therefore a 2D RGB camera is used to find the laser spot of the vibrometer. The laser spot is matched to the 3D image obtained by the time-of-flight camera. Next, an automatic calibration procedure is used to aim the laser at the (pre)defined locations. Another benefit of this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. Secondly, the orientation of the CAD model is known with respect to the laser beam. This information can be used to find the direction of the measured vibration relative to the surface of the object. With this direction, the vibration measurements can be compared more precisely with numerical

  6. Real-Time 3D Tracking and Reconstruction on Mobile Phones.

    Science.gov (United States)

    Prisacariu, Victor Adrian; Kähler, Olaf; Murray, David W; Reid, Ian D

    2015-05-01

    We present a novel framework for jointly tracking a camera in 3D and reconstructing the 3D model of an observed object. Due to the region based approach, our formulation can handle untextured objects, partial occlusions, motion blur, dynamic backgrounds and imperfect lighting. Our formulation also allows for a very efficient implementation which achieves real-time performance on a mobile phone, by running the pose estimation and the shape optimisation in parallel. We use a level set based pose estimation but completely avoid the, typically required, explicit computation of a global distance. This leads to tracking rates of more than 100 Hz on a desktop PC and 30 Hz on a mobile phone. Further, we incorporate additional orientation information from the phone's inertial sensor which helps us resolve the tracking ambiguities inherent to region based formulations. The reconstruction step first probabilistically integrates 2D image statistics from selected keyframes into a 3D volume, and then imposes coherency and compactness using a total variational regularisation term. The global optimum of the overall energy function is found using a continuous max-flow algorithm and we show that, similar to tracking, the integration of per voxel posteriors instead of likelihoods improves the precision and accuracy of the reconstruction.

  7. Joint Calibration of 3d Laser Scanner and Digital Camera Based on Dlt Algorithm

    Science.gov (United States)

    Gao, X.; Li, M.; Xing, L.; Liu, Y.

    2018-04-01

    We design a calibration target that can be scanned by a 3D laser scanner while being photographed by a digital camera, yielding a point cloud and photos of the same target. A method to jointly calibrate the 3D laser scanner and the digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. This method adds a camera distortion model to the traditional DLT algorithm; after repeated iteration, it can solve for the interior and exterior orientation elements of the camera as well as the joint calibration of the 3D laser scanner and the digital camera. The results prove that this method is reliable.
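
    The core DLT step can be sketched as a linear solve for the 11 projection parameters from 3D target coordinates and their image measurements (the paper additionally iterates this together with a camera distortion model):

      # Direct Linear Transformation: solve the 3x4 projection matrix up to scale.
      import numpy as np

      def dlt(world, image):
          """world: (N,3) target points; image: (N,2) pixel measurements; N >= 6."""
          rows = []
          for (X, Y, Z), (u, v) in zip(world, image):
              rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
              rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
          _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
          return Vt[-1].reshape(3, 4)           # projection matrix (defined up to scale)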

  8. Full-parallax 3D display from stereo-hybrid 3D camera system

    Science.gov (United States)

    Hong, Seokmin; Ansari, Amir; Saavedra, Genaro; Martinez-Corral, Manuel

    2018-04-01

    In this paper, we propose an innovative approach for the production of microimages ready to display on an integral-imaging monitor. Our main contribution is the use of a stereo-hybrid 3D camera system for picking up a 3D data pair and composing a denser point cloud. However, there is an intrinsic difficulty in the fact that the hybrid sensors have dissimilarities and therefore should be equalized. The processed data facilitate generating an integral image after computationally projecting the information through a virtual pinhole array. We illustrate this procedure with imaging experiments that provide microimages with enhanced quality. After projection of such microimages onto the integral-imaging monitor, 3D images are produced with large parallax and a wide viewing angle.

  9. Evaluation of 3D reconstruction algorithms for a small animal PET camera

    International Nuclear Information System (INIS)

    Johnson, C.A.; Gandler, W.R.; Seidel, J.

    1996-01-01

    The use of paired, opposing position-sensitive phototube scintillation cameras (SCs) operating in coincidence for small animal imaging with positron emitters is currently under study. Because of the low sensitivity of the system even in 3D mode and the need to produce images with high resolution, it was postulated that a 3D expectation maximization (EM) reconstruction algorithm might be well suited for this application. We investigated four reconstruction algorithms for the 3D SC PET camera: 2D filtered back-projection (FBP), 2D ordered-subset EM (OSEM), 3D reprojection (3DRP), and 3D OSEM. Noise was assessed for all slices by the coefficient of variation in a simulated uniform cylinder. Resolution was assessed from a simulation of 15 point sources in the warm background of the uniform cylinder. At comparable noise levels, the resolution achieved with OSEM (0.9 mm to 1.2 mm) is significantly better than that obtained with FBP or 3DRP (1.5 mm to 2.0 mm). Images of a rat skull labeled with 18F-fluoride suggest that 3D OSEM can improve the image quality of a small animal PET camera
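
    At the heart of OSEM is the multiplicative EM update; the sketch below shows a single-subset (MLEM) iteration with a dense toy system matrix, purely to illustrate the update rule. Real 3D PET reconstruction uses sparse projectors and subsets of the data.

      # One MLEM iteration: image *= backprojection(measured / expected) / sensitivity.
      import numpy as np

      def mlem_step(x, A, y, eps=1e-12):
          """x: current image; A: system matrix (rays x voxels); y: measured counts."""
          forward = A @ x                              # expected counts per line of response
          ratio = y / np.maximum(forward, eps)         # measured / expected
          return x * (A.T @ ratio) / np.maximum(A.T @ np.ones_like(y), eps)

      rng = np.random.default_rng(1)
      A = rng.random((64, 16))                         # toy geometry
      x_true = rng.random(16)
      y = rng.poisson(A @ x_true * 50)                 # noisy coincidence counts

      x = np.ones(16)
      for _ in range(50):
          x = mlem_step(x, A, y / 50.0)
      print("reconstruction:", np.round(x, 2))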

  10. 3D MODELLING BY LOW-COST RANGE CAMERA: SOFTWARE EVALUATION AND COMPARISON

    Directory of Open Access Journals (Sweden)

    R. Ravanelli

    2017-11-01

    The aim of this work is to present a comparison among three software applications currently available for the Occipital Structure Sensor™; all of these applications were developed for collecting 3D models of objects easily and in real time with this structured-light range camera. The SKANECT, itSeez3D and Scanner applications were tested: a DUPLO™ brick construction was scanned with the three applications and the obtained models were compared to the model virtually generated with a standard CAD software package, which served as reference. The results demonstrate that all the applications are generally characterized by the same level of geometric accuracy, which amounts to very few millimetres. However, the itSeez3D software, which requires a payment of $7 to export each model, surely represents the best solution, both from the point of view of geometric accuracy and, mostly, at the level of color restitution. On the other hand, Scanner, which is free software, presents an accuracy comparable to that of itSeez3D. At the same time, though, the colors are often smoothed and not perfectly overlaid on the corresponding parts of the model. Lastly, SKANECT is the software that generates the highest number of points, but it also has some issues with the rendering of colors.

  11. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult to achieve real-time display of, and interaction with, large-scale 3D models in common 3D display software such as MeshLab. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming the reconstructed scene, and the 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4 GB of RAM.

  12. A NEW AUTOMATIC SYSTEM CALIBRATION OF MULTI-CAMERAS AND LIDAR SENSORS

    Directory of Open Access Journals (Sweden)

    M. Hassanein

    2016-06-01

    In the last few years, multi-camera and LIDAR systems have drawn the attention of the mapping community. They have been deployed on different mobile mapping platforms. The different uses of these platforms, especially UAVs, have offered new applications and developments which require fast and accurate results. The successful calibration of such systems is a key factor in achieving accurate results and in the successful processing of the system measurements, especially given the different types of measurements provided by the LIDAR and the cameras. The system calibration aims to estimate the geometric relationships between the different system components. A number of applications require the systems to be ready for operation in a short time, especially disaster monitoring applications. Also, many present system calibration techniques are constrained by the need for special lab arrangements for the calibration procedures. In this paper, a new technique for the calibration of integrated LIDAR and multi-camera systems is presented. The proposed technique offers a calibration solution that overcomes the need for special labs for standard calibration procedures. In the proposed technique, 3D reconstruction of automatically detected and matched image points is used to generate a sparse image-driven point cloud; then, a registration between the LIDAR-generated 3D point cloud and the image-driven 3D point cloud takes place to estimate the geometric relationships between the cameras and the LIDAR. In the presented technique a simple 3D artificial target is used to simplify the lab requirements for the calibration procedure. The target used is composed of three intersecting plates. This target geometry was chosen to ensure enough constraints for the convergence of the registration between the 3D point clouds constructed from the two systems. The achieved results of the proposed approach prove its ability to provide an adequate and fully automated

  13. High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project

    Science.gov (United States)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is especially true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions to enhance the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) and 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
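
    The merging step can be sketched with OpenCV's software implementation of Debevec's technique on three exposures (the paper performs the equivalent processing in FPGA hardware); the file names and exposure times are placeholders.

      # Merge three LDR exposures into an HDR radiance map, then tone-map for display.
      import cv2
      import numpy as np

      exposures = [cv2.imread(f) for f in ("short.png", "mid.png", "long.png")]
      times = np.array([1/1000, 1/250, 1/60], dtype=np.float32)   # exposure times (s)

      calibrate = cv2.createCalibrateDebevec()
      response = calibrate.process(exposures, times)               # camera response curve

      merge = cv2.createMergeDebevec()
      hdr = merge.process(exposures, times, response)              # radiance map

      tonemap = cv2.createTonemap(gamma=2.2)                       # global tone mapping
      ldr = np.clip(tonemap.process(hdr) * 255, 0, 255).astype(np.uint8)
      cv2.imwrite("hdr_preview.png", ldr)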

  14. An Embedded Real-Time Red Peach Detection System Based on an OV7670 Camera, ARM Cortex-M4 Processor and 3D Look-Up Tables

    Directory of Open Access Journals (Sweden)

    Marcel Tresanchez

    2012-10-01

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.
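
    The 3D LUT idea can be sketched in a few lines: quantize RGB into a small cube, mark "red peach" cells from training pixels, then classify each frame pixel with a single table access. The bin count and training colors below are illustrative assumptions.

      # Classify pixels with a quantized 3D RGB look-up table.
      import numpy as np

      BINS = 32                                        # a 32x32x32 LUT fits in 32 KB
      lut = np.zeros((BINS, BINS, BINS), dtype=np.uint8)

      def quantize(rgb):
          return tuple((rgb * BINS) // 256)            # 8-bit channel -> LUT cell index

      # "training": mark cells for sampled peach colors (illustrative values)
      for sample in np.array([[200, 40, 50], [180, 30, 60], [210, 60, 70]]):
          lut[quantize(sample)] = 1

      def classify(frame):
          """frame: (H,W,3) uint8 -> boolean mask of peach-colored pixels."""
          idx = (frame.astype(np.int32) * BINS) // 256
          return lut[idx[..., 0], idx[..., 1], idx[..., 2]].astype(bool)

      frame = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
      print(classify(frame))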

  15. An embedded real-time red peach detection system based on an OV7670 camera, ARM cortex-M4 processor and 3D look-up tables.

    Science.gov (United States)

    Teixidó, Mercè; Font, Davinia; Pallejà, Tomàs; Tresanchez, Marcel; Nogués, Miquel; Palacín, Jordi

    2012-10-22

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future robotized harvesting arm. The embedded system will be able to perform real-time fruit detection and tracking by using a three-dimensional look-up-table (LUT) defined in the RGB color space and optimized for fruit picking. Additionally, two different methodologies for creating optimized 3D LUTs based on existing linear color models and fruit histograms were implemented in this work and compared for the case of red peaches. The resulting system is able to acquire general and zoomed orchard images and to update the relative tracking information of a red peach in the tree ten times per second.

  16. Feasibility of the integration of CRONOS, a 3-D neutronics code, into real-time simulators

    International Nuclear Information System (INIS)

    Ragusa, J.C.

    2001-01-01

    In its effort to contribute to nuclear power plant safety, CEA proposes the integration of an engineering-grade 3-D neutronics code into a real-time plant analyser. This paper describes the capabilities of the neutronics code CRONOS to achieve a fast running performance. First, we will present current core models in simulators and explain their drawbacks. Secondly, the main features of CRONOS's spatial-kinetics methods will be reviewed. We will then present an optimum core representation with respect to mesh size, choice of finite element (FE) basis and execution time, for accurate results, as well as the multi 1-D thermal-hydraulics (T/H) model developed to take into account 3-D effects in updating the cross-sections. A Main Steam Line Break (MSLB) End-of-Life (EOL) Hot-Zero-Power (HZP) accident will be used as an example, before we conclude with the perspectives of integrating CRONOS's 3-D core model into real-time simulators. (author)

  17. Feasibility of the integration of CRONOS, a 3-D neutronics code, into real-time simulators

    Energy Technology Data Exchange (ETDEWEB)

    Ragusa, J.C. [CEA Saclay, Dept. de Mecanique et de Technologie, 91 - Gif-sur-Yvette (France)

    2001-07-01

    In its effort to contribute to nuclear power plant safety, CEA proposes the integration of an engineering-grade 3-D neutronics code into a real-time plant analyser. This paper describes the capabilities of the neutronics code CRONOS to achieve a fast running performance. First, we will present current core models in simulators and explain their drawbacks. Secondly, the main features of CRONOS's spatial-kinetics methods will be reviewed. We will then present an optimum core representation with respect to mesh size, choice of finite element (FE) basis and execution time, for accurate results, as well as the multi 1-D thermal-hydraulics (T/H) model developed to take into account 3-D effects in updating the cross-sections. A Main Steam Line Break (MSLB) End-of-Life (EOL) Hot-Zero-Power (HZP) accident will be used as an example, before we conclude with the perspectives of integrating CRONOS's 3-D core model into real-time simulators. (author)

  18. Sensor Fusion of Cameras and a Laser for City-Scale 3D Reconstruction

    Directory of Open Access Journals (Sweden)

    Yunsu Bok

    2014-11-01

    Full Text Available This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near-2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  19. 3D RECONSTRUCTION OF AN UNDERWATER ARCHAEOLOGICAL SITE: COMPARISON BETWEEN LOW COST CAMERAS

    Directory of Open Access Journals (Sweden)

    A. Capra

    2015-04-01

    Full Text Available The 3D reconstruction with metric content of a submerged area, where objects and structures of archaeological interest are found, could play an important role in research and study activities and even in the digitization of cultural heritage. The reconstruction of 3D objects of interest for archaeologists constitutes a starting point for the classification and description of objects in digital format and for their subsequent fruition by users through several media. The starting point is a metric evaluation of the site obtained with photogrammetric surveying and appropriate 3D restitution. The authors have been applying the underwater photogrammetric technique for several years using underwater digital cameras and, in this paper, digital low-cost (off-the-shelf) cameras. Results of tests made on submerged objects with three cameras are presented: a Canon PowerShot G12, an Intova Sport HD and a GoPro HERO 2. The experimentation had the goal of evaluating the precision of self-calibration procedures, essential for multimedia underwater photogrammetry, and of analyzing the quality of 3D restitution. The precision obtained in the calibration and orientation procedures was assessed using the three cameras and a homogeneous set of control points. Data were processed with Agisoft PhotoScan. Subsequently, 3D models were created and the models derived from the different cameras were compared. The different potentialities of the cameras are reported in the discussion section. The 3D restitution of objects and structures was integrated with the sea bottom morphology in order to achieve a comprehensive description of the site. A possible methodology for the survey and representation of submerged objects is therefore illustrated, considering both an automatic and a semi-automatic approach.

  20. SU-F-BRB-05: Collision Avoidance Mapping Using Consumer 3D Camera

    Energy Technology Data Exchange (ETDEWEB)

    Cardan, R; Popple, R [Univ Alabama Birmingham, Birmingham, AL (United States)

    2015-06-15

    Purpose: To develop a fast and economical method of scanning a patient’s full body contour for use in collision avoidance mapping without the use of ionizing radiation. Methods: Two consumer level 3D cameras used in electronic gaming were placed in a CT simulator room to scan a phantom patient set up in a high collision probability position. A registration pattern and computer vision algorithms were used to transform the scan into the appropriate coordinate systems. The cameras were then used to scan the surface of a gantry in the treatment vault. Each scan was converted into a polygon mesh for collision testing in a general purpose polygon interference algorithm. All clinically relevant transforms were applied to the gantry and patient support to create a map of all possible collisions. The map was then tested for accuracy by physically testing the collisions with the phantom in the vault. Results: The scanning fidelity of both the gantry and patient was sufficient to produce a collision prediction accuracy of 97.1% with 64620 geometry states tested in 11.5 s. The total scanning time including computation, transformation, and generation was 22.3 seconds. Conclusion: Our results demonstrate an economical system to generate collision avoidance maps. Future work includes testing the speed of the framework in real-time collision avoidance scenarios. Research partially supported by a grant from Varian Medical Systems.
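
    The collision-map idea above (sweep all clinically relevant gantry/couch states and test the scanned meshes for interference) can be sketched as follows. A conservative axis-aligned bounding-box overlap test stands in for the general polygon-interference algorithm used in the work, and the rotation model and state ranges are hypothetical.

```python
# Sketch of building a collision map by sweeping machine states and testing
# geometry interference, as the abstract outlines. A fast axis-aligned
# bounding-box (AABB) overlap test stands in for the general polygon
# interference algorithm; meshes and state ranges are hypothetical.
import numpy as np

def rotate_z(points, deg):
    """Rotate an Nx3 vertex array about the z axis by deg degrees."""
    a = np.radians(deg)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    return points @ rot.T

def aabb_overlap(a, b):
    """Conservative interference proxy: do the bounding boxes intersect?"""
    return bool(np.all(a.min(0) <= b.max(0)) and np.all(b.min(0) <= a.max(0)))

def collision_map(gantry_verts, patient_verts, gantry_angles, couch_angles):
    """Map every (gantry, couch) state to a predicted collision flag."""
    result = {}
    for g in gantry_angles:
        gantry = rotate_z(gantry_verts, g)          # gantry rotation stand-in
        for c in couch_angles:
            patient = rotate_z(patient_verts, c)    # couch rotation stand-in
            result[(g, c)] = aabb_overlap(gantry, patient)
    return result
```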

  1. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    International Nuclear Information System (INIS)

    Anderson, Robert J.

    2014-01-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  2. Multi-target camera tracking, hand-off and display LDRD 158819 final report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  3. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, Robert J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Robotic and Security Systems Dept.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn’t lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  4. Making 3D movies of Northern Lights

    Science.gov (United States)

    Hivon, Eric; Mouette, Jean; Legault, Thierry

    2017-10-01

    We describe the steps necessary to create three-dimensional (3D) movies of Northern Lights or Aurorae Borealis out of real-time images taken with two distant high-resolution fish-eye cameras. Astrometric reconstruction of the visible stars is used to model the optical mapping of each camera and correct for it in order to properly align the two sets of images. Examples of the resulting movies can be seen at http://www.iap.fr/aurora3d

  5. Real-time multiple human perception with color-depth cameras on a mobile robot.

    Science.gov (United States)

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce a novel information concept, depth of interest, which we use to identify candidates for detection and to avoid the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), and human-object and human-human interaction. We conclude with the observation that by incorporating the depth information and using modern techniques in new ways, we are able to create an

  6. User interface using a 3D model for video surveillance

    Science.gov (United States)

    Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru

    1998-02-01

    These days, fewer people are required in industrial surveillance and monitoring applications such as plant control or building security, and they must carry out their tasks quickly and precisely. Utilizing multimedia technology is a good approach to meet this need, and we previously developed Media Controller, which is designed for such applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to provide such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function, which makes the surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language for multi-purpose and intranet use of the 3D model.

  7. Validation and Assessment of Multi-GNSS Real-Time Precise Point Positioning in Simulated Kinematic Mode Using IGS Real-Time Service

    Directory of Open Access Journals (Sweden)

    Liang Wang

    2018-02-01

    Full Text Available Precise Point Positioning (PPP) is a popular technology for precise applications based on the Global Navigation Satellite System (GNSS). Multi-GNSS combined PPP has become a hot topic in recent years with the development of multiple GNSSs. Meanwhile, with the operation of the real-time service (RTS) of the International GNSS Service (IGS) agency, which provides satellite orbit and clock corrections to the broadcast ephemeris, it is possible to obtain real-time precise products of satellite orbits and clocks and to conduct real-time PPP. In this contribution, the real-time multi-GNSS orbit and clock corrections of the CLK93 product are applied for real-time multi-GNSS PPP processing, and their orbit and clock qualities are investigated, first with a seven-day experiment comparing them with the final multi-GNSS precise product ‘GBM’ from GFZ. Then, an experiment involving real-time PPP processing for three stations in the Multi-GNSS Experiment (MGEX) network with a testing period of two weeks is conducted in order to evaluate the convergence performance of real-time PPP in a simulated kinematic mode. The experimental results show that real-time PPP can achieve convergence in less than 15 min for an accuracy level of 20 cm. Finally, the real-time data streams from 12 globally distributed IGS/MGEX stations for one month are used to assess and validate the positioning accuracy of real-time multi-GNSS PPP. The results show that the simulated kinematic positioning accuracy achieved by real-time PPP at different stations is about 3.0 to 4.0 cm for the horizontal direction and 5.0 to 7.0 cm for the three-dimensional (3D) direction.

  8. Assessing the Potential of Low-Cost 3D Cameras for the Rapid Measurement of Plant Woody Structure

    Directory of Open Access Journals (Sweden)

    Charles Nock

    2013-11-01

    Full Text Available Detailed 3D plant architectural data have numerous applications in plant science, but many existing approaches for 3D data collection are time-consuming and/or require costly equipment. Recently, there has been rapid growth in the availability of low-cost 3D cameras and related open-source software applications. 3D cameras may provide measurements of key components of plant architecture such as stem diameters and lengths; however, few tests of 3D cameras for the measurement of plant architecture have been conducted. Here, we measured Salix branch segments ranging from 2 to 13 mm in diameter with an Asus Xtion camera to quantify the limits and accuracy of branch diameter measurement with a 3D camera. By scanning at a variety of distances we also quantified the effect of scanning distance. In addition, we tested the suitability of the program KinFu for continuous 3D object scanning and modeling, as well as other similar software, to accurately record stem diameters and capture plant form (<3 m in height). Given its ability to accurately capture the diameter of branches >6 mm, the Asus Xtion may provide a novel method for the collection of 3D data on the branching architecture of woody plants. Improvements in camera measurement accuracy and available software are likely to further improve the utility of 3D cameras for plant sciences in the future.

  9. IMPLEMENTATION OF A REAL-TIME STACKING ALGORITHM IN A PHOTOGRAMMETRIC DIGITAL CAMERA FOR UAVS

    Directory of Open Access Journals (Sweden)

    A. Audi

    2017-08-01

    Full Text Available In recent years, unmanned aerial vehicles (UAVs) have become an interesting tool in aerial photography and photogrammetry activities. In this context, some applications (like cloudy-sky surveys, narrow-spectral imagery and night-vision imagery) need a long exposure time, where one of the main problems is the motion blur caused by erratic camera movements during image acquisition. This paper describes an automatic real-time stacking algorithm which produces a final composite image of high photogrammetric quality with an equivalent long exposure time, using several images acquired with short exposure times. Our method is inspired by feature-based image registration techniques. The algorithm is implemented on the lightweight IGN camera, which has an IMU sensor and a SoC/FPGA. To obtain the correct parameters for the resampling of images, the presented method accurately estimates the geometrical relation between the first and the Nth image, taking into account the internal parameters and the distortion of the camera. Features are detected in the first image by the FAST detector, then homologous points in the other images are obtained by template matching aided by the IMU sensors. The SoC/FPGA in the camera is used to speed up time-consuming parts of the algorithm, such as feature detection and image resampling, in order to achieve real-time performance, as we want to write only the resulting final image to save bandwidth on the storage device. The paper includes a detailed description of the implemented algorithm, a resource usage summary, the resulting processing time, resulting images, as well as block diagrams of the described architecture. The stacked images obtained on real surveys do not appear visually impaired. Timing results demonstrate that our algorithm can be used in real time, since its processing time is less than the writing time of an image to the storage device. An interesting by-product of this algorithm is the 3D rotation
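
    As a rough software analogue of the stacking step described above, the sketch below registers each short-exposure frame to the first one and accumulates the warped results. OpenCV ORB matching with RANSAC stands in for the paper's FAST detection and IMU-aided template matching, and the file names are hypothetical.

```python
# Rough software sketch of the stacking idea: register each short-exposure
# frame to the first one from matched features, then accumulate. ORB matching
# stands in for the paper's FAST + IMU-aided template matching; file names
# are hypothetical.
import cv2
import numpy as np

frames = [cv2.imread(f"shot_{i}.png", cv2.IMREAD_GRAYSCALE) for i in range(10)]
orb = cv2.ORB_create(2000)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

kp0, des0 = orb.detectAndCompute(frames[0], None)
stack = frames[0].astype(np.float64)

for frame in frames[1:]:
    kp, des = orb.detectAndCompute(frame, None)
    matches = bf.match(des0, des)
    src = np.float32([kp[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp0[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)   # frame -> reference
    h, w = frames[0].shape
    stack += cv2.warpPerspective(frame, H, (w, h)).astype(np.float64)

cv2.imwrite("stacked.png", np.clip(stack / len(frames), 0, 255).astype(np.uint8))
```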

  10. Poster: A Software-Defined Multi-Camera Network

    OpenAIRE

    Chen, Po-Yen; Chen, Chien; Selvaraj, Parthiban; Claesen, Luc

    2016-01-01

    The widespread popularity of OpenFlow has led to a significant increase in the number of applications developed in Software-Defined Networking (SDN). In this work, we propose the architecture of a Software-Defined Multi-Camera Network consisting of small, flexible, economic, and programmable cameras which combine the functions of the processor, switch, and camera. A Software-Defined Multi-Camera Network can effectively reduce the overall network bandwidth and reduce a large amount of the Capex a...

  11. Multi-Angle Snowflake Camera Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Stuefer, Martin [Univ. of Alaska, Fairbanks, AK (United States); Bailey, J. [Univ. of Alaska, Fairbanks, AK (United States)

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36º. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10º angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  12. 3D-FPA Hybridization Improvements, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Advanced Scientific Concepts, Inc. (ASC) is a small business, which has developed a compact, eye-safe 3D Flash LIDAR™ Camera (FLC) well suited for real-time...

  13. 3D display system using monocular multiview displays

    Science.gov (United States)

    Sakamoto, Kunio; Saruta, Kazuki; Takeda, Kazutoki

    2002-05-01

    A 3D head mounted display (HMD) system is useful for constructing a virtual space. The authors have researched virtual-reality systems connected with computer networks for real-time remote control and have developed a low-priced real-time 3D display for building these systems. We developed a 3D HMD system using monocular multi-view displays. The 3D displaying technique of this monocular multi-view display is based on the concept of the super multi-view proposed by Kajiki at TAO (Telecommunications Advancement Organization of Japan) in 1996. Our 3D HMD has two monocular multi-view displays (used as visual display units) in order to display a picture to the left eye and the right eye. The left and right images are a pair of stereoscopic images for the left and right eyes, so stereoscopic 3D images are observed.

  14. OPTIMAL CAMERA NETWORK DESIGN FOR 3D MODELING OF CULTURAL HERITAGE

    Directory of Open Access Journals (Sweden)

    B. S. Alsadik

    2012-07-01

    Full Text Available Digital cultural heritage documentation in 3D is subject to research and practical applications nowadays. Image-based modeling is a technique to create 3D models, which starts with the basic task of designing the camera network. This task is, however, quite crucial in practical applications because it needs thorough planning and a certain level of expertise and experience. Bearing in mind today's computational (mobile) power, we think that the optimal camera network should be designed in the field, therefore making preprocessing and planning dispensable. The optimal camera network is designed when certain accuracy demands are fulfilled with reasonable effort, namely keeping the number of camera shots at a minimum. In this study, we report on the development of an automatic method to design the optimum camera network for a given object of interest, focusing currently on buildings and statues. Starting from a rough point cloud derived from a video stream of object images, the initial configuration of the camera network is designed, assuming a high-resolution state-of-the-art non-metric camera. To improve the image coverage and accuracy, we use a mathematical penalty method of optimization with constraints. From the experimental tests, we found that, after optimization, maximum coverage is attained alongside a significant improvement in positional accuracy. Currently, we are working on a guiding system to ensure that the operator actually takes the desired images. Further steps will include reliable and detailed modeling of the object applying sophisticated dense matching techniques.

  15. Multi Camera Multi Object Tracking using Block Search over Epipolar Geometry

    Directory of Open Access Journals (Sweden)

    Saman Sargolzaei

    2000-01-01

    Full Text Available We present a strategy for multi-object tracking in a multi-camera environment for surveillance and security applications, where tracking a multitude of subjects is of utmost importance in a crowded scene. Our technique assumes a partially overlapped multi-camera setup where cameras share a common view from different angles to assess the positions and activities of subjects under suspicion. To establish spatial correspondence between camera views we employ an epipolar geometry technique. We propose an overlapped block search method to find the pattern of interest (the target) in new frames. A color pattern update scheme has been considered to further optimize the efficiency of the object tracking when the object pattern changes due to object motion in the fields of view of the cameras. An evaluation of our approach is presented with results on the PETS2007 dataset.
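
    The epipolar correspondence idea above can be sketched concretely: given a target location in camera A, search candidate blocks along the corresponding epipolar line in camera B and keep the best appearance match. In the sketch below the fundamental matrix and images are assumed given, and a histogram correlation score is a stand-in for the paper's color-pattern matching.

```python
# Sketch of block search over epipolar geometry: scan candidate blocks along
# the epipolar line of a point from camera A inside camera B's image. The
# fundamental matrix F is assumed known; histogram correlation stands in for
# the paper's color-pattern matching.
import cv2
import numpy as np

def epipolar_block_search(img_b, F, point_a, template, step=8):
    """Return the best-matching block position along point_a's epipolar line."""
    a, b, c = cv2.computeCorrespondEpilines(
        np.float32([[point_a]]), 1, F).reshape(3)      # line: a*x + b*y + c = 0
    th, tw = template.shape[:2]
    t_hist = cv2.calcHist([template], [0], None, [32], [0, 256])
    best, best_score = None, -1.0
    for x in range(0, img_b.shape[1] - tw, step):
        if abs(b) < 1e-9:
            continue                                   # near-vertical line
        y = int(-(a * x + c) / b)                      # point on the line
        if 0 <= y < img_b.shape[0] - th:
            block = img_b[y:y + th, x:x + tw]
            b_hist = cv2.calcHist([block], [0], None, [32], [0, 256])
            score = cv2.compareHist(t_hist, b_hist, cv2.HISTCMP_CORREL)
            if score > best_score:
                best, best_score = (x, y), score
    return best, best_score
```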

  16. Visual simultaneous localization and mapping (VSLAM) methods applied to indoor 3D topographical and radiological mapping in real-time

    International Nuclear Information System (INIS)

    Hautot, F.; Dubart, P.; Chagneau, B.; Bacri, C.O.; Abou-Khalil, R.

    2017-01-01

    New developments in the fields of robotics and computer vision make it possible to merge sensors so as to achieve fast, real-time localization of radiological measurements in space, together with near real-time radioactive source identification and characterization. These capabilities give nuclear investigations a more efficient way to evaluate operators' dosimetry, plan intervention scenarios, and perform risk mitigation and simulations, such as for accidents in unknown, potentially contaminated areas or during dismantling operations. This paper will present new progress in merging RGB-D camera-based SLAM (Simultaneous Localization and Mapping) systems with in-motion nuclear measurement methods in order to detect, locate, and evaluate the activity of radioactive sources in three dimensions.

  17. Real-time pedestrian detection with the videos of car camera

    Directory of Open Access Journals (Sweden)

    Yunling Zhang

    2015-12-01

    Full Text Available Pedestrians in the vehicle path are in danger of being hit, with severe injury to pedestrians and vehicle occupants as a possible result. Therefore, real-time pedestrian detection with the video of a vehicle-mounted camera is of great significance to vehicle–pedestrian collision warning and to the traffic safety of self-driving cars. In this article, a real-time scheme is proposed based on integral channel features and a graphics processing unit. The proposed method does not need to resize the input image. Moreover, the computationally expensive convolution of the detectors with the input image is converted into the dot product of two larger matrices, which can be computed effectively using a graphics processing unit. The experiments showed that the proposed method can detect pedestrians in the video of a car camera at 20+ frames per second with acceptable error rates. Thus, it can be applied in real-time detection tasks with the videos of car cameras.

  18. Precise real-time correction of Anger camera deadtime losses

    International Nuclear Information System (INIS)

    Woldeselassie, Tilahun

    2002-01-01

    An earlier paper dealt with modeling of the camera in terms of the resolving times, τ_0 and T, of the paralyzable detector and the nonparalyzable computer system, respectively, for the case of a full energy window. A second paper presented a decaying-source method for the accurate real-time measurement of these resolving times. The present paper first shows that the detector system can be treated as a single device with a resolving time τ_0 dependent on the source distribution. It then discusses camera operation with an energy window, the window fraction being f_w = R_p/R_d ≤ 1, where R_d and R_p are the detector and pulse-height-analyzer (PHA) outputs, respectively. The detector resolving time is shown to vary with window fraction according to τ_0p = τ_0/f_w, while T is unaffected, so that operation may be paralyzable or nonparalyzable depending on the window setting and the ratio k_T = T/τ_0. Regions of interest are described in terms of the ROI fraction, f_r = R_r/R ≤ 1, and resolving time, τ_0r = τ_0p/f_r, where R and R_r are the recorded count rates for the field of view and the region of interest, respectively. As τ_0p and τ_0r are expected to vary with input rate, it is shown that these can be measured in real time using the decaying-source method. It is then shown that camera operation both with f_w ≤ 1 and f_r ≤ 1 can be described by the simple paralyzable equation r = n·e^(−n), where n = N_w·τ_0p = N_r·τ_0r and r = R_p·τ_0p = R_r·τ_0r, N_w and N_r being the input rates within the energy window and the region of interest, respectively. An analytical solution to the paralyzable equation is then presented, which enables the input rates N_w = n/τ_0p and N_r = n/τ_0r to be obtained correct to better than 0.52% all the way up to the peak response point of the camera
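
    Since the abstract quotes the paralyzable equation r = n·e^(−n), a worked inversion is easy to illustrate: below the peak response point (n < 1) the equation can be solved with the principal branch of the Lambert W function, n = −W₀(−r). The sketch below, assuming SciPy, shows the idea of recovering input rates; it is not claimed to be the paper's own analytical solution, and the numbers are hypothetical.

```python
# Worked inversion of the paralyzable equation r = n * exp(-n) quoted in the
# abstract. For the operating region below the peak response (n < 1), the
# normalized input rate follows from the principal branch of the Lambert W
# function: n = -W0(-r). This illustrates recovering input rates; it is not
# claimed to be the paper's own analytical solution.
import numpy as np
from scipy.special import lambertw

def input_rate(recorded_rate, tau):
    """Recover the true input rate N from a recorded rate R and resolving
    time tau, using r = R*tau and n = N*tau with r = n*exp(-n)."""
    r = recorded_rate * tau
    n = -lambertw(-r, k=0).real        # branch k=0 gives the n < 1 solution
    return n / tau

R, tau = 80_000.0, 2e-6                # hypothetical counts/s and seconds
print(f"recorded {R:.0f} cps -> true input {input_rate(R, tau):.0f} cps")
```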

  19. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras

    Directory of Open Access Journals (Sweden)

    Yajie Liao

    2017-06-01

    Full Text Available Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at the joint calibration of multi-sensor setups (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method can achieve satisfactory performance in a real-time project system, with accuracy higher than that of the manufacturer’s calibration.

  20. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    Science.gov (United States)

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods have been proposed to solve the above problem with good performance in the last few decades. However, few methods are targeted at the joint calibration of multi-sensor setups (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and adding corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method can achieve satisfactory performance in a real-time project system, with accuracy higher than that of the manufacturer's calibration.

  1. APPLYING CCD CAMERAS IN STEREO PANORAMA SYSTEMS FOR 3D ENVIRONMENT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    A. Sh. Amini

    2012-07-01

    Full Text Available Proper reconstruction of 3D environments is nowadays needed by many organizations and applications. In addition to conventional methods, the use of stereo panoramas is an appropriate technique because of its simplicity, low cost and the ability to view an environment the way it is in reality. This paper investigates the applicability of stereo CCD cameras for 3D reconstruction and presentation of the environment and for geometric measurement within it. For this purpose, a rotating stereo panorama system was established using two CCDs with a base length of 350 mm and a DVR (digital video recorder) box. The stereo system was first calibrated using a 3D test field and then used to perform accurate measurements. The results of investigating the system in a real environment showed that although this kind of camera produces noisy images and does not have appropriate geometric stability, the cameras can be easily synchronized and well controlled, and reasonable accuracy (about 40 mm for objects at 12 m distance from the camera) can be achieved.

  2. Handheld real-time volumetric 3-D gamma-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Haefner, Andrew, E-mail: ahaefner@lbl.gov [Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Barnowski, Ross [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720 (United States); Luke, Paul; Amman, Mark [Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Vetter, Kai [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720 (United States); Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States)

    2017-06-11

    This paper presents the concept of real-time fusion of gamma-ray imaging and visual scene data for a hand-held mobile Compton imaging system in 3-D. The ability to obtain and integrate both gamma-ray and scene data from a mobile platform enables improved capabilities in the localization and mapping of radioactive materials. This not only enhances the ability to localize these materials, but it also provides important contextual information of the scene which once acquired can be reviewed and further analyzed subsequently. To demonstrate these concepts, the high-efficiency multimode imager (HEMI) is used in a hand-portable implementation in combination with a Microsoft Kinect sensor. This sensor, in conjunction with open-source software, provides the ability to create a 3-D model of the scene and to track the position and orientation of HEMI in real-time. By combining the gamma-ray data and visual data, accurate 3-D maps of gamma-ray sources are produced in real-time. This approach is extended to map the location of radioactive materials within objects with unknown geometry.

  3. A real-time MTFC algorithm of space remote-sensing camera based on FPGA

    Science.gov (United States)

    Zhao, Liting; Huang, Gang; Lin, Zhe

    2018-01-01

    A real-time FPGA-based MTFC (modulation transfer function compensation) algorithm for a space remote-sensing camera was designed. The algorithm provides real-time image processing to enhance image clarity while the remote-sensing camera is running on-orbit. The image restoration algorithm adopts a modular design. The on-orbit MTF measurement module calculates the edge spread function (ESF), the line spread function, the ESF difference, the normalized MTF, and the MTFC parameters. The MTFC filtering module performs image filtering and effectively suppresses noise. System Generator was used to design the image processing blocks, simplifying the system structure and the redesign process. Image gray gradient, point sharpness, edge contrast, and mid-to-high frequencies were enhanced. The image SNR after restoration decreased by less than 1 dB compared with the original image. The image restoration system can be widely used in various fields.
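
    The ESF-to-MTF chain mentioned in the record (edge spread function, its derivative the line spread function, and the normalized MTF from its Fourier transform) can be illustrated with a few lines of NumPy. This is a textbook sketch run on a synthetic blurred edge, not the paper's on-orbit FPGA implementation.

```python
# Minimal sketch of the ESF -> LSF -> MTF chain: average an edge profile,
# differentiate it, and take the FFT magnitude. The input is assumed to
# contain a near-vertical step edge; this is a textbook illustration, not
# the paper's FPGA implementation.
import numpy as np

def mtf_from_edge(edge_image):
    """Estimate the normalized MTF from rows crossing a vertical step edge."""
    esf = edge_image.mean(axis=0)              # edge spread function (ESF)
    lsf = np.gradient(esf)                     # line spread function (LSF)
    lsf = lsf / lsf.sum()                      # normalize area to 1
    mtf = np.abs(np.fft.rfft(lsf))             # MTF = |FFT of LSF|
    return mtf / mtf[0]                        # normalized so MTF(0) = 1

# Synthetic blurred step edge for demonstration.
x = np.arange(256)
edge = 1.0 / (1.0 + np.exp(-(x - 128) / 3.0)) # sigmoid edge profile
img = np.tile(edge, (64, 1))
print(mtf_from_edge(img)[:8])
```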

  4. Globally Consistent Indoor Mapping via a Decoupling Rotation and Translation Algorithm Applied to RGB-D Camera Output

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2017-10-01

    Full Text Available This paper presents a novel RGB-D 3D reconstruction algorithm for indoor environments. The method can produce globally consistent 3D maps for potential GIS applications. As consumer RGB-D cameras provide noisy depth images, the proposed algorithm decouples the rotation and translation for a more robust camera pose estimation, which makes full use of the information but also prevents inaccuracies caused by noisy depth measurements. The uncertainty in the image depth is related not only to the camera device but also to the environment; hence, a novel uncertainty model for depth measurements was developed using a Gaussian mixture applied to multiple windows. The plane features in the indoor environment contain valuable information about the global structure, which can guide the convergence of camera pose solutions, so plane and feature point constraints are incorporated in the proposed optimization framework. The proposed method was validated using publicly available RGB-D benchmarks and obtained good-quality trajectories and 3D models, which are difficult for traditional 3D reconstruction algorithms.

  5. Augmented reality during robot-assisted laparoscopic partial nephrectomy: toward real-time 3D-CT to stereoscopic video registration.

    Science.gov (United States)

    Su, Li-Ming; Vagvolgyi, Balazs P; Agarwal, Rahul; Reiley, Carol E; Taylor, Russell H; Hager, Gregory D

    2009-04-01

    To investigate a markerless tracking system for real-time stereo-endoscopic visualization of preoperative computed tomographic imaging as an augmented display during robot-assisted laparoscopic partial nephrectomy. Stereoscopic video segments of a patient undergoing robot-assisted laparoscopic partial nephrectomy for tumor and another for a partial staghorn renal calculus were processed to evaluate the performance of a three-dimensional (3D)-to-3D registration algorithm. After both cases, we registered a segment of the video recording to the corresponding preoperative 3D-computed tomography image. After calibrating the camera and overlay, 3D-to-3D registration was created between the model and the surgical recording using a modified iterative closest point technique. Image-based tracking technology tracked selected fixed points on the kidney surface to augment the image-to-model registration. Our investigation has demonstrated that we can identify and track the kidney surface in real time when applied to intraoperative video recordings and overlay the 3D models of the kidney, tumor (or stone), and collecting system semitransparently. Using a basic computer research platform, we achieved an update rate of 10 Hz and an overlay latency of 4 frames. The accuracy of the 3D registration was 1 mm. Augmented reality overlay of reconstructed 3D-computed tomography images onto real-time stereo video footage is possible using iterative closest point and image-based surface tracking technology that does not use external navigation tracking systems or preplaced surface markers. Additional studies are needed to assess the precision and to achieve fully automated registration and display for intraoperative use.
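
    The registration step above relies on a modified iterative closest point (ICP) technique. As a generic, hedged illustration of plain point-to-point ICP (not the authors' modified variant), the sketch below alternates KD-tree closest-point matching with a closed-form SVD (Kabsch) pose solve.

```python
# Minimal point-to-point ICP sketch: match closest points with a KD-tree,
# then solve the best rigid transform via SVD (Kabsch). This is a generic
# textbook version, not the authors' modified algorithm.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                  # guard against reflections
    return R, mu_d - R @ mu_s

def icp(src, dst, iters=30):
    """Iteratively align Nx3 source points to the destination cloud."""
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)        # closest-point correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
    return cur
```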

  6. INVESTIGATION OF PARALLAX ISSUES FOR MULTI-LENS MULTISPECTRAL CAMERA BAND CO-REGISTRATION

    Directory of Open Access Journals (Sweden)

    J. P. Jhan

    2017-08-01

    Full Text Available Multi-lens multispectral cameras (MSCs), such as the Micasense RedEdge and Parrot Sequoia, record multispectral information through separate lenses. Their light weight and small size make them suitable for mounting on an Unmanned Aerial System (UAS) to collect high-spatial-resolution images for vegetation investigation. However, because the multi-sensor geometry of the multi-lens structure induces significant band misregistration effects in the original images, band co-registration is necessary in order to obtain accurate spectral information. A robust and adaptive band-to-band image transform (RABBIT) is proposed for the band co-registration of multi-lens MSCs. The first step is to obtain the camera rig information from camera system calibration and to utilize the calibrated results for image transformation and lens distortion correction. Since the calibration uncertainty leads to different amounts of systematic error, the last step is to optimize the results in order to acquire better co-registration accuracy. Because parallax causes significant band misregistration effects when images are taken closer to the targets, four datasets acquired with the RedEdge and Sequoia, comprising aerial and close-range imagery, were used to evaluate the performance of RABBIT. The results for the aerial images show that RABBIT can achieve a sub-pixel accuracy level that is suitable for the band co-registration of any multi-lens MSC. In addition, the close-range images show the same performance when band co-registration focuses on a specific target for 3D modelling, or when the target is equidistant from the camera.

  7. A real-time camera calibration system based on OpenCV

    Science.gov (United States)

    Zhang, Hui; Wang, Hua; Guo, Huinan; Ren, Long; Zhou, Zuofeng

    2015-07-01

    Camera calibration is one of the essential steps in computer vision research. This paper describes a real-time OpenCV-based camera calibration system, developed and implemented in the VS2008 environment. Experimental results show that the system achieves simple and fast camera calibration, offers higher precision than MATLAB without needing manual intervention, and can be widely used in various computer vision systems.
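
    For reference, the standard OpenCV calibration flow that such a system builds on looks roughly like the sketch below: detect chessboard corners in a set of views, refine them, and solve for the intrinsics. The board geometry, square size and file names are hypothetical, and this is a generic illustration rather than the paper's system.

```python
# Hedged sketch of the standard OpenCV calibration flow: detect chessboard
# corners in several views, refine them, then solve the intrinsics. Board
# size, square size and file names are hypothetical.
import cv2
import numpy as np

pattern = (9, 6)                                   # inner corners per row/col
square = 0.025                                     # square size in meters
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points, size = [], [], None
for i in range(15):                                # hypothetical capture set
    gray = cv2.imread(f"calib_{i}.png", cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, size, None, None)
print("reprojection RMS:", rms, "\nintrinsics:\n", K)
```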

  8. Fusing inertial sensor data in an extended Kalman filter for 3D camera tracking.

    Science.gov (United States)

    Erdem, Arif Tanju; Ercan, Ali Özer

    2015-02-01

    In a setup where camera measurements are used to estimate 3D egomotion in an extended Kalman filter (EKF) framework, it is well known that inertial sensors (i.e., accelerometers and gyroscopes) are especially useful when the camera undergoes fast motion. Inertial sensor data can be fused in the EKF with the camera measurements either in the correction stage (as measurement inputs) or in the prediction stage (as control inputs). In the literature, generally only one type of inertial sensor is employed in the EKF, or when both are employed they are fused in the same stage. In this paper, we provide an extensive performance comparison of every possible combination of fusing accelerometer and gyroscope data as control or measurement inputs, using the same data set collected at different motion speeds. In particular, we compare the performances of the different approaches based on 3D pose errors, in addition to the camera reprojection errors commonly found in the literature, which provides further insight into the strengths and weaknesses of each approach. We show using both simulated and real data that it is always better to fuse both sensors in the measurement stage and that, in particular, the accelerometer helps more with 3D position tracking accuracy, whereas the gyroscope helps more with 3D orientation tracking accuracy. We also propose a simulated data generation method, which is beneficial for the design and validation of tracking algorithms involving both camera and inertial measurement unit measurements in general.
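
    The control-input versus measurement-input distinction the abstract compares can be made concrete with a toy Kalman filter. The sketch below tracks a 1-D angle/rate state and shows a gyro sample fused either in the prediction stage (as a control input) or in the correction stage (as a rate measurement); the noise levels are hypothetical and the model is deliberately simplified from full 3-D egomotion.

```python
# Toy linear Kalman filter contrasting the two fusion options compared in the
# paper: gyroscope data in the prediction stage (control input) versus the
# correction stage (measurement input). A 1-D angle/rate state stands in for
# full 3-D egomotion; noise levels are hypothetical.
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])      # state: [angle, angular rate]
Q = np.diag([1e-6, 1e-4])                  # assumed process noise

def predict(x, P, gyro=None):
    """Prediction stage; optionally use the gyro rate as a control input."""
    if gyro is not None:
        x = np.array([x[0] + gyro * dt, gyro])   # control drives the state
        return x, F @ P @ F.T + Q
    return F @ x, F @ P @ F.T + Q

def update(x, P, z, H, r):
    """Correction stage with a scalar measurement z = H x + noise."""
    H = np.atleast_2d(H)
    S = H @ P @ H.T + r
    K = P @ H.T / S                        # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
# Option A: gyro as control input, camera angle as measurement.
x, P = predict(x, P, gyro=0.5)
x, P = update(x, P, z=0.004, H=[1.0, 0.0], r=1e-4)
# Option B: gyro fused in the correction stage as a rate measurement.
x, P = predict(x, P)
x, P = update(x, P, z=0.5, H=[0.0, 1.0], r=1e-3)
```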

  9. Compact 3D Camera for Shake-the-Box Particle Tracking

    Science.gov (United States)

    Hesseling, Christina; Michaelis, Dirk; Schneiders, Jan

    2017-11-01

    Time-resolved 3D-particle tracking usually requires the time-consuming optical setup and calibration of 3 to 4 cameras. Here, a compact four-camera housing has been developed. The performance of the system using Shake-the-Box processing (Schanz et al. 2016) is characterized. It is shown that the stereo-base is large enough for sensible 3D velocity measurements. Results from successful experiments in water flows using LED illumination are presented. For large-scale wind tunnel measurements, an even more compact version of the system is mounted on a robotic arm. Once calibrated for a specific measurement volume, the necessity for recalibration is eliminated even when the system moves around. Co-axial illumination is provided through an optical fiber in the middle of the housing, illuminating the full measurement volume from one viewing direction. Helium-filled soap bubbles are used to ensure sufficient particle image intensity. This way, the measurement probe can be moved around complex 3D-objects. By automatic scanning and stitching of recorded particle tracks, the detailed time-averaged flow field of a full volume of cubic meters in size is recorded and processed. Results from an experiment at TU-Delft of the flow field around a cyclist are shown.

  10. Towards real-time 3D ultrasound planning and personalized 3D printing for breast HDR brachytherapy treatment

    International Nuclear Information System (INIS)

    Poulin, Eric; Gardi, Lori; Fenster, Aaron; Pouliot, Jean; Beaulieu, Luc

    2015-01-01

    Two different end-to-end procedures were tested for real-time planning of breast HDR brachytherapy treatment. Both methods use a 3D ultrasound (3DUS) system and a freehand catheter optimization algorithm, and both were found to be fast and efficient. We demonstrated a proof-of-concept approach for personalized real-time guidance and planning of breast HDR brachytherapy treatments.

  11. In-air versus underwater comparison of 3D reconstruction accuracy using action sport cameras.

    Science.gov (United States)

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2017-01-25

    Action sport cameras (ASC) have achieved broad consensus for recreational purposes due to ongoing cost decreases and increases in image resolution and frame rate, along with plug-and-play usability. Consequently, they have recently been considered for sport gesture studies and quantitative athletic performance evaluation. In this paper, we evaluated the potential of two ASCs (GoPro Hero3+) for in-air (laboratory) and underwater (swimming pool) three-dimensional (3D) motion analysis as a function of different camera setups involving the acquisition frequency, image resolution and field of view. This is motivated by the fact that in swimming, movement cycles are characterized by underwater and in-air phases, which imposes the technical challenge of having a split-volume configuration: an underwater measurement volume observed by underwater cameras and an in-air measurement volume observed by in-air cameras. The reconstruction of whole swimming cycles thus requires merging simultaneous measurements acquired in both volumes. Characterizing and optimizing the instrumental errors of such a configuration makes the assessment of the instrumental errors of both volumes mandatory. In order to calibrate the camera stereo pair, black spherical markers placed on two calibration tools, used both in air and underwater, and a two-step nonlinear optimization were exploited. The 3D reconstruction accuracy of testing markers and the repeatability of the estimated camera parameters accounted for system performance. For both environments, statistical tests focused on the comparison of the different camera configurations. Then, each camera configuration was compared across the two environments. In all assessed resolutions, and in both environments, the reconstruction error (on the true distance between the two testing markers) was less than 3 mm, and the error related to the working volume diagonal was in the range of 1:2000 (3×1.3×1.5 m³) to 1:7000 (4.5×2.2×1.5 m³), in agreement with the

  12. Geant4 simulation of a 3D high resolution gamma camera

    International Nuclear Information System (INIS)

    Akhdar, H.; Kezzar, K.; Aksouh, F.; Assemi, N.; AlGhamdi, S.; AlGarawi, M.; Gerl, J.

    2015-01-01

    The aim of this work is to develop a 3D gamma camera with high position resolution and sensitivity, relying on both distance/absorption and Compton scattering techniques without using any passive collimation. The proposed gamma camera is simulated in order to predict its performance, taking full advantage of Geant4 features that allow the construction of the needed detector geometry, full control of the incident gamma particles, and study of the detector response in order to test the suggested geometries. Three different geometries are simulated, and each configuration is tested with three different scintillation materials (LaBr3, LYSO and CeBr3).

  13. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    Science.gov (United States)

    Bernardina, Gustavo R D; Cerveri, Pietro; Barros, Ricardo M L; Marins, João C B; Silvatti, Amanda P

    2016-01-01

    Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continual technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis in sport gesture studies and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.

  14. Action Sport Cameras as an Instrument to Perform a 3D Underwater Motion Analysis.

    Directory of Open Access Journals (Sweden)

    Gustavo R D Bernardina

    Full Text Available Action sport cameras (ASC) are currently adopted mainly for entertainment purposes, but their continual technical improvements, together with decreasing costs, are opening them up to quantitative three-dimensional (3D) motion analysis in sport gesture studies and athletic performance evaluation. Extending this technology to sport analysis, however, still requires a methodological step forward to make ASC a metric system, encompassing ad-hoc camera setup, image processing, feature tracking, calibration and 3D reconstruction. Unlike traditional laboratory analysis, such requirements become an issue when coping with both indoor and outdoor motion acquisitions of athletes. In swimming analysis, for example, the camera setup and the calibration protocol are particularly demanding since land and underwater cameras are mandatory. In particular, the underwater camera calibration can be an issue affecting the reconstruction accuracy. In this paper, the aim is to evaluate the feasibility of ASC for 3D underwater analysis by focusing on camera setup and data acquisition protocols. Two GoPro Hero3+ Black cameras (frequency: 60 Hz; image resolutions: 1280×720/1920×1080 pixels) were placed underwater in a swimming pool, surveying a working volume of about 6 m³. A two-step custom calibration procedure, consisting of the acquisition of one static triad and one moving wand, carrying nine and one spherical passive markers, respectively, was implemented. After assessing the camera parameters, a rigid bar carrying two markers at a known distance was acquired in several positions within the working volume. The average error in the reconstructed inter-marker distances was less than 2.5 mm (1280×720) and 1.5 mm (1920×1080). The results of this study demonstrate that the calibration of underwater ASC is feasible, enabling quantitative kinematic measurements with accuracy comparable to traditional motion capture systems.

  15. YouDash3D: exploring stereoscopic 3D gaming for 3D movie theaters

    Science.gov (United States)

    Schild, Jonas; Seele, Sven; Masuch, Maic

    2012-03-01

    Along with the success of the digitally revived stereoscopic cinema, events beyond 3D movies, such as interactive 3D games, are becoming attractive for movie theater operators. In this paper, we present a case study that explores possible challenges and solutions for interactive 3D games to be played by a movie theater audience. We analyze the setting and showcase current issues related to lighting and interaction. Our second focus is to provide gameplay mechanics that make special use of stereoscopy, especially depth-based game design. Based on these results, we present YouDash3D, a game prototype that explores public stereoscopic gameplay in a reduced kiosk setup. It features a live 3D HD video stream of a professional stereo camera rig rendered in a real-time game scene. We use this effect to place the stereoscopic effigies of players into the digital game. The game showcases how stereoscopic vision can provide a novel depth-based game mechanic. Projected trigger zones and distributed clusters of the audience video allow for easy adaptation to larger audiences and 3D movie theater gaming.

  16. Interaction Control Protocols for Distributed Multi-user Multi-camera Environments

    Directory of Open Access Journals (Sweden)

    Gareth W Daniel

    2003-10-01

    Full Text Available Video-centred communication (e.g., video conferencing, multimedia online learning, traffic monitoring, and surveillance) is becoming a customary activity in our lives. The management of interactions in such an environment is a complicated HCI issue. In this paper, we present our study on a collection of interaction control protocols for distributed multi-user multi-camera environments. These protocols facilitate different approaches to managing a user's entitlement for controlling a particular camera. We describe a web-based system that allows multiple users to manipulate multiple cameras in varying remote locations. The system was developed using the Java framework, and all protocols discussed have been incorporated into the system. Experiments were designed and conducted to evaluate the effectiveness of these protocols, and to enable the identification of various human factors in a distributed multi-user, multi-camera environment. This work provides insight into the complexity associated with interaction management in video-centred communication. It can also serve as a conceptual and experimental framework for further research in this area.
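
    The paper's concrete protocols are not reproduced here, but the flavour of a camera-entitlement scheme can be sketched with a simple queue-based policy in Python; the class and method names below are invented for illustration.

        from collections import deque

        class CameraControlQueue:
            """One possible entitlement protocol: users queue for exclusive
            control of a camera and receive it when the holder releases."""

            def __init__(self):
                self.holder = None
                self.queue = deque()

            def request(self, user):
                if self.holder is None:
                    self.holder = user
                    return "granted"
                if user != self.holder and user not in self.queue:
                    self.queue.append(user)
                return "queued"

            def release(self, user):
                if user == self.holder:
                    self.holder = self.queue.popleft() if self.queue else None

        ctrl = CameraControlQueue()
        assert ctrl.request("alice") == "granted"
        assert ctrl.request("bob") == "queued"
        ctrl.release("alice")   # control passes to the next user in line
        assert ctrl.holder == "bob"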

  17. Hyper thin 3D edge measurement of honeycomb core structures based on the triangular camera-projector layout & phase-based stereo matching.

    Science.gov (United States)

    Jiang, Hongzhi; Zhao, Huijie; Li, Xudong; Quan, Chenggen

    2016-03-07

    We propose a novel hyper-thin 3D edge measurement technique to measure the profile of the 3D outer envelope of honeycomb core structures. The width of the edges of the honeycomb core is less than 0.1 mm. We introduce a triangular layout design consisting of two cameras and one projector to measure hyper-thin 3D edges and eliminate data interference from the walls. A phase-shifting algorithm and the multi-frequency heterodyne phase-unwrapping principle are applied for phase retrieval on the edges. A new stereo matching method based on phase mapping and the epipolar constraint is presented to solve correspondence searching on the edges and to remove false matches that produce 3D outliers. Experimental results demonstrate the effectiveness of the proposed method for measuring the 3D profile of honeycomb core structures.
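
    For readers unfamiliar with the two building blocks named above, the following Python sketch shows textbook N-step phase-shifting retrieval and one common form of multi-frequency (heterodyne-style) unwrapping; the authors' exact variants may differ.

        import numpy as np

        def wrapped_phase(images):
            """Wrapped phase from N >= 3 equally shifted fringe images,
            I_n = A + B*cos(phi + 2*pi*n/N)."""
            n = len(images)
            deltas = 2.0 * np.pi * np.arange(n) / n
            num = sum(img * np.sin(d) for img, d in zip(images, deltas))
            den = sum(img * np.cos(d) for img, d in zip(images, deltas))
            return np.arctan2(-num, den)

        def unwrap_with_coarse(phi_high, phi_low_abs, freq_ratio):
            """Resolve the fringe order of a high-frequency wrapped phase
            using an absolute low-frequency phase map."""
            k = np.round((freq_ratio * phi_low_abs - phi_high) / (2.0 * np.pi))
            return phi_high + 2.0 * np.pi * k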

  18. 3D deblending of simultaneous source data based on 3D multi-scale shaping operator

    Science.gov (United States)

    Zu, Shaohuan; Zhou, Hui; Mao, Weijian; Gong, Fei; Huang, Weilin

    2018-04-01

    We propose an iterative three-dimensional (3D) deblending scheme using a 3D multi-scale shaping operator to separate 3D simultaneous-source data. The proposed scheme is based on the property that signal is coherent whereas interference is incoherent in some domains, e.g., the common receiver domain and the common midpoint domain. In a two-dimensional (2D) blended record, the coherency difference between signal and interference appears in only one spatial direction. Compared with 2D deblending, 3D deblending can take more sparsity constraints into consideration to obtain better performance; e.g., in a 3D common receiver gather, the coherency difference appears in two spatial directions. Furthermore, owing to their different levels of coherency, signal and interference occupy different scales of the curvelet domain. In both 2D and 3D blended records, most of the coherent signal is located in the coarse-scale curvelet domain, while most of the incoherent interference is distributed in the fine-scale curvelet domain. The scale difference is larger in 3D deblending; thus, we apply the multi-scale shaping scheme to further improve the 3D deblending performance. We evaluate the performance of 3D and 2D deblending with the multi-scale and global shaping operators, respectively. One synthetic and one field data example demonstrate the advantage of 3D deblending with the 3D multi-scale shaping operator.

  19. Design of a Compton camera for 3D prompt-γ imaging during ion beam therapy

    Energy Technology Data Exchange (ETDEWEB)

    Roellinghoff, F., E-mail: roelling@ipnl.in2p3.fr [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Richard, M.-H., E-mail: mrichard@ipnl.in2p3.fr [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Chevallier, M.; Constanzo, J.; Dauvergne, D. [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); Freud, N. [INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Henriquet, P.; Le Foulher, F. [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); Letang, J.M. [INSA-Lyon Laboratory of Nondestructive Testing using Ionizing Radiation (CNDRI), F-69621 Villeurbanne Cedex (France); Montarou, G. [LPC, CNRS/IN2P3, Clermont-F. University (France); Ray, C.; Testa, E.; Testa, M. [Universite de Lyon, F-69622 Lyon (France); Universite Lyon 1 and CNRS/IN2P3, UMR 5822, IPNL, F-69622 Villeurbanne (France); Walenta, A.H. [Uni-Siegen, FB Physik, Emmy-Noether Campus, D-57068 Siegen (Germany)

    2011-08-21

    We investigate, by means of Geant4 simulations, a real-time method to control the position of the Bragg peak during ion therapy, based on a Compton camera in combination with a beam tagging device (hodoscope) to detect the prompt gamma rays emitted during nuclear fragmentation. The proposed set-up consists of a stack of 2 mm thick silicon strip detectors and a LYSO absorber detector. The γ emission points are reconstructed analytically by intersecting the ion trajectories given by the beam hodoscope and the Compton cones given by the camera. The camera response to a polychromatic point source in air is analyzed with regard to both spatial resolution and detection efficiency. Various geometrical configurations of the camera have been tested. In the proposed configuration, for a typical polychromatic photon point source, the spatial resolution of the camera is about 8.3 mm FWHM and the detection efficiency 2.5×10⁻⁴ (reconstructable photons/emitted photons in 4π). Finally, the clinical applicability of our system is considered and possible starting points for further development of a prototype are discussed.

  20. An Alignment Method for the Integration of Underwater 3D Data Captured by a Stereovision System and an Acoustic Camera

    Directory of Open Access Journals (Sweden)

    Antonio Lagudi

    2016-04-01

    Full Text Available The integration of underwater 3D data captured by acoustic and optical systems is a promising technique in various applications such as mapping or vehicle navigation. It compensates for the low resolution of acoustic sensors and the limitations of optical sensors in bad visibility conditions. Aligning these data is a challenging problem, as it is hard to establish a point-to-point correspondence. This paper presents a multi-sensor registration method for the automatic integration of 3D data acquired from a stereovision system and a 3D acoustic camera in close-range acquisition. An appropriate rig was used in laboratory tests to determine the relative position between the two sensor frames. The experimental results show that our alignment approach, based on the acquisition of a rig in several poses, can be adopted to estimate the rigid transformation between the two heterogeneous sensors. A first estimate of the unknown geometric transformation is obtained by registering the two 3D point clouds, but it ends up being strongly affected by noise and data dispersion. A robust and optimal estimate is obtained by statistical processing of the transformations computed for each pose. The effectiveness of the method has been demonstrated in this first experimental test of the proposed 3D opto-acoustic camera system.
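
    A core ingredient of such a registration is the least-squares rigid transform between matched 3D points. The standard SVD-based (Kabsch/Umeyama, without scale) solution is sketched below; the paper's statistical pooling of per-pose transformations is not reproduced.

        import numpy as np

        def rigid_transform(src, dst):
            """R, t minimizing ||(R @ src.T).T + t - dst|| for matched
            (n, 3) point sets."""
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            h = (src - c_src).T @ (dst - c_dst)
            u, _, vt = np.linalg.svd(h)
            d = np.sign(np.linalg.det(vt.T @ u.T))   # guard against reflection
            r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
            t = c_dst - r @ c_src
            return r, t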

  1. Comparison of Three Smart Camera Architectures for Real-Time Machine Vision System

    Directory of Open Access Journals (Sweden)

    Abdul Waheed Malik

    2013-12-01

    Full Text Available This paper presents a machine vision system for real-time computation of the distance and angle of a camera from a set of reference points located on a target board. Three different smart camera architectures were explored to compare performance parameters such as power consumption, frame rate and latency. Architecture 1 consists of hardware machine vision modules modeled at the Register Transfer (RT) level and a soft-core processor on a single FPGA chip. Architecture 2 is a commercially available software-based smart camera, the Matrox Iris GT. Architecture 3 is a two-chip solution composed of hardware machine vision modules on an FPGA and an external microcontroller. Results from a performance comparison show that Architecture 2 has higher latency and consumes much more power than Architectures 1 and 3. However, Architecture 2 benefits from an easy programming model. The smart camera system with an FPGA and an external microcontroller has lower latency and consumes less power compared to the single-FPGA solution with hardware modules and a soft-core processor.

  2. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    Science.gov (United States)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices. A stereoscopic display enables this system to project 3D information. In this paper, we describe the position detecting system for a see-through 3D viewer. A 3D display system is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images. The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world when watching the screen of a see-through 3D viewer. The goal of our research is to build a display system in which, when users see the real world through the mobile viewer, the system presents virtual 3D images floating in the air, and observers can touch and interact with these floating images, for example shaping them as children do with modeling clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by an improved parallax-barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors show the geometric analysis of the proposed measuring method, which is a simple method using a single camera rather than a stereo camera, and the results of our viewer system.
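
    Estimating the viewer pose from a single camera observing known point markers is a classic Perspective-n-Point problem. A minimal OpenCV sketch follows; the marker layout, detected image centroids and camera intrinsics are made-up placeholders.

        import cv2
        import numpy as np

        # Known LED marker positions on a planar board (metres, z = 0).
        object_pts = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0],
                               [0.2, 0.1, 0.0], [0.0, 0.1, 0.0]])
        # Detected marker centroids in the image (pixels).
        image_pts = np.array([[310.0, 250.0], [420.0, 248.0],
                              [422.0, 195.0], [308.0, 193.0]])
        camera_matrix = np.array([[800.0, 0.0, 320.0],     # assumed intrinsics
                                  [0.0, 800.0, 240.0],
                                  [0.0, 0.0, 1.0]])
        dist_coeffs = np.zeros(5)                          # assume no distortion

        ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts,
                                      camera_matrix, dist_coeffs)
        print("rotation:", rvec.ravel(), "translation:", tvec.ravel())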

  3. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera

    Directory of Open Access Journals (Sweden)

    Yufu Qu

    2018-01-01

    Full Text Available In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in calculation speed becomes more noticeable.

  4. Rapid 3D Reconstruction for Image Sequence Acquired from UAV Camera.

    Science.gov (United States)

    Qu, Yufu; Huang, Jianyu; Zhang, Xuan

    2018-01-14

    In order to reconstruct three-dimensional (3D) structures from an image sequence captured by an unmanned aerial vehicle (UAV) camera and improve the processing speed, we propose a rapid 3D reconstruction method that is based on an image queue, considering the continuity and relevance of UAV camera images. The proposed approach first compresses the feature points of each image into three principal component points using the principal component analysis method. In order to select the key images suitable for 3D reconstruction, the principal component points are used to estimate the interrelationships between images. Second, these key images are inserted into a fixed-length image queue. The positions and orientations of the images are calculated, and the 3D coordinates of the feature points are estimated using weighted bundle adjustment. With this structural information, the depth maps of these images can be calculated. Next, we update the image queue by deleting some of the old images and inserting some new images, and a structural calculation of all the images can be performed by repeating the previous steps. Finally, a dense 3D point cloud can be obtained using the depth-map fusion method. The experimental results indicate that when the texture of the images is complex and the number of images exceeds 100, the proposed method can improve the calculation speed by more than a factor of four with almost no loss of precision. Furthermore, as the number of images increases, the improvement in calculation speed becomes more noticeable.
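
    The abstract does not fully specify how an image's feature points are compressed into "three principal component points"; one plausible reading, shown purely for illustration, takes the centroid plus one point along each principal axis scaled by its standard deviation.

        import numpy as np

        def principal_component_points(keypoints):
            """Compress (n, 2) feature-point coordinates into three
            representative points via PCA (illustrative reading only)."""
            pts = np.asarray(keypoints, dtype=np.float64)
            centroid = pts.mean(axis=0)
            cov = np.cov((pts - centroid).T)
            eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
            p_major = centroid + np.sqrt(eigvals[1]) * eigvecs[:, 1]
            p_minor = centroid + np.sqrt(eigvals[0]) * eigvecs[:, 0]
            return np.vstack([centroid, p_major, p_minor])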

  5. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Yanchao Dong

    2016-07-01

    Full Text Available This paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points in the 2D image and then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework can track the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking results are reliable for expression analysis and mental state inference.

  6. 3D vision system for intelligent milking robot automation

    Science.gov (United States)

    Akhloufi, M. A.

    2013-12-01

    In a milking robot, the correct localization and positioning of the milking teat cups is of very high importance. Milking robot technology has not changed in a decade and is based primarily on laser profiles for approximate teat position estimation. This technology has reached its limit and does not allow optimal positioning of the milking cups. Also, in the presence of occlusions, the milking robot fails to milk the cow. These problems have economic consequences for producers and for animal health (e.g. development of mastitis). To overcome the limitations of current robots, we have developed a new system based on 3D vision, capable of efficiently positioning the milking cups. A prototype of an intelligent robot system based on 3D vision for real-time positioning of a milking robot has been built and tested under various conditions on a synthetic udder model (in static and moving scenarios). Experimental tests were performed using 3D Time-Of-Flight (TOF) and RGBD cameras. The proposed algorithms permit the online segmentation of teats by combining 2D and 3D visual information. The obtained results permit computation of the 3D teat position, which is then sent to the milking robot for teat cup positioning. The vision system has real-time performance and monitors the optimal positioning of the cups even in the presence of motion. The results obtained with both TOF and RGBD cameras show the good performance of the proposed system. The best performance was obtained with RGBD cameras, and this latter technology will be used in future real-life experimental tests.

  7. Recursive 3D-reconstruction of structured scenes using a moving camera - application to robotics

    International Nuclear Information System (INIS)

    Boukarri, Bachir

    1989-01-01

    This thesis is devoted to the perception of a structured environment and proposes a new method that allows 3D reconstruction of an interesting part of the world using a mobile camera. Our work is divided into three essential parts dedicated to the 2D-information aspect, the 3D-information aspect, and a validation of the method. In the first part, we present a method that produces a topologic and geometric image representation based on 'segment' and 'junction' features. Then, a 2D-matching method based on a hypothesis prediction and verification algorithm is proposed to match features issued from two successive images. The second part deals with 3D reconstruction using a triangulation technique, and discusses our new method introducing an 'Estimation-Construction-Fusion' process. This ensures a complete and accurate 3D representation, and a permanent estimation of the camera position with respect to the model. The merging process allows refinement of the 3D representation using a powerful tool: a Kalman filter. In the last part, experimental results from simulated and real data images are reported to show the efficiency of the method. (author) [fr

  8. A real-time networked camera system : a scheduled distributed camera system reduces the latency

    NARCIS (Netherlands)

    Karatoy, H.

    2012-01-01

    This report presents the results of a Real-time Networked Camera System, commissioned by the SAN Group in TU/e. Distributed Systems are motivated by two reasons: the first is the physical environment as a requirement, and the second is to provide a better Quality of Service (QoS). This

  9. Towards real-time non contact spatial resolved oxygenation monitoring using a multi spectral filter array camera in various light conditions

    Science.gov (United States)

    Bauer, Jacob R.; van Beekum, Karlijn; Klaessens, John; Noordmans, Herke Jan; Boer, Christa; Hardeberg, Jon Y.; Verdaasdonk, Rudolf M.

    2018-02-01

    Non-contact spatially resolved oxygenation measurement remains an open challenge in the biomedical field and in non-contact patient monitoring. Although point measurements are the clinical standard to this day, resolving regional differences in oxygenation will improve the quality and safety of care. Recent developments in spectral imaging have resulted in spectral filter array (SFA) cameras. These provide the means to acquire spatial spectral videos in real time and allow a spatial approach to spectroscopy. In this study, the performance of a 25-channel near-infrared SFA camera was studied by obtaining spatial oxygenation maps of the hands during an occlusion of the left upper arm in 7 healthy volunteers. For comparison, a clinical oxygenation monitoring system, INVOS, was used as a reference. For the NIR SFA camera, oxygenation curves were derived from 2-3 wavelength bands with custom-made fast analysis software using a basic algorithm. Dynamic oxygenation changes determined with the NIR SFA camera and the INVOS system at different regional locations of the occluded versus non-occluded hands were in good agreement. To increase the signal-to-noise ratio, the algorithm and image acquisition were optimised. The measurements were robust to different illumination conditions with NIR light sources. This study shows that imaging of relative oxygenation changes over larger body areas is potentially possible in real time.
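
    A "basic algorithm" of the kind mentioned above can be sketched with a modified Beer-Lambert model over two wavelength bands; the extinction coefficients below are placeholders and pathlength factors are omitted, so this illustrates the idea rather than the authors' implementation.

        import numpy as np

        # Placeholder extinction coefficients [eps_HbO2, eps_Hb] per band.
        EPS = np.array([[1.10, 0.70],
                        [0.80, 1.05]])

        def relative_oxygenation(attenuation):
            """attenuation: (2, h, w) array of -log(I/I0) per band.
            Returns an StO2-like map, HbO2 / (HbO2 + Hb)."""
            a = attenuation.reshape(2, -1)
            conc = np.linalg.solve(EPS, a)           # per-pixel [HbO2, Hb]
            sto2 = conc[0] / np.clip(conc[0] + conc[1], 1e-9, None)
            return sto2.reshape(attenuation.shape[1:])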

  10. Accurate estimation of camera shot noise in real time

    Science.gov (United States)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo and video cameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise includes the random component, while spatial noise includes the pattern component. Temporal noise can in turn be divided into signal-dependent shot noise and signal-independent dark temporal noise. For measurement of camera noise characteristics, the most widely used methods are standards (for example, EMVA Standard 1288). These allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measuring the temporal noise of photo and video cameras based on the automatic segmentation of nonuniform targets (ASNT). Only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated the shot and dark temporal noise of cameras consistently in real time using the modified ASNT method. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12 bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12 bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10 bit ADC) and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8 bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time for registering and processing the frames used for temporal noise estimation was measured: using a standard computer, frames were registered and processed in a fraction of a second to several seconds only. Also the
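
    The ASNT segmentation itself is not reproduced here, but the two-frame idea rests on standard photon-transfer relations, sketched below: differencing two frames of the same scene cancels the pattern noise, and a linear fit of temporal variance against mean signal separates shot noise (slope) from dark noise (intercept).

        import numpy as np

        def temporal_noise(frame1, frame2):
            """Single-frame temporal noise std from two registered frames."""
            diff = frame1.astype(np.float64) - frame2.astype(np.float64)
            return diff.std() / np.sqrt(2.0)

        def shot_noise_fit(region_means, region_variances):
            """Fit variance = gain * mean + dark_var (Poisson model)."""
            gain, dark_var = np.polyfit(region_means, region_variances, 1)
            return gain, dark_var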

  11. Deformation analysis of a sinkhole in Thuringia using multi-temporal multi-view stereo 3D reconstruction data

    Science.gov (United States)

    Petschko, Helene; Goetz, Jason; Schmidt, Sven

    2017-04-01

    Sinkholes are a serious threat to life, personal property and infrastructure in large parts of Thuringia. Over 9000 sinkholes have been documented by the Geological Survey of Thuringia, caused by collapsing cavities that formed due to solution processes within the local bedrock. However, little is known about surface processes and their dynamics at the flanks of a sinkhole once it has formed. These processes are of high interest, as they might lead to dangerous situations at or within the vicinity of the sinkhole. Our objective was the analysis of these deformations over time in 3D by applying terrestrial photogrammetry with a simple DSLR camera. Within this study, we analyzed deformations within a sinkhole close to Bad Frankenhausen (Thuringia) using terrestrial photogrammetry and multi-view stereo 3D reconstruction to obtain a 3D point cloud describing the morphology of the sinkhole. This was performed for multiple data collection campaigns over a 6-month period. The photos of the sinkhole were taken with a Nikon D3000 SLR camera. For the comparison of the point clouds, the Multiscale Model to Model Comparison (M3C2) plugin of the software CloudCompare was used. It applies advanced methods of point cloud difference calculation that consider the co-registration error between two point clouds when assessing the significance of the calculated difference (given in meters). Three Styrofoam cuboids of known dimensions (16 cm wide/29 cm high/11.5 cm deep) were placed within the sinkhole to test the accuracy of the point cloud difference calculation. The multi-view stereo 3D reconstruction was performed with Agisoft Photoscan. Preliminary analysis indicates that about 26% of the sinkhole showed changes exceeding the co-registration error of the point clouds. The areas of change can mainly be detected on the flanks of the sinkhole and on an earth pillar that formed in the center of the sinkhole. These changes describe
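
    M3C2 itself works with local surface normals and cylindrical neighbourhoods; as a deliberately crude stand-in, the sketch below flags per-point nearest-neighbour distances between two epochs as significant only where they exceed the co-registration error.

        import numpy as np
        from scipy.spatial import cKDTree

        def cloud_change(reference, compared, reg_error=0.01):
            """reference, compared: (n, 3) point clouds from two epochs.
            Returns per-point distances and a significance mask."""
            dist, _ = cKDTree(reference).query(compared)
            return dist, dist > reg_error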

  12. Real-time 3-D space numerical shake prediction for earthquake early warning

    Science.gov (United States)

    Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang

    2017-12-01

    In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, waves are assumed to propagate on the 2-D surface of the earth in these methods. In fact, since seismic waves propagate in the 3-D sphere of the earth, 2-D space modeling of wave propagation results in inaccurate wave estimation. In this paper, we propose a 3-D space numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory, and incorporate a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D space model can estimate real-time ground motion precisely, and overprediction is alleviated when using the 3-D space model.

  13. The Performance Evaluation of Multi-Image 3d Reconstruction Software with Different Sensors

    Science.gov (United States)

    Mousavi, V.; Khosravi, M.; Ahmadi, M.; Noori, N.; Naveh, A. Hosseini; Varshosaz, M.

    2015-12-01

    Today, multi-image 3D reconstruction is an active research field, and generating three-dimensional models of objects is one of the most discussed issues in Photogrammetry and Computer Vision; it can be accomplished using range-based or image-based methods. The very accurate and dense point clouds generated by range-based methods such as structured light systems and laser scanners have established them as reliable tools in industry. Image-based 3D digitization methodologies offer the option of reconstructing an object from a set of unordered images that depict it from different viewpoints. As their hardware requirements are narrowed down to a digital camera and a computer system, they compose an attractive 3D digitization approach; consequently, although range-based methods are generally very accurate, image-based methods are low-cost and can easily be used by non-professional users. One of the factors affecting the accuracy of the obtained model in image-based methods is the software and algorithm used to generate the three-dimensional model. These algorithms are provided in the form of commercial software, open source tools and web-based services. Another important factor in the accuracy of the obtained model is the type of sensor used. Due to the availability of mobile sensors to the public, the popularity of professional sensors and the advent of stereo sensors, a comparison of these three sensor types plays an effective role in evaluating and finding the optimal method to generate three-dimensional models. Much research has been carried out to identify suitable software and algorithms to achieve an accurate and complete model, yet little attention has been paid to the type of sensor used and its effect on the quality of the final model. The purpose of this paper is to examine and introduce an appropriate combination of a sensor and software to provide a complete model with the highest accuracy. To do this, different software, used in previous studies, were compared and

  14. Automatic respiration tracking for radiotherapy using optical 3D camera

    Science.gov (United States)

    Li, Tuotuo; Geng, Jason; Li, Shidong

    2013-03-01

    Rapid optical three-dimensional (O3D) imaging systems provide accurate digitized 3D surface data in real time, with no patient contact and no radiation. The accurate 3D surface images offer crucial information in image-guided radiation therapy (IGRT) for accurate patient repositioning and respiration management. However, application of O3D imaging techniques to image-guided radiotherapy has been clinically challenged by body deformation, pathological and anatomical variations among individual patients, the extremely high dimensionality of the 3D surface data, and irregular respiration motion. In existing clinical radiation therapy (RT) procedures, target displacements are caused by (1) inter-fractional anatomy changes due to weight, swelling, and food/water intake; (2) intra-fractional variations from anatomy changes within a treatment session due to voluntary/involuntary physiologic processes (e.g. respiration, muscle relaxation); (3) patient setup misalignment in daily repositioning due to user errors; and (4) changes of markers or positioning devices, etc. Presently, a viable solution is lacking for in-vivo tracking of target motion and anatomy changes during beam-on time without exposing the patient to additional ionizing radiation or high magnetic fields. Current O3D-guided radiotherapy systems rely on selected points or areas in the 3D surface to track surface motion. The configuration of the marks or areas may change with time, which makes it inconsistent in quantifying and interpreting respiration patterns. To meet the challenge of performing real-time respiration tracking using O3D imaging technology in IGRT, we propose a new approach to automatic respiration motion analysis based on a linear dimensionality-reduction technique, principal component analysis (PCA). The optical 3D image sequence is decomposed with principal component analysis into a limited number of independent (orthogonal) motion patterns (a low-dimensional eigen-space spanned by eigenvectors). New
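
    The heart of the proposed decomposition can be sketched in a few lines: stacking flattened surface samples over time and projecting onto the dominant eigen-motion yields a one-dimensional respiration trace. This is a simplified sketch, not the authors' full pipeline.

        import numpy as np

        def respiration_trace(surface_frames):
            """surface_frames: (t, n) array, each row one flattened O3D
            surface sample; returns the score on the first eigen-motion."""
            x = surface_frames - surface_frames.mean(axis=0)
            _, _, vt = np.linalg.svd(x, full_matrices=False)
            return x @ vt[0]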

  15. Real-Time 3D Profile Measurement Using Structured Light

    International Nuclear Information System (INIS)

    Xu, L; Zhang, Z J; Ma, H; Yu, Y J

    2006-01-01

    The paper builds a real-time system for 3D profile measurement using structured-light imaging. It allows a hand-held object to rotate freely in a space-time coded light field projected by the projector. The surface of the measured object under the projected coded light is imaged, and the system shows surface reconstruction results online. This feedback helps the user adjust the object's pose in the light field according to missing or erroneous data, helping to achieve complete data coverage for the reconstruction. This method can acquire a denser data cloud and achieve higher reconstruction accuracy and efficiency. To meet the real-time requirements, the paper presents a non-restricted light-plane model suited to stripe structured-light systems, designs a three-frame space-time coded stripe pattern, and uses advanced ICP algorithms to align 3D data from multiple views.

  16. Design of an experimental four-camera setup for enhanced 3D surface reconstruction in microsurgery

    Directory of Open Access Journals (Sweden)

    Marzi Christian

    2017-09-01

    Full Text Available Future fully digital surgical visualization systems enable a wide range of new options. Owing to optomechanical limitations, a main disadvantage of today's surgical microscopes is their inability to provide arbitrary perspectives to more than two observers. In a fully digital microscopic system, multiple arbitrary views can be generated from a 3D reconstruction. Modern surgical microscopes allow replacing the eyepieces with cameras in order to record stereoscopic videos. A reconstruction from these videos can only contain the amount of detail the recording camera system gathers from the scene. Therefore, occluded surfaces can result in a faulty reconstruction for deviating stereoscopic perspectives. By adding cameras recording the object from different angles, additional information about the scene is acquired, allowing the reconstruction to be improved. Our approach is to use a fixed four-camera setup as a front-end system to capture enhanced 3D topography of a pseudo-surgical scene. This experimental setup provides images for the reconstruction algorithms and for the generation of multiple observer stereo perspectives. The concept of the designed setup is based on the common main objective (CMO) principle of current surgical microscopes. These systems are well established and optically mature. Furthermore, the CMO principle allows a more compact design and less calibration effort than cameras with separate optics. Behind the CMO, four pupils separate the four channels, each recorded by one camera. The designed system captures an area of approximately 28 mm × 28 mm with four cameras, thus allowing images from six different stereo perspectives to be processed. In order to verify the setup, it is modelled in silico. It can be used in further studies to test algorithms for 3D reconstruction from up to four perspectives and to provide information about the impact of additionally recorded perspectives on the enhancement of a reconstruction.

  17. Novel methods for real-time 3D facial recognition

    OpenAIRE

    Rodrigues, Marcos; Robinson, Alan

    2010-01-01

    In this paper we discuss our approach to real-time 3D face recognition. We argue the need for real-time operation in a realistic scenario and highlight the required pre- and post-processing operations for effective 3D facial recognition. We focus attention on several operations, including face and eye detection, and fast post-processing operations such as hole filling, mesh smoothing and noise removal. We consider strategies for hole filling such as bilinear and polynomial interpolation and Lapla...
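
    As a simple illustration of the hole-filling step mentioned above, the sketch below interpolates missing range values from valid neighbours with SciPy; the bilinear, polynomial and Laplacian-style variants the paper considers are not reproduced.

        import numpy as np
        from scipy.interpolate import griddata

        def fill_holes(depth, hole_mask):
            """Linearly interpolate holes in a 2-D range image; points
            outside the convex hull of valid samples stay NaN."""
            h, w = depth.shape
            yy, xx = np.mgrid[0:h, 0:w]
            valid = ~hole_mask
            filled = depth.astype(np.float64).copy()
            filled[hole_mask] = griddata(
                np.column_stack([yy[valid], xx[valid]]), depth[valid],
                np.column_stack([yy[hole_mask], xx[hole_mask]]),
                method="linear")
            return filled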

  18. Synthesized view comparison method for no-reference 3D image quality assessment

    Science.gov (United States)

    Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun

    2018-04-01

    We develop a no-reference image quality assessment metric to evaluate the quality of synthesized views rendered from the Multi-view Video plus Depth (MVD) format. Our metric, named Synthesized View Comparison (SVC), is designed for real-time quality monitoring at the receiver side of a 3D-TV system. The metric utilizes virtual middle views warped from the left and right views by a depth-image-based rendering (DIBR) algorithm, and compares the difference between the virtual views rendered from the different cameras using the Structural SIMilarity (SSIM) index, a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
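
    The comparison step at the core of SVC can be sketched as follows, assuming the two candidate renderings of the same middle viewpoint have already been produced by DIBR (warping not shown) and normalized to [0, 1].

        from skimage.metrics import structural_similarity

        def svc_score(view_from_left, view_from_right):
            """Mutual SSIM between two renderings of the same viewpoint;
            a low score suggests synthesis artefacts, with no reference
            image required."""
            score, ssim_map = structural_similarity(
                view_from_left, view_from_right, full=True, data_range=1.0)
            return score, ssim_map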

  19. Nonlaser-based 3D surface imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Shin-yee; Johnson, R.K.; Sherwood, R.J. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    3D surface imaging refers to methods that generate a 3D surface representation of objects in a scene under viewing. Laser-based 3D surface imaging systems are commonly used in manufacturing, robotics and biomedical research. Although laser-based systems provide satisfactory solutions for most applications, there are situations where non-laser-based approaches are preferred. The issues that sometimes make alternative methods more attractive are: (1) real-time data capture, (2) eye safety, (3) portability, and (4) working distance. The focus of this presentation is on generating a 3D surface from multiple 2D projected images using CCD cameras, without a laser light source. Two methods are presented: stereo vision and depth-from-focus. Their applications are described.
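
    As a minimal example of the stereo-vision route, the OpenCV sketch below computes a disparity map from a rectified grayscale pair; the file names and matcher parameters are placeholders.

        import cv2

        left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified pair
        right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

        stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disparity = stereo.compute(left, right).astype("float32") / 16.0
        # Depth then follows from z = f * B / d, given the focal length f
        # and baseline B from calibration (not shown here).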

  20. The use of consumer depth cameras for 3D surface imaging of people with obesity: A feasibility study.

    Science.gov (United States)

    Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R

    2018-05-21

    Three-dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth camera 3D surface imaging system for imaging people with obesity. The concurrent validity of the depth camera based system was investigated by comparing measures of mid-trunk volume to a gold standard. The repeatability and acceptability of the depth camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth camera system and the gold standard but excellent correlation between volume estimates (r² = 0.997), with little evidence of proportional bias. The depth camera system was highly repeatable: low typical error (0.192 L), high intraclass correlation coefficient (>0.999) and low technical error of measurement (0.64%). Depth camera based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low-cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting. Copyright © 2018 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.

  1. Real-time tracking with a 3D-flow processor array

    International Nuclear Information System (INIS)

    Crosetto, D.

    1993-01-01

    The problem of real-time track finding has to date been addressed with CAMs (Content Addressable Memories) or with fast coincidence logic, because the processing scheme was thought to have much slower performance. Advances in technology, together with a new architectural approach, make it feasible to also explore the computing technique for real-time track finding, which offers, with respect to the CAM approach, the advantage of implementing algorithms that can find more parameters, such as the sagitta, curvature, pt, etc. This report describes real-time track finding using a new computing approach based on the 3D-Flow array processor system. This system consists of a fixed interconnection architecture scheme, allowing flexible algorithm implementation on a scalable platform. The 3D-Flow parallel processing system for track finding is scalable in size and performance by increasing the number of processors, the processor speed, or the number of pipelined stages. The present article describes the conceptual idea and the design stage of the project

  2. Real-time tracking with a 3D-Flow processor array

    International Nuclear Information System (INIS)

    Crosetto, D.

    1993-06-01

    The problem of real-time track finding has to date been addressed with CAMs (Content Addressable Memories) or with fast coincidence logic, because the processing scheme was thought to have much slower performance. Advances in technology, together with a new architectural approach, make it feasible to also explore the computing technique for real-time track finding, which offers, with respect to the CAM approach, the advantage of implementing algorithms that can find more parameters, such as the sagitta, curvature, pt, etc. The report describes real-time track finding using a new computing approach based on the 3D-Flow array processor system. This system consists of a fixed interconnection architecture scheme, allowing flexible algorithm implementation on a scalable platform. The 3D-Flow parallel processing system for track finding is scalable in size and performance by increasing the number of processors, the processor speed, or the number of pipelined stages. The present article describes the conceptual idea and the design stage of the project

  3. Real-time microscopic 3D shape measurement based on optimized pulse-width-modulation binary fringe projection

    Science.gov (United States)

    Hu, Yan; Chen, Qian; Feng, Shijie; Tao, Tianyang; Li, Hui; Zuo, Chao

    2017-07-01

    In recent years, tremendous progress has been made in 3D measurement techniques, contributing to the realization of faster and more accurate 3D measurement. As a representative of these techniques, fringe projection profilometry (FPP) has become a commonly used method for real-time 3D measurement, such as real-time quality control and online inspection. To date, most related research has been concerned with macroscopic 3D measurement, while microscopic 3D measurement, especially real-time microscopic 3D measurement, is rarely reported. However, microscopic 3D measurement plays an important role in 3D metrology and is indispensable in applications measuring micro-scale objects, such as the accurate metrology of MEMS components in final devices to ensure their proper performance. In this paper, we propose a method which effectively combines optimized binary structured patterns with a number-theoretical phase unwrapping algorithm to realize real-time microscopic 3D measurement. A slight defocusing of our optimized binary patterns considerably alleviates the measurement error of four-step phase-shifting FPP, giving the binary patterns performance comparable to ideal sinusoidal patterns. The static measurement accuracy can reach 8 μm, and the experimental results on a vibrating earphone diaphragm show that our system can realize real-time 3D measurement at 120 frames per second (FPS) with a measurement range of 8 mm × 6 mm laterally and 8 mm in depth.
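
    The optimized PWM patterns themselves are not reproduced here, but the underlying idea, that a slightly defocused binary fringe approximates a sinusoid, can be illustrated with a plain square wave and a Gaussian blur standing in for projector defocus.

        import numpy as np
        from scipy.ndimage import gaussian_filter1d

        def defocused_binary_fringe(width, period_px, sigma=None):
            """Plain (non-optimized) binary fringe, blurred to emulate
            slight projector defocus."""
            x = np.arange(width)
            binary = (np.sin(2.0 * np.pi * x / period_px) >= 0).astype(float)
            return gaussian_filter1d(binary, sigma or period_px / 6.0)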

  4. First results of the multi-purpose real-time processing video camera system on the Wendelstein 7-X stellarator and implications for future devices

    Science.gov (United States)

    Zoletnik, S.; Biedermann, C.; Cseh, G.; Kocsis, G.; König, R.; Szabolics, T.; Szepesi, T.; Wendelstein 7-X Team

    2018-01-01

    A special video camera has been developed for the 10-camera overview video system of the Wendelstein 7-X (W7-X) stellarator, considering the multiple application needs and limitations resulting from this complex long-pulse superconducting stellarator experiment. The event detection intelligent camera (EDICAM) uses a special 1.3 Mpixel CMOS sensor with non-destructive read capability, which enables fast monitoring of smaller Regions of Interest (ROIs) even during long exposures. The camera can perform simple data evaluation algorithms (minimum/maximum, mean comparison to levels) on the ROI data, which can dynamically change the readout process and generate output signals. Multiple EDICAM cameras were operated in the first campaign of W7-X and their capabilities were explored in the real environment. The data prove that the camera can be used for taking long-exposure (10-100 ms) overview images of the plasma while sub-ms monitoring and even multi-camera correlated edge plasma turbulence measurements of smaller areas are performed in parallel. The latter revealed that filamentary turbulence structures extend between neighboring modules of the stellarator. Considerations emerging for future upgrades of this system and similar setups on future long-pulse fusion experiments such as ITER are discussed.

  5. A Simple Setup to Perform 3D Locomotion Tracking in Zebrafish by Using a Single Camera

    Directory of Open Access Journals (Sweden)

    Gilbert Audira

    2018-02-01

    Full Text Available Generally, the measurement of three-dimensional (3D) swimming behavior in zebrafish relies on commercial software or sophisticated scripts, and depends on two or more cameras to capture the video. Here, we establish a simple and economical apparatus to detect 3D locomotion in zebrafish, involving a single-camera capture system that records zebrafish movement in a specially designed water tank with a mirror tilted at 45 degrees. The recorded videos are analyzed using idTracker, while spatial positions are calibrated with ImageJ software and 3D trajectories are plotted with Origin 9.1 software. This simple setup allows scientists to track the 3D swimming behavior of multiple zebrafish at low cost and with precise spatial positioning, showing great potential for fish behavioral research in the future.
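
    In an idealized sketch of the mirror geometry, the direct view supplies two coordinates and the 45-degree mirror view supplies the third; calibration offsets and refraction at the water surface are ignored here, so this is illustrative only.

        import numpy as np

        def mirror_3d(front_xy, mirror_xy, scale=1.0):
            """front_xy: (x, y) of the fish in the direct view;
            mirror_xy: its position in the mirror view, whose second
            image axis maps to the depth axis of the tank."""
            x, y = front_xy
            _, z = mirror_xy
            return scale * np.array([x, y, z])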

  6. Real-Time 3D Motion capture by monocular vision and virtual rendering

    OpenAIRE

    Gomez Jauregui , David Antonio; Horain , Patrick

    2012-01-01

    Avatars in networked 3D virtual environments allow users to interact over the Internet and to get some feeling of virtual telepresence. However, avatar control may be tedious. Motion capture systems based on 3D sensors have recently reached the consumer market, but webcams and camera-phones are more widespread and cheaper. The proposed demonstration aims at animating a user's avatar from real-time 3D motion capture by monoscopic computer vision, thus allowing virtual t...

  7. Single-frame 3D human pose recovery from multiple views

    NARCIS (Netherlands)

    Hofmann, M.; Gavrila, D.M.

    2009-01-01

    We present a system for the estimation of unconstrained 3D human upper body pose from multi-camera single-frame views. Pose recovery starts with a shape detection stage where candidate poses are generated based on hierarchical exemplar matching in the individual camera views. The hierarchy used in

  8. 3-D computer graphics based on integral photography.

    Science.gov (United States)

    Naemura, T; Yoshida, T; Harashima, H

    2001-02-12

    Integral photography (IP), one of the ideal 3-D photographic technologies, can be regarded as a method of capturing and displaying the light rays passing through a plane. The NHK Science and Technical Research Laboratories have developed a real-time IP system using an HDTV camera and an optical fiber array. In this paper, the authors propose a method of synthesizing arbitrary views from IP images captured by the HDTV camera. This is a kind of image-based rendering system, founded on the 4-D data-space representation of light rays. Experimental results show the potential to improve the quality of images rendered by computer graphics techniques.

  9. A Comparison of Iterative 2D-3D Pose Estimation Methods for Real-Time Applications

    DEFF Research Database (Denmark)

    Grest, Daniel; Krüger, Volker; Petersen, Thomas

    2009-01-01

    This work compares iterative 2D-3D pose estimation methods for use in real-time applications. The compared methods are available to the public as C++ code. One method is part of the openCV library, namely POSIT. Because POSIT is not applicable to planar 3D point configurations, we include the planar P...

  10. MO-FG-CAMPUS-JeP3-04: Feasibility Study of Real-Time Ultrasound Monitoring for Abdominal Stereotactic Body Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Su, Lin; Kien Ng, Sook; Zhang, Ying; Herman, Joseph; Wong, John; Ding, Kai [Department of Radiation Oncology, John Hopkins University, Baltimore, MD (United States); Ji, Tianlong [Department of Radiation Oncology, The First Hospital of China Medical University, Shenyang, Liaoning (China); Iordachita, Iulian [Department of Mechanical Engineering, Johns Hopkins University, Baltimore, MD (United States); Tutkun Sen, H.; Kazanzides, Peter; Lediju Bell, Muyinatu A. [Department of Computer Science, Johns Hopkins University, Baltimore, MD (United States)

    2016-06-15

    Purpose: Ultrasound is ideal for real-time monitoring in radiotherapy, offering high soft tissue contrast, no ionizing radiation, portability, and cost effectiveness. Few studies have investigated the clinical application of real-time ultrasound monitoring for abdominal stereotactic body radiation therapy (SBRT). This study aims to demonstrate the feasibility of real-time monitoring of 3D target motion using 4D ultrasound. Methods: An ultrasound probe holding system was designed to allow the clinician to freely move and lock the ultrasound probe. For the phantom study, an abdominal ultrasound phantom was secured on a 2D programmable respiratory motion stage. One side of the stage was elevated relative to the other to generate 3D motion. The motion stage made periodic breath-hold movements. Phantom movement tracked by an infrared camera was considered the ground truth. For the volunteer study, three healthy subjects underwent the same setup used for abdominal SBRT with active breathing control (ABC). 4D ultrasound B-mode images were acquired for both phantom and volunteers for real-time monitoring. Ten breath-hold cycles were monitored in each experiment. For the phantom, the target motion tracked by ultrasound was compared with the motion tracked by the infrared camera. For the healthy volunteers, the reproducibility of ABC breath-holds was evaluated. Results: The volunteer study showed that the ultrasound system fitted well into the clinical SBRT setup. The reproducibility for 10 breath-holds was less than 2 mm in all three directions for all three volunteers. For the phantom study, the motion between inspiration and expiration captured by the camera (ground truth) was 2.35±0.02 mm, 1.28±0.04 mm, and 8.85±0.03 mm in the LR, AP, and SI directions, respectively. The motion monitored by ultrasound was 2.21±0.07 mm, 1.32±0.12 mm, and 9.10±0.08 mm, respectively. The motion monitoring error in any direction was less than 0.5 mm. Conclusion: The volunteer study proved the clinical feasibility of real-time ultrasound monitoring for abdominal SBRT. The phantom and volunteer ABC

  11. MO-FG-CAMPUS-JeP3-04: Feasibility Study of Real-Time Ultrasound Monitoring for Abdominal Stereotactic Body Radiation Therapy

    International Nuclear Information System (INIS)

    Su, Lin; Kien Ng, Sook; Zhang, Ying; Herman, Joseph; Wong, John; Ding, Kai; Ji, Tianlong; Iordachita, Iulian; Tutkun Sen, H.; Kazanzides, Peter; Lediju Bell, Muyinatu A.

    2016-01-01

    Purpose: Ultrasound is ideal for real-time monitoring in radiotherapy, offering high soft tissue contrast, no ionizing radiation, portability, and cost effectiveness. Few studies have investigated the clinical application of real-time ultrasound monitoring for abdominal stereotactic body radiation therapy (SBRT). This study aims to demonstrate the feasibility of real-time monitoring of 3D target motion using 4D ultrasound. Methods: An ultrasound probe holding system was designed to allow the clinician to freely move and lock the ultrasound probe. For the phantom study, an abdominal ultrasound phantom was secured on a 2D programmable respiratory motion stage. One side of the stage was elevated relative to the other to generate 3D motion. The motion stage made periodic breath-hold movements. Phantom movement tracked by an infrared camera was considered the ground truth. For the volunteer study, three healthy subjects underwent the same setup used for abdominal SBRT with active breathing control (ABC). 4D ultrasound B-mode images were acquired for both phantom and volunteers for real-time monitoring. Ten breath-hold cycles were monitored in each experiment. For the phantom, the target motion tracked by ultrasound was compared with the motion tracked by the infrared camera. For the healthy volunteers, the reproducibility of ABC breath-holds was evaluated. Results: The volunteer study showed that the ultrasound system fitted well into the clinical SBRT setup. The reproducibility for 10 breath-holds was less than 2 mm in all three directions for all three volunteers. For the phantom study, the motion between inspiration and expiration captured by the camera (ground truth) was 2.35±0.02 mm, 1.28±0.04 mm, and 8.85±0.03 mm in the LR, AP, and SI directions, respectively. The motion monitored by ultrasound was 2.21±0.07 mm, 1.32±0.12 mm, and 9.10±0.08 mm, respectively. The motion monitoring error in any direction was less than 0.5 mm. Conclusion: The volunteer study proved the clinical feasibility of real-time ultrasound monitoring for abdominal SBRT. The phantom and volunteer ABC

  12. Winter precipitation particle size distribution measurement by Multi-Angle Snowflake Camera

    Science.gov (United States)

    Huang, Gwo-Jong; Kleinkort, Cameron; Bringi, V. N.; Notaroš, Branislav M.

    2017-12-01

    From the radar meteorology viewpoint, the most important properties for quantitative precipitation estimation of winter events are the 3D shape, size, and mass of precipitation particles, as well as the particle size distribution (PSD). In order to measure these properties precisely, optical instruments may be the best choice. The Multi-Angle Snowflake Camera (MASC) is a relatively new instrument equipped with three high-resolution cameras that capture winter precipitation particle images from three non-parallel angles, in addition to measuring the particle fall speed using two pairs of infrared motion sensors. However, results from the MASC so far have usually been presented monthly or seasonally, with particle sizes given as histograms; no previous studies have used the MASC for a single-storm study, and none have used the MASC to measure the PSD. We propose a methodology for obtaining the winter precipitation PSD measured by the MASC, and present and discuss the development, implementation, and application of a new technique for PSD computation based on MASC images. Overall, this is the first study of a MASC-based PSD. We present PSD MASC experiments and results for segments of two snow events to demonstrate the performance of our PSD algorithm. The results show that the self-consistency of the MASC-measured single-camera PSDs is good. To cross-validate the PSD measurements, we compare the MASC mean PSD (averaged over three cameras) with a collocated 2D Video Disdrometer, and observe good agreement between the two sets of results.
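
    Once per-particle sizes and an effective sample volume are available from the MASC images, the PSD itself is a normalized histogram; a schematic version follows (estimating the effective sample volume, the hard part, is not shown).

        import numpy as np

        def masc_psd(diameters_mm, sample_volume_m3, bin_edges_mm):
            """Concentration N(D) per mm per m^3 from particle diameters."""
            counts, edges = np.histogram(diameters_mm, bins=bin_edges_mm)
            return counts / (sample_volume_m3 * np.diff(edges)), edges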

  13. The design of red-blue 3D video fusion system based on DM642

    Science.gov (United States)

    Fu, Rongguo; Luo, Hao; Lv, Jin; Feng, Shu; Wei, Yifang; Zhang, Hao

    2016-10-01

    To address the uncertainty in traditional 3D video capture, including camera focal lengths and the distance and angle parameters between the two cameras, a red-blue 3D video fusion system based on the DM642 hardware processing platform is designed with parallel optical axes. To counter the brightness reduction of traditional 3D video, a brightness enhancement algorithm based on human visual characteristics is proposed, together with a luminance-component processing method based on the YCbCr color space. The BIOS real-time operating system is used to improve real-time performance. The video processing circuit, built around the DM642, enhances the brightness of the images, converts the video signals from YCbCr to RGB, and extracts the R component from one camera's video while the G and B components are extracted synchronously from the other, finally outputting the fused 3D images. Real-time adjustments such as translation and scaling of the two color components are realized through serial communication between the VC software and the BIOS. By adding the red and blue components, the system reduces the loss of the chrominance components and keeps picture color saturation above 95% of the original. The enhancement algorithm is optimized to reduce the amount of data processed during fusion, shortening the fusion time and improving the viewing experience. Experimental results show that the system can capture images at close range, output red-blue 3D video, and present a pleasant experience to audiences wearing red-blue glasses.
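
    The channel-extraction step described above amounts to building an anaglyph: the red channel from one camera, green and blue from the other, with an optional horizontal shift standing in for the system's real-time translation adjustment. A minimal NumPy sketch (BGR channel order as in OpenCV):

        import numpy as np

        def redblue_fuse(left_bgr, right_bgr, dx=0):
            """Fuse two parallel-axis frames into a red-blue 3D frame."""
            right = np.roll(right_bgr, dx, axis=1) if dx else right_bgr
            out = right.copy()                 # keep B and G from the right
            out[:, :, 2] = left_bgr[:, :, 2]   # take R from the left
            return out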

  14. Multi-Angle Snowflake Camera Value-Added Product

    Energy Technology Data Exchange (ETDEWEB)

    Shkurko, Konstantin [Univ. of Utah, Salt Lake City, UT (United States); Garrett, T. [Univ. of Utah, Salt Lake City, UT (United States); Gaustad, K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2016-12-01

    The Multi-Angle Snowflake Camera (MASC) addresses a need for high-resolution multi-angle imaging of hydrometeors in freefall with simultaneous measurement of fallspeed. As illustrated in Figure 1, the MASC consists of three cameras, separated by 36°, each pointing at an identical focal point approximately 10 cm away. Located immediately above each camera, a light aims directly at the center of the depth of field of its corresponding camera. The focal point at which the cameras are aimed lies within a ring through which hydrometeors fall. The ring houses a system of near-infrared emitter-detector pairs, arranged in two arrays separated vertically by 32 mm. When hydrometeors pass through the lower array, they simultaneously trigger all cameras and lights. Fallspeed is calculated from the time it takes to traverse the distance between the upper and lower triggering arrays. The trigger electronics filter out ambient light fluctuations associated with varying sunlight and shadows. The microprocessor onboard the MASC controls the camera system and communicates with the personal computer (PC). The image data are sent via a FireWire 800 line, and fallspeed (and camera control) via a Universal Serial Bus (USB) line that relies on RS232-over-USB serial conversion. See Table 1 for specific details on the MASC located at the Oliktok Point Mobile Facility on the North Slope of Alaska. The value-added product (VAP) detailed in this documentation analyzes the raw data (Section 2.0) using Python: image processing relies on the OpenCV library, and the derived aggregate statistics on averaging across images. See Sections 4.1 and 4.2 for more details on which variables are computed.
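
    The fallspeed computation described above reduces to dividing the 32-mm separation of the trigger arrays by the traversal time; a one-line illustration (timestamp names are hypothetical):

    ```python
    ARRAY_SEPARATION_M = 0.032  # vertical spacing of the two IR trigger arrays

    def fallspeed(t_upper_s, t_lower_s):
        """Fallspeed (m/s) from the time a hydrometeor takes to traverse
        the gap between the upper and lower triggering arrays."""
        return ARRAY_SEPARATION_M / (t_lower_s - t_upper_s)

    print(fallspeed(0.000, 0.016))  # -> 2.0 m/s
    ```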

  15. Multi-spectral CCD camera system for ocean water color and seacoast observation

    Science.gov (United States)

    Zhu, Min; Chen, Shiping; Wu, Yanlin; Huang, Qiaolin; Jin, Weiqi

    2001-10-01

    One of the earth-observing instruments on the HY-1 satellite, which will be launched in 2001, the multi-spectral CCD camera system, is developed by the Beijing Institute of Space Mechanics & Electricity (BISME), Chinese Academy of Space Technology (CAST). From a 798 km orbit, the system can provide images with 250 m ground resolution and a swath of 500 km. It is mainly used for coastal zone dynamic mapping and ocean water color monitoring, which include the pollution of offshore and coastal zones, plant cover, water color, ice, underwater terrain, suspended sediment, mudflats, soil and vapor gross. The multi-spectral camera system is composed of four monochrome CCD cameras, which are line-array-based, 'push-broom' scanning cameras, each responsible for one of the four spectral bands. The camera system adopts view-field registration; that is, each camera scans the same region at the same moment. Each camera contains optics, a focal-plane assembly, electrical circuits, an installation structure, a calibration system, thermal control, and so on. The primary features of the camera system are: (1) offset of the central wavelength is better than 5 nm; (2) degree of polarization is less than 0.5%; (3) signal-to-noise ratio is about 1000; (4) dynamic range is better than 2000:1; (5) registration precision is better than 0.3 pixel; (6) quantization is 12 bit.

  16. Multi-view and 3D deformable part models.

    Science.gov (United States)

    Pepik, Bojan; Stark, Michael; Gehler, Peter; Schiele, Bernt

    2015-11-01

    As objects are inherently 3D, they were modeled in 3D in the early days of computer vision. Due to the ambiguities arising from mapping 2D features to 3D models, 3D object representations were later neglected, and 2D feature-based models are the predominant paradigm in object detection nowadays. While such models have achieved outstanding bounding-box detection performance, they come with limited expressiveness, as they are clearly limited in their capability of reasoning about 3D shape or viewpoints. In this work, we bring the worlds of 3D and 2D object representations closer by building an object detector which leverages the expressive power of 3D object representations while at the same time remaining robustly matchable to image evidence. To that end, we gradually extend the successful deformable part model [1] to include viewpoint information and part-level 3D geometry information, resulting in several different models with different levels of expressiveness. We end up with a 3D object model consisting of multiple object parts represented in 3D and a continuous appearance model. We experimentally verify that our models, while providing richer object hypotheses than 2D object models, provide consistently better joint object localization and viewpoint estimation than the state-of-the-art multi-view and 3D object detectors on various benchmarks (KITTI [2], 3D object classes [3], Pascal3D+ [4], Pascal VOC 2007 [5], EPFL multi-view cars [6]).

  17. IPS – A SYSTEM FOR REAL-TIME NAVIGATION AND 3D MODELING

    Directory of Open Access Journals (Sweden)

    D. Grießbach

    2012-07-01

    Reliable navigation and 3D modeling is a necessary requirement for any autonomous system in real-world scenarios. The German Aerospace Center (DLR) developed a system providing precise information about the local position and orientation of a mobile platform, as well as three-dimensional information about its environment, in real time. This system, called the Integral Positioning System (IPS), can be applied in both indoor and outdoor environments. To achieve high precision, reliability, integrity and availability, a multi-sensor approach was chosen. The important role of sensor data synchronization, system calibration and spatial referencing is emphasized, because the data from several sensors have to be fused using a Kalman filter. A hardware operating system (HW-OS) is presented that facilitates the low-level integration of different interfaces. The benefit of this approach is an increased precision of synchronization at the expense of additional engineering costs. It will be shown that the additional effort is leveraged by the new design concept, since the HW-OS methodology allows a proven, flexible and fast design process, high re-usability of common components and, consequently, higher reliability within the low-level sensor fusion. Another main focus of the paper is the IPS software. DLR developed, implemented and tested a flexible and extensible software concept for data grabbing, efficient data handling and data preprocessing (e.g., image rectification), the latter being essential for thematic data processing. The standard outputs of IPS are the trajectory of the moving platform and a high-density 3D point cloud of the current environment. This information is provided in real time. Based on these results, information processing on more abstract levels can be executed.
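
    The abstract does not give the IPS filter equations; as a reminder of how synchronized multi-sensor data are typically fused, here is a minimal linear Kalman filter predict/update sketch with a toy constant-velocity model. The matrices and noise levels are illustrative assumptions, not DLR's actual design.

    ```python
    import numpy as np

    def kf_predict(x, P, F, Q):
        """Propagate state x and covariance P with motion model F, process noise Q."""
        return F @ x, F @ P @ F.T + Q

    def kf_update(x, P, z, H, R):
        """Fuse one time-stamped sensor measurement z (model H, noise R)."""
        y = z - H @ x                    # innovation
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        return x + K @ y, (np.eye(len(x)) - K @ H) @ P

    # 1D position/velocity toy: IMU-style prediction, vision-style position fix
    dt = 0.01
    F = np.array([[1.0, dt], [0.0, 1.0]]); Q = 1e-4 * np.eye(2)
    H = np.array([[1.0, 0.0]]);            R = np.array([[1e-2]])
    x, P = np.zeros(2), np.eye(2)
    x, P = kf_predict(x, P, F, Q)
    x, P = kf_update(x, P, np.array([0.05]), H, R)
    ```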

  18. Real-Time Acquisition of High Quality Face Sequences from an Active Pan-Tilt-Zoom Camera

    DEFF Research Database (Denmark)

    Haque, Mohammad A.; Nasrollahi, Kamal; Moeslund, Thomas B.

    2013-01-01

    Traditional still-camera-based facial image acquisition systems in surveillance applications produce low-quality face images, mainly due to the distance between the camera and the subjects of interest. Furthermore, people in such videos usually move around, change their head poses, and vary their facial expressions. This paper presents a pan-tilt-zoom (PTZ) camera-based real-time high-quality face image acquisition system, which utilizes the pan-tilt-zoom parameters of a camera to focus on a human face in a scene and employs a face quality assessment method to log the best-quality faces from the captured frames. The system consists of four modules: face detection, camera control, face tracking, and face quality assessment before logging. Experimental results show that the proposed system can effectively log high-quality faces from the active camera in real time (an average of 61.74 ms was spent per frame) with an accuracy of 85.27% compared to human-annotated data.

  19. The multi-camera optical surveillance system (MOS)

    International Nuclear Information System (INIS)

    Otto, P.; Wagner, H.; Richter, B.; Gaertner, K.J.; Laszlo, G.; Neumann, G.

    1991-01-01

    The transition from film cameras to video surveillance systems, in particular the implementation of high-capacity multi-camera video systems, results in a large increase in the number of recorded scenes. Consequently, there is a substantial increase in the manpower required for review. Moreover, modern microprocessor-controlled equipment facilitates the collection of additional data associated with each scene. Both the scene and the annotated information have to be evaluated by the inspector. The design of video surveillance systems for safeguards therefore has to account for both appropriate recording and reviewing techniques. An aspect of principal importance is that the video information is stored on tape. Under the German Support Programme to the Agency, a technical concept has been developed which aims at optimizing the capabilities of a multi-camera optical surveillance (MOS) system, including the reviewing technique. This concept is presented in the following paper, including a discussion of reviewing and reliability.

  20. Accuracy of Real-time Couch Tracking During 3-dimensional Conformal Radiation Therapy, Intensity Modulated Radiation Therapy, and Volumetric Modulated Arc Therapy for Prostate Cancer

    International Nuclear Information System (INIS)

    Wilbert, Juergen; Baier, Kurt; Hermann, Christian; Flentje, Michael; Guckenberger, Matthias

    2013-01-01

    Purpose: To evaluate the accuracy of real-time couch tracking for prostate cancer. Methods and Materials: Intrafractional motion trajectories of 15 prostate cancer patients were the basis for this phantom study; prostate motion had been monitored with the Calypso System. An industrial robot moved a phantom along these trajectories, motion was detected via an infrared camera system, and the robotic HexaPOD couch was used for real-time counter-steering. Residual phantom motion during real-time tracking was measured with the infrared camera system. Film dosimetry was performed during delivery of 3-dimensional conformal radiation therapy (3D-CRT), step-and-shoot intensity modulated radiation therapy (IMRT), and volumetric modulated arc therapy (VMAT). Results: Motion of the prostate was largest in the anterior–posterior direction, with systematic (Σ) and random (σ) errors of 2.3 mm and 2.9 mm, respectively; the prostate was outside a threshold of 5 mm (3D vector) for 25.0%±19.8% of treatment time. Real-time tracking reduced prostate motion to Σ = 0.01 mm and σ = 0.55 mm in the anterior–posterior direction; the prostate remained within a 1-mm and 5-mm threshold for 93.9%±4.6% and 99.7%±0.4% of the time, respectively. Without real-time tracking, pass rates based on a γ index of 2%/2 mm in film dosimetry ranged between 66% and 72% for 3D-CRT, IMRT, and VMAT, on average. Real-time tracking increased pass rates to a minimum of 98% on average for 3D-CRT, IMRT, and VMAT. Conclusions: Real-time couch tracking resulted in submillimeter accuracy for prostate cancer, which translated into high dosimetric accuracy independent of whether 3D-CRT, IMRT, or VMAT was used.

  1. High speed display algorithm for 3D medical images using Multi Layer Range Image

    International Nuclear Information System (INIS)

    Ban, Hideyuki; Suzuki, Ryuuichi

    1993-01-01

    We propose a high-speed algorithm that displays 3D voxel images obtained from medical imaging systems such as MRI. The algorithm converts voxel image data into six Multi-Layer Range Image (MLRI) data structures, an augmentation of the range image. To avoid computation for invisible voxels, the algorithm selects at most three of the six MLRI data sets in accordance with the view direction. The proposed algorithm displays 256 x 256 x 256 voxel data within 0.6 seconds on a 22-MIPS workstation without special hardware such as a graphics engine. Real-time display should be possible on a 100-MIPS-class workstation with our algorithm. (author)
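
    Assuming the six MLRI correspond to the six axis-aligned faces of the voxel cube (a plausible reading of the abstract, not stated explicitly), selecting the at most three visible layers is a sign test on the view direction:

    ```python
    import numpy as np

    # Outward normals of six axis-aligned MLRI "faces": +x, -x, +y, -y, +z, -z
    FACE_NORMALS = np.array([[ 1, 0, 0], [-1, 0, 0],
                             [ 0, 1, 0], [ 0, -1, 0],
                             [ 0, 0, 1], [ 0, 0, -1]], dtype=float)

    def visible_mlri(view_dir):
        """Indices of the (at most 3) MLRI facing the viewer: a face can only
        be seen when its outward normal points toward the camera."""
        v = np.asarray(view_dir, dtype=float)
        v /= np.linalg.norm(v)
        return [i for i, n in enumerate(FACE_NORMALS) if n @ v > 0]

    print(visible_mlri([1, 1, 1]))  # -> [0, 2, 4], the +x, +y, +z faces
    ```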

  2. The evaluation of single-view and multi-view fusion 3D echocardiography using image-driven segmentation and tracking.

    Science.gov (United States)

    Rajpoot, Kashif; Grau, Vicente; Noble, J Alison; Becher, Harald; Szmigielski, Cezary

    2011-08-01

    Real-time 3D echocardiography (RT3DE) promises a more objective and complete cardiac functional analysis by dynamic 3D image acquisition. Despite several efforts towards automation of left ventricle (LV) segmentation and tracking, these remain challenging research problems due to the poor-quality nature of acquired images usually containing missing anatomical information, speckle noise, and limited field-of-view (FOV). Recently, multi-view fusion 3D echocardiography has been introduced as acquiring multiple conventional single-view RT3DE images with small probe movements and fusing them together after alignment. This concept of multi-view fusion helps to improve image quality and anatomical information and extends the FOV. We now take this work further by comparing single-view and multi-view fused images in a systematic study. In order to better illustrate the differences, this work evaluates image quality and information content of single-view and multi-view fused images using image-driven LV endocardial segmentation and tracking. The image-driven methods were utilized to fully exploit image quality and anatomical information present in the image, thus purposely not including any high-level constraints like prior shape or motion knowledge in the analysis approaches. Experiments show that multi-view fused images are better suited for LV segmentation and tracking, while relatively more failures and errors were observed on single-view images. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. SPADAS: a high-speed 3D single-photon camera for advanced driver assistance systems

    Science.gov (United States)

    Bronzi, D.; Zou, Y.; Bellisai, S.; Villa, F.; Tisa, S.; Tosi, A.; Zappa, F.

    2015-02-01

    Advanced Driver Assistance Systems (ADAS) are the most advanced technologies for fighting road accidents. Within ADAS, an important role is played by radar- and lidar-based sensors, which are mostly employed for collision avoidance and adaptive cruise control. Nonetheless, they have a narrow field-of-view and a limited ability to detect and differentiate objects. Standard camera-based technologies (e.g. stereovision) could balance these weaknesses, but they are currently not able to fulfill all automotive requirements (distance range, accuracy, acquisition speed, and frame rate). To this purpose, we developed an automotive-oriented CMOS single-photon camera for optical 3D ranging based on indirect time-of-flight (iTOF) measurements. Imagers based on single-photon avalanche diode (SPAD) arrays offer higher sensitivity than CCD/CMOS rangefinders, inherently better time resolution, higher accuracy and better linearity. Moreover, iTOF requires neither high-bandwidth electronics nor short-pulsed lasers, hence allowing the development of cost-effective systems. The CMOS SPAD sensor is based on 64 × 32 pixels, each able to process both 2D intensity data and 3D depth-ranging information, with background suppression. Pixel-level memories allow fully parallel imaging and prevent motion artefacts (skew, wobble, motion blur) and partial-exposure effects, which would otherwise hinder the detection of fast-moving objects. The camera is housed in an aluminum case supporting a 12 mm F/1.4 C-mount imaging lens, with a 40°×20° field-of-view. The whole system is very rugged and compact and a good fit for a vehicle's cockpit, with dimensions of 80 mm × 45 mm × 70 mm and less than 1 W power consumption. To provide the required optical power (1.5 W, eye-safe) and to allow fast (up to 25 MHz) modulation of the active illumination, we developed a modular laser source based on five laser driver cards, with three 808 nm lasers each. We present the full characterization of
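
    The depth arithmetic behind iTOF is standard four-phase demodulation, sketched below with the 25 MHz modulation frequency taken from the abstract; the per-pixel sample names are assumptions, and the actual SPADAS pixel pipeline is more involved.

    ```python
    import numpy as np

    C = 299_792_458.0  # speed of light (m/s)

    def itof_depth(a0, a1, a2, a3, f_mod=25e6):
        """Depth from four phase-stepped samples (0, 90, 180, 270 degrees) of a
        continuous-wave indirect time-of-flight measurement."""
        phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)
        # Unambiguous up to c / (2 * f_mod), i.e. about 6 m at 25 MHz
        return C * phase / (4 * np.pi * f_mod)
    ```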

  4. SU-C-201-04: Noise and Temporal Resolution in a Near Real-Time 3D Dosimeter

    Energy Technology Data Exchange (ETDEWEB)

    Rilling, M [Department of physics, engineering physics and optics, Universite Laval, Quebec City, QC (Canada); Centre de recherche sur le cancer, Universite Laval, Quebec City, QC (Canada); Radiation oncology department, CHU de Quebec, Quebec City, QC (Canada); Center for optics, photonics and lasers, Universite Laval, Quebec City, Quebec (Canada); Goulet, M [Radiation oncology department, CHU de Quebec, Quebec City, QC (Canada); Beaulieu, L; Archambault, L [Department of physics, engineering physics and optics, Universite Laval, Quebec City, QC (Canada); Centre de recherche sur le cancer, Universite Laval, Quebec City, QC (Canada); Radiation oncology department, CHU de Quebec, Quebec City, QC (Canada); Thibault, S [Center for optics, photonics and lasers, Universite Laval, Quebec City, Quebec (Canada)

    2016-06-15

    Purpose: To characterize the performance of a real-time three-dimensional scintillation dosimeter in terms of signal-to-noise ratio (SNR) and temporal resolution of 3D dose measurements. This study quantifies its efficiency in measuring low dose levels characteristic of EBRT dynamic treatments, and in reproducing field profiles for varying multileaf collimator (MLC) speeds. Methods: The dosimeter prototype uses a plenoptic camera to acquire continuous images of the light field emitted by a 10×10×10 cm³ plastic scintillator. Using EPID acquisitions, ray-tracing-based iterative tomographic algorithms allow millimeter-sized reconstruction of relative 3D dose distributions. Measurements were taken at 6 MV, 400 MU/min with the scintillator centered at the isocenter, first receiving doses from 1.4 to 30.6 cGy. Dynamic measurements were then performed by closing half of the MLCs at speeds of 0.67 to 2.5 cm/s, at 0° and 90° collimator angles. A reference static half-field was obtained for measured profile comparison. Results: The SNR steadily increases as a function of dose and reaches a clinically adequate plateau of 80 at 10 cGy. Below this, the decrease in collected light and the increase in pixel noise diminish the SNR; nonetheless, the EPID acquisitions and the voxel correlation employed in the reconstruction algorithms result in suitable SNR values (>75) even at low doses. For dynamic measurements at varying MLC speeds, central relative dose profiles are characterized by gradients at %D₅₀ of 8.48 to 22.7 %/mm. These values converge towards the 32.8 %/mm gradient measured for the static reference field profile, but are limited by the dosimeter's current acquisition rate of 1 Hz. Conclusion: This study emphasizes the efficiency of the 3D dose distribution reconstructions, while identifying limits of the current prototype's temporal resolution in terms of dynamic EBRT parameters. This work paves the way for providing an optimized, second

  5. SU-C-201-04: Noise and Temporal Resolution in a Near Real-Time 3D Dosimeter

    International Nuclear Information System (INIS)

    Rilling, M; Goulet, M; Beaulieu, L; Archambault, L; Thibault, S

    2016-01-01

    Purpose: To characterize the performance of a real-time three-dimensional scintillation dosimeter in terms of signal-to-noise ratio (SNR) and temporal resolution of 3D dose measurements. This study quantifies its efficiency in measuring low dose levels characteristic of EBRT dynamic treatments, and in reproducing field profiles for varying multileaf collimator (MLC) speeds. Methods: The dosimeter prototype uses a plenoptic camera to acquire continuous images of the light field emitted by a 10×10×10 cm³ plastic scintillator. Using EPID acquisitions, ray-tracing-based iterative tomographic algorithms allow millimeter-sized reconstruction of relative 3D dose distributions. Measurements were taken at 6 MV, 400 MU/min with the scintillator centered at the isocenter, first receiving doses from 1.4 to 30.6 cGy. Dynamic measurements were then performed by closing half of the MLCs at speeds of 0.67 to 2.5 cm/s, at 0° and 90° collimator angles. A reference static half-field was obtained for measured profile comparison. Results: The SNR steadily increases as a function of dose and reaches a clinically adequate plateau of 80 at 10 cGy. Below this, the decrease in collected light and the increase in pixel noise diminish the SNR; nonetheless, the EPID acquisitions and the voxel correlation employed in the reconstruction algorithms result in suitable SNR values (>75) even at low doses. For dynamic measurements at varying MLC speeds, central relative dose profiles are characterized by gradients at %D₅₀ of 8.48 to 22.7 %/mm. These values converge towards the 32.8 %/mm gradient measured for the static reference field profile, but are limited by the dosimeter's current acquisition rate of 1 Hz. Conclusion: This study emphasizes the efficiency of the 3D dose distribution reconstructions, while identifying limits of the current prototype's temporal resolution in terms of dynamic EBRT parameters. This work paves the way for providing an optimized, second-generation real-time 3D

  6. Real-time Stereoscopic 3D for E-Robotics Learning

    Directory of Open Access Journals (Sweden)

    Richard Y. Chiou

    2011-02-01

    Following the design and testing of a successful 3-dimensional surveillance system, this 3D scheme has been implemented in online robotics learning at Drexel University. A real-time application, utilizing robot controllers, programmable logic controllers and sensors, has been developed in the "MET 205 Robotics and Mechatronics" class to provide the students with a better robotics education. The integration of the 3D system allows the students to precisely program the robot and execute functions remotely. Upon the students' recommendation, polarization was chosen as the main platform behind the 3D robotic system. Stereoscopic calculations are carried out for calibration purposes to display the images with the highest possible comfort level and 3D effect. The calculations are further validated by comparing the results with students' evaluations. Because the system is Internet-based, multiple clients can perform online automation development. In the future, students at different universities will be able to cross-control robotic components of different types around the world. With the development of this 3D E-Robotics interface, automation resources and robotic learning can be shared and enriched regardless of location.

  7. Assessment and Calibration of a RGB-D Camera (Kinect v2 Sensor) Towards a Potential Use for Close-Range 3D Modeling

    Directory of Open Access Journals (Sweden)

    Elise Lachat

    2015-10-01

    In the last decade, RGB-D cameras - also called range imaging cameras - have evolved continuously. Because of their limited cost and their ability to measure distances at a high frame rate, such sensors are especially appreciated for applications in robotics or computer vision. The Kinect v1 (released by Microsoft in November 2010) promoted the use of RGB-D cameras, and a second version of the sensor arrived on the market in July 2014. Since it is possible to obtain point clouds of an observed scene at high frequency, one could imagine applying this type of sensor to answer the need for 3D acquisition. However, due to the technology involved, some questions have to be considered, such as the suitability and accuracy of RGB-D cameras for close-range 3D modeling. The quality of the acquired data therefore represents a major axis of investigation. In this paper, the use of the recent Kinect v2 sensor to reconstruct small objects in three dimensions has been investigated. To achieve this goal, a survey of the sensor characteristics as well as a calibration approach are relevant. After an accuracy assessment of the produced models, the benefits and drawbacks of the Kinect v2 compared to the first version of the sensor, and then to photogrammetry, are discussed.

  8. Multi-Purpose Crew Vehicle Camera Asset Planning: Imagery Previsualization

    Science.gov (United States)

    Beaulieu, K.

    2014-01-01

    Using JSC-developed and other industry-standard off-the-shelf 3D modeling, animation, and rendering software packages, the Image Science Analysis Group (ISAG) supports Orion Project imagery planning efforts through dynamic 3D simulation and realistic previsualization of ground-, vehicle-, and air-based camera output.

  9. A 3D high-resolution gamma camera for radiopharmaceutical studies with small animals

    CERN Document Server

    Loudos, G K; Giokaris, N D; Styliaris, E; Archimandritis, S C; Varvarigou, A D; Papanicolas, C N; Majewski, S; Weisenberger, D; Pani, R; Scopinaro, F; Uzunoglu, N K; Maintas, D; Stefanis, K

    2003-01-01

    The results of studies conducted with a small field of view tomographic gamma camera based on a Position Sensitive Photomultiplier Tube are reported. The system has been used for the evaluation of radiopharmaceuticals in small animals. Phantom studies have shown a spatial resolution of 2 mm in planar and 2-3 mm in tomographic imaging. Imaging studies in mice have been carried out both in 2D and 3D. Conventional radiopharmaceuticals have been used and the results have been compared with images from a clinically used system.

  10. A SIMD-VLIW Smart Camera Architecture for Real-Time Face Recognition

    NARCIS (Netherlands)

    Kleihorst, R.P.; Broers, H.A.T.; Abbo, A.A.; Ebrahimmalek, H.; Fatemi, H.; Corporaal, H.; Jonker, P.P.

    2003-01-01

    There is a rapidly growing demand for smart cameras in various surveillance and identification applications. Although such cameras have a small form factor, most of these applications demand huge processing performance for real-time processing. Face recognition is one of those applications. In this

  11. Combining Front Vehicle Detection with 3D Pose Estimation for a Better Driver Assistance

    Directory of Open Access Journals (Sweden)

    Yu Peng

    2012-09-01

    Driver assistance systems enhance traffic safety and efficiency. The accurate 3D pose of the vehicle in front can help a driver make the right decision on the road. We propose a novel real-time system to estimate the 3D pose of the front vehicle. This system consists of two parallel threads: vehicle-rear tracking and mapping. The vehicle rear is first identified in the video captured by an onboard camera, after license-plate localization and foreground extraction. The 3D pose estimation technique is then applied to the extracted vehicle rear. Most current 3D pose estimation techniques need prior models or a stereo initialization with user cooperation. It is extremely difficult to obtain prior models due to the varying appearance of vehicles' rears, and it is unsafe to ask for a driver's cooperation while the vehicle is moving. In our system, two initial keyframes for the stereo algorithm are automatically extracted by vehicle-rear detection and tracking. Map points are defined as a collection of point features extracted from the vehicle's rear together with their 3D information; they relate the 2D features detected in subsequent views of the vehicle's rear to the 3D world. The relative 3D pose of the onboard camera with respect to the front vehicle's rear is then estimated by matching the map points with point features detected on the front vehicle's rear. We demonstrate the capabilities of our system by testing on real and synthesized videos. To make the experimental analysis visible, we display the estimated 3D pose through augmented reality, which requires accurate and real-time 3D pose estimation.
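
    Matching stored 3D map points to 2D features on the vehicle rear is an instance of the Perspective-n-Point problem; a minimal OpenCV sketch (input names are placeholders, and solvePnP stands in for whatever estimator the authors actually use):

    ```python
    import cv2
    import numpy as np

    def rear_pose(map_points_3d, image_points_2d, K, dist=None):
        """Relative pose of the onboard camera w.r.t. the front vehicle's rear,
        from >= 4 3D map points matched to 2D features in the current frame."""
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(map_points_3d, dtype=np.float32),
            np.asarray(image_points_2d, dtype=np.float32),
            K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
        if not ok:
            raise RuntimeError("PnP failed")
        R, _ = cv2.Rodrigues(rvec)  # rotation vector -> rotation matrix
        return R, tvec
    ```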

  12. Calibration of an Outdoor Distributed Camera Network with a 3D Point Cloud

    Directory of Open Access Journals (Sweden)

    Agustín Ortega

    2014-07-01

    Outdoor camera networks are becoming ubiquitous in critical urban areas of the largest cities around the world. Although current applications of camera networks are mostly tailored to video surveillance, recent research projects are exploiting their use to aid robotic systems in people-assisting tasks. Such systems require precise calibration of the internal and external parameters of the distributed camera network. Despite the fact that camera calibration has been an extensively studied topic, the development of practical methods for user-assisted calibration that minimize user intervention time and maximize precision still poses significant challenges. These camera systems have non-overlapping fields of view, are subject to environmental stress, and are likely to require frequent recalibration. In this paper, we propose the use of a 3D map covering the area to support the calibration process and develop an automated method that allows quick and precise calibration of a large camera network. We present two case studies of the proposed calibration method: the first is the calibration of the Barcelona Robot Lab camera network, which also includes direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms. The second consists of improving the GPS positioning of geo-tagged images taken with a mobile device in the Facultat de Matemàtiques i Estadística (FME) patio at the Universitat Politècnica de Catalunya (UPC).
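
    The direct image-to-ground mappings mentioned above are plane-to-plane homographies; the OpenCV sketch below shows the idea with hypothetical correspondences between pixels and known ground-plane positions.

    ```python
    import cv2
    import numpy as np

    # >= 4 pixel positions in one camera and their ground-plane coordinates (m);
    # the values here are purely illustrative
    img_pts    = np.array([[102, 540], [870, 512], [640, 230], [140, 250]], np.float32)
    ground_pts = np.array([[0.0, 0.0], [8.0, 0.0], [8.0, 15.0], [0.0, 15.0]], np.float32)

    H, _ = cv2.findHomography(img_pts, ground_pts, method=0)

    def image_to_ground(u, v):
        """Map an image detection (u, v) to world coordinates on the walking plane."""
        p = H @ np.array([u, v, 1.0])
        return p[:2] / p[2]
    ```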

  13. 3D multi-view convolutional neural networks for lung nodule classification

    Science.gov (United States)

    Kang, Guixia; Hou, Beibei; Zhang, Ningbo

    2017-01-01

    The 3D convolutional neural network (CNN) is able to make full use of the spatial 3D context information of lung nodules, and the multi-view strategy has been shown to be useful for improving the performance of 2D CNNs in classifying lung nodules. In this paper, we explore the classification of lung nodules using 3D multi-view convolutional neural networks (MV-CNN) with both chain and directed-acyclic-graph architectures, including 3D Inception and 3D Inception-ResNet. All networks employ the multi-view-one-network strategy. We conduct a binary classification (benign and malignant) and a ternary classification (benign, primary malignant, and metastatic malignant) on Computed Tomography (CT) images from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) database. All results are obtained via 10-fold cross-validation. As regards the MV-CNN with chain architecture, results show that the performance of the 3D MV-CNN surpasses that of the 2D MV-CNN by a significant margin. Finally, a 3D Inception network achieved an error rate of 4.59% for the binary classification and 7.70% for the ternary classification, both of which represent superior results for the corresponding tasks. We also compare the multi-view-one-network strategy with the one-view-one-network strategy; the results reveal that the multi-view-one-network strategy achieves a lower error rate. PMID:29145492

  14. An embedded real-time red peach detection system based on an OV7670 camera, ARM Cortex-M4 processor and 3D Look-Up Tables

    OpenAIRE

    Teixidó Cairol, Mercè; Font Calafell, Davinia; Pallejà Cabrè, Tomàs; Tresánchez Ribes, Marcel; Nogués Aymamí, Miquel; Palacín Roca, Jordi

    2012-01-01

    This work proposes the development of an embedded real-time fruit detection system for future automatic fruit harvesting. The proposed embedded system is based on an ARM Cortex-M4 (STM32F407VGT6) processor and an Omnivision OV7670 color camera. The future goal of this embedded vision system will be to control a robotized arm to automatically select and pick some fruit directly from the tree. The complete embedded system has been designed to be placed directly in the gripper tool of the future...
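
    The record is truncated before giving details, but the 3D look-up table named in the title is a standard trick for fast color classification on small processors: precompute a boolean table over quantized RGB space, then classify each pixel with a single memory access. A hedged NumPy sketch (quantization depth and function names are assumptions):

    ```python
    import numpy as np

    def build_rgb_lut(fruit_pixels, bits=5):
        """Mark every quantized RGB cell observed in labeled 'red peach' pixels.
        With bits=5 the table has 32**3 = 32768 entries, small enough for an MCU."""
        size = 1 << bits
        lut = np.zeros((size, size, size), dtype=bool)
        q = fruit_pixels >> (8 - bits)  # quantize 8-bit channels down to 'bits'
        lut[q[:, 0], q[:, 1], q[:, 2]] = True
        return lut

    def classify(image_rgb, lut, bits=5):
        """Per-pixel fruit/background mask via one table look-up per pixel."""
        q = image_rgb >> (8 - bits)
        return lut[q[..., 0], q[..., 1], q[..., 2]]
    ```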

  15. Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion.

    Science.gov (United States)

    Leimkuhler, Thomas; Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter

    2018-06-01

    We propose a system to infer binocular disparity from a monocular video stream in real time. Unlike classic reconstruction of physical depth in computer vision, we compute perceptually plausible disparity that is numerically inaccurate but produces a very similar overall depth impression, with plausible overall layout, sharp edges, fine details, and agreement between luminance and disparity. We use several simple monocular cues to estimate disparity maps and confidence maps of low spatial and temporal resolution in real time. These are complemented by spatially varying, appearance-dependent and class-specific disparity prior maps, learned from example stereo images. Scene classification selects this prior at runtime. Fusion of prior and cues is done by means of robust MAP inference on a dense spatio-temporal conditional random field with high spatial and temporal resolution. Using normal distributions allows this in constant-time, parallel per-pixel work. We compare our approach to previous 2D-to-3D conversion systems in terms of different metrics, as well as a user study, and validate our notion of perceptually plausible disparity.

  16. Human Body 3D Posture Estimation Using Significant Points and Two Cameras

    Science.gov (United States)

    Juang, Chia-Feng; Chen, Teng-Chang; Du, Wei-Chin

    2014-01-01

    This paper proposes a three-dimensional (3D) human posture estimation system that locates 3D significant body points based on 2D body contours extracted from two cameras without using any depth sensors. The 3D significant body points that are located by this system include the head, the center of the body, the tips of the feet, the tips of the hands, the elbows, and the knees. First, a linear support vector machine- (SVM-) based segmentation method is proposed to distinguish the human body from the background in red, green, and blue (RGB) color space. The SVM-based segmentation method uses not only normalized color differences but also included angle between pixels in the current frame and the background in order to reduce shadow influence. After segmentation, 2D significant points in each of the two extracted images are located. A significant point volume matching (SPVM) method is then proposed to reconstruct the 3D significant body point locations by using 2D posture estimation results. Experimental results show that the proposed SVM-based segmentation method shows better performance than other gray level- and RGB-based segmentation approaches. This paper also shows the effectiveness of the 3D posture estimation results in different postures. PMID:24883422

  17. Strain measurement of abdominal aortic aneurysm with real-time 3D ultrasound speckle tracking.

    Science.gov (United States)

    Bihari, P; Shelke, A; Nwe, T H; Mularczyk, M; Nelson, K; Schmandra, T; Knez, P; Schmitz-Rixen, T

    2013-04-01

    Abdominal aortic aneurysm rupture is caused by mechanical vascular tissue failure. Although mechanical properties within the aneurysm vary, currently available ultrasound methods assess only one cross-sectional segment of the aorta. This study aims to establish real-time 3-dimensional (3D) speckle tracking ultrasound to explore local displacement and strain parameters of the whole abdominal aortic aneurysm. Validation was performed on a silicone aneurysm model, perfused in a pulsatile artificial circulatory system. Wall motion of the silicone model was measured simultaneously with a commercial real-time 3D speckle tracking ultrasound system and either with laser-scan micrometry or with video photogrammetry. After validation, 3D ultrasound data were collected from abdominal aortic aneurysms of five patients and displacement and strain parameters were analysed. Displacement parameters measured in vitro by 3D ultrasound and laser scan micrometer or video analysis were significantly correlated at pulse pressures between 40 and 80 mmHg. Strong local differences in displacement and strain were identified within the aortic aneurysms of patients. Local wall strain of the whole abdominal aortic aneurysm can be analysed in vivo with real-time 3D ultrasound speckle tracking imaging, offering the prospect of individual non-invasive rupture risk analysis of abdominal aortic aneurysms. Copyright © 2013 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.

  18. SU-D-213-03: Towards An Optimized 3D Scintillation Dosimetry Tool for Quality Assurance of Dynamic Radiotherapy Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Rilling, M [Département de physique, de génie physique et d’optique, Université Laval, Quebec City, QC (Canada); Centre de Recherche sur le Cancer, Hôtel-Dieu de Québec, Quebec City, QC (Canada); Département de radio-oncologie, CHU de Québec, Quebec City, QC (Canada); Center for Optics, Photonics and Lasers, Université Laval, Quebec City, QC, CA (Canada); Goulet, M [Département de radio-oncologie, CHU de Québec, Quebec City, QC (Canada); Thibault, S [Département de physique, de génie physique et d’optique, Université Laval, Quebec City, QC (Canada); Center for Optics, Photonics and Lasers, Université Laval, Quebec City, QC, CA (Canada); Archambault, L [Département de physique, de génie physique et d’optique, Université Laval, Quebec City, QC (Canada); Centre de Recherche sur le Cancer, Hôtel-Dieu de Québec, Quebec City, QC (Canada); Département de radio-oncologie, CHU de Québec, Quebec City, QC (Canada)

    2015-06-15

    Purpose: The purpose of this work is to simulate a multi-focus plenoptic camera used as the measuring device in a real-time three-dimensional scintillation dosimeter. Simulating and optimizing this realistic optical system will bridge the technological gap between concept validation and a clinically viable tool that can provide highly efficient, accurate and precise measurements for dynamic radiotherapy techniques. Methods: The experimental prototype, previously developed for proof of concept purposes, uses an off-the-shelf multi-focus plenoptic camera. With an array of interleaved microlenses of different focal lengths, this camera records spatial and angular information of light emitted by a plastic scintillator volume. The three distinct microlens focal lengths were determined experimentally for use as baseline parameters by measuring image-to-object magnification for different distances in object space. A simulated plenoptic system was implemented using the non-sequential ray tracing software Zemax: this tool allows complete simulation of multiple optical paths by modeling interactions at interfaces such as scatter, diffraction, reflection and refraction. The active sensor was modeled based on the camera manufacturer specifications by a 2048×2048, 5 µm-pixel pitch sensor. Planar light sources, simulating the plastic scintillator volume, were employed for ray tracing simulations. Results: The microlens focal lengths were determined to be 384, 327 and 290 µm. A realistic multi-focus plenoptic system, with independently defined and optimizable specifications, was fully simulated. A f/2.9 and 54 mm-focal length Double Gauss objective was modeled as the system’s main lens. A three-focal length hexagonal microlens array of 250-µm thickness was designed, acting as an image-relay system between the main lens and sensor. Conclusion: Simulation of a fully modeled multi-focus plenoptic camera enables the decoupled optimization of the main lens and microlens

  19. Pipeline inwall 3D measurement system based on the cross structured light

    Science.gov (United States)

    Shen, Da; Lin, Zhipeng; Xue, Lei; Zheng, Qiang; Wang, Zichi

    2014-01-01

    In order to accurately detect defects on a pipeline's inner wall, this paper proposes a measurement system made up of a cross structured-light projector, a single CCD camera, a smart car, etc. Based on structured-light measurement technology, the paper mainly introduces the structured-light measurement system, the imaging mathematical model, and the parameters and method of camera calibration. Using these measuring principles and methods, the camera on the remote-controlled car platform continuously images the pipe wall with real-time processing, and the established model is used to extract 3D point cloud coordinates and reconstruct pipeline defects, making automatic 3D measurement possible and verifying the correctness and feasibility of the system. The system has shown good measurement accuracy in practice.
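
    The core of a structured-light measurement of this kind is triangulation: each pixel on the imaged laser stripe defines a camera ray, which is intersected with the calibrated light plane. A minimal sketch; the intrinsic matrix and plane parameters below are hypothetical calibration values.

    ```python
    import numpy as np

    def triangulate_stripe_point(u, v, K, plane_n, plane_d):
        """Back-project stripe pixel (u, v) to 3D by intersecting the camera ray
        with the calibrated light plane n . X = d (camera frame, meters)."""
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction from origin
        t = plane_d / (plane_n @ ray)                   # ray-plane intersection
        return t * ray                                  # 3D point on the pipe wall

    # Hypothetical calibration: 600-px focal length, plane tilted toward the camera
    K = np.array([[600, 0, 320], [0, 600, 240], [0, 0, 1]], dtype=float)
    print(triangulate_stripe_point(350, 260, K, np.array([0.0, -0.8, 0.6]), 0.12))
    ```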

  20. TLS for generating multi-LOD of 3D building model

    International Nuclear Information System (INIS)

    Akmalia, R; Setan, H; Majid, Z; Suwardhi, D; Chong, A

    2014-01-01

    Terrestrial Laser Scanners (TLS) are widely used to capture three-dimensional (3D) objects for various applications. Developments in 3D modelling have also led people to visualize the environment in 3D. Visualizing a city environment in 3D can be useful for many applications; however, different applications require different kinds of 3D models. Since buildings are important objects, CityGML defines a standard for 3D building models at four levels of detail (LOD). In this research, the advantages of TLS for capturing buildings and for modelling the resulting point cloud are explored. TLS is used to capture all building details and generate multiple LODs. In previous work, this task usually involved integrating several sensors; here, the point cloud from TLS alone is processed to generate the LOD3 model, from which LOD2 and LOD1 are then generalized. The result of this research is a guided process for generating multi-LOD 3D building models starting from LOD3 using TLS. Finally, the visualization of the multi-LOD model is also shown.

  1. TLS for generating multi-LOD of 3D building model

    Science.gov (United States)

    Akmalia, R.; Setan, H.; Majid, Z.; Suwardhi, D.; Chong, A.

    2014-02-01

    Terrestrial Laser Scanners (TLS) are widely used to capture three-dimensional (3D) objects for various applications. Developments in 3D modelling have also led people to visualize the environment in 3D. Visualizing a city environment in 3D can be useful for many applications; however, different applications require different kinds of 3D models. Since buildings are important objects, CityGML defines a standard for 3D building models at four levels of detail (LOD). In this research, the advantages of TLS for capturing buildings and for modelling the resulting point cloud are explored. TLS is used to capture all building details and generate multiple LODs. In previous work, this task usually involved integrating several sensors; here, the point cloud from TLS alone is processed to generate the LOD3 model, from which LOD2 and LOD1 are then generalized. The result of this research is a guided process for generating multi-LOD 3D building models starting from LOD3 using TLS. Finally, the visualization of the multi-LOD model is also shown.

  2. Visual tracking for multi-modality computer-assisted image guidance

    Science.gov (United States)

    Basafa, Ehsan; Foroughi, Pezhman; Hossbach, Martin; Bhanushali, Jasmine; Stolka, Philipp

    2017-03-01

    With optical cameras, many interventional navigation tasks previously relying on EM, optical, or mechanical guidance can be performed robustly, quickly, and conveniently. We developed a family of novel guidance systems based on wide-spectrum cameras and vision algorithms for real-time tracking of interventional instruments and multi-modality markers. These navigation systems support the localization of anatomical targets, support placement of imaging probe and instruments, and provide fusion imaging. The unique architecture - low-cost, miniature, in-hand stereo vision cameras fitted directly to imaging probes - allows for an intuitive workflow that fits a wide variety of specialties such as anesthesiology, interventional radiology, interventional oncology, emergency medicine, urology, and others, many of which see increasing pressure to utilize medical imaging and especially ultrasound, but have yet to develop the requisite skills for reliable success. We developed a modular system, consisting of hardware (the Optical Head containing the mini cameras) and software (components for visual instrument tracking with or without specialized visual features, fully automated marker segmentation from a variety of 3D imaging modalities, visual observation of meshes of widely separated markers, instant automatic registration, and target tracking and guidance on real-time multi-modality fusion views). From these components, we implemented a family of distinct clinical and pre-clinical systems (for combinations of ultrasound, CT, CBCT, and MRI), most of which have international regulatory clearance for clinical use. We present technical and clinical results on phantoms, ex- and in-vivo animals, and patients.

  3. Multiple Camera Person Tracking in Multiple Layers Combining 2D and 3D Information

    OpenAIRE

    Arsić, Dejan; Schuller, Björn; Rigoll, Gerhard

    2008-01-01

    CCTV systems have been introduced in most public spaces in order to increase security. Video outputs are observed by human operators where possible, but are mostly used as a forensic tool. It therefore seems desirable to automate video surveillance systems, in order to detect potentially dangerous situations as soon as possible. Multi-camera systems seem to be a prerequisite for large spaces where occlusions appear frequently. In this treatise we present ...

  4. Real-Time 3D Visualization

    Science.gov (United States)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end- to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  5. Spatiotemporal Segmentation and Modeling of the Mitral Valve in Real-Time 3D Echocardiographic Images.

    Science.gov (United States)

    Pouch, Alison M; Aly, Ahmed H; Lai, Eric K; Yushkevich, Natalie; Stoffers, Rutger H; Gorman, Joseph H; Cheung, Albert T; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2017-09-01

    Transesophageal echocardiography is the primary imaging modality for preoperative assessment of mitral valves with ischemic mitral regurgitation (IMR). While there are well known echocardiographic insights into the 3D morphology of mitral valves with IMR, such as annular dilation and leaflet tethering, less is understood about how quantification of valve dynamics can inform surgical treatment of IMR or predict short-term recurrence of the disease. As a step towards filling this knowledge gap, we present a novel framework for 4D segmentation and geometric modeling of the mitral valve in real-time 3D echocardiography (rt-3DE). The framework integrates multi-atlas label fusion and template-based medial modeling to generate quantitatively descriptive models of valve dynamics. The novelty of this work is that temporal consistency in the rt-3DE segmentations is enforced during both the segmentation and modeling stages with the use of groupwise label fusion and Kalman filtering. The algorithm is evaluated on rt-3DE data series from 10 patients: five with normal mitral valve morphology and five with severe IMR. In these 10 data series that total 207 individual 3DE images, each 3DE segmentation is validated against manual tracing and temporal consistency between segmentations is demonstrated. The ultimate goal is to generate accurate and consistent representations of valve dynamics that can both visually and quantitatively provide insight into normal and pathological valve function.

  6. Fuzzy logic based power-efficient real-time multi-core system

    CERN Document Server

    Ahmed, Jameel; Najam, Shaheryar; Najam, Zohaib

    2017-01-01

    This book focuses on identifying the performance challenges involved in computer architectures and optimal configuration settings, and on analysing their impact on the performance of multi-core architectures. Proposing a power- and throughput-aware fuzzy-logic-based reconfiguration for Multi-Processor Systems on Chip (MPSoCs) in both simulation and real-time environments, it is divided into two major parts. The first part deals with the simulation-based power- and throughput-aware fuzzy logic reconfiguration for multi-core architectures, presenting the results of a detailed analysis of the factors impacting the power consumption and performance of MPSoCs. In turn, the second part highlights the real-time implementation of fuzzy-logic-based power-efficient reconfigurable multi-core architectures for Intel and Leon3 processors.

  7. An ROI multi-resolution compression method for 3D-HEVC

    Science.gov (United States)

    Ti, Chunli; Guan, Yudong; Xu, Guodong; Teng, Yidan; Miao, Xinyuan

    2017-09-01

    3D High Efficiency Video Coding (3D-HEVC) offers significant potential for increasing the compression ratio of multi-view RGB-D videos. However, the bit rate still rises dramatically with video resolution, which challenges the transmission network, especially mobile networks. This paper proposes an ROI multi-resolution compression method for 3D-HEVC to better preserve the information in the ROI under limited bandwidth. This is realized primarily through ROI extraction and through compressing multi-resolution preprocessed video as alternative data according to network conditions. First, semantic contours are detected by modified structured forests to suppress the color textures inside objects. The ROI is then determined from the contour neighborhood together with the face region and the foreground area of the scene. Second, the RGB-D videos are divided into slices and compressed via 3D-HEVC at different resolutions, for selection by audiences and applications. Afterwards, the reconstructed low-resolution videos from the 3D-HEVC encoder are directly up-sampled via Laplace transformation and used to replace the non-ROI areas of the high-resolution videos. Finally, the ROI multi-resolution compressed slices are obtained by compressing the ROI-preprocessed videos with 3D-HEVC. The temporal and spatial details of non-ROI areas are reduced in the low-resolution videos, so the encoder automatically preserves the ROI better. Experiments indicate that the proposed method keeps the key high-frequency information of subjective significance while reducing the bit rate.
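
    The replacement of non-ROI areas described above can be pictured as compositing an up-sampled low-resolution decode with the full-resolution ROI before the final encode. A simplified OpenCV sketch, with plain bilinear up-sampling standing in for the paper's Laplace-based scheme:

    ```python
    import cv2
    import numpy as np

    def compose_roi_frame(high_res, low_res, roi_mask):
        """Keep the ROI at full resolution; fill the rest of the frame with the
        up-sampled low-resolution reconstruction before re-encoding."""
        h, w = high_res.shape[:2]
        up = cv2.resize(low_res, (w, h), interpolation=cv2.INTER_LINEAR)
        out = up.copy()
        mask = roi_mask.astype(bool)
        out[mask] = high_res[mask]  # restore full detail inside the ROI only
        return out
    ```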

  8. A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology

    Science.gov (United States)

    Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi

    2015-01-01

    Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS, based on tilt-shift photography, to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system including two or four tilt-shift cameras (TSC; camera model: Nikon D90) is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is also essential for the TSCs of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, such as its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-positioning accuracy of the prototype system reaches 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional (MADC II) and proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also expect that using higher-accuracy TSCs in the new MDCS would further improve the accuracy of the resulting photogrammetric products. PMID:25835187

  9. Adaptive Probabilistic Tracking Embedded in Smart Cameras for Distributed Surveillance in a 3D Model

    Directory of Open Access Journals (Sweden)

    Sven Fleck

    2006-12-01

    Tracking applications based on distributed and embedded sensor networks are emerging today, both in the fields of surveillance and industrial vision. Traditional centralized approaches have several drawbacks, due to limited communication bandwidth, computational requirements, and thus limited spatial camera resolution and frame rate. In this article, we present network-enabled smart cameras for probabilistic tracking. They are capable of tracking objects adaptively in real time and offer a very bandwidth-conservative approach, as the whole computation is performed embedded in each smart camera and only the tracking results are transmitted, which are on a higher level of abstraction. Based on this, we present a distributed surveillance system. The smart cameras' tracking results are embedded in an integrated 3D environment as live textures and can be viewed from arbitrary perspectives. Also a georeferenced live visualization embedded in Google Earth is presented.

  10. Adaptive Probabilistic Tracking Embedded in Smart Cameras for Distributed Surveillance in a 3D Model

    Directory of Open Access Journals (Sweden)

    Fleck Sven

    2007-01-01

    Tracking applications based on distributed and embedded sensor networks are emerging today, both in the fields of surveillance and industrial vision. Traditional centralized approaches have several drawbacks, due to limited communication bandwidth, computational requirements, and thus limited spatial camera resolution and frame rate. In this article, we present network-enabled smart cameras for probabilistic tracking. They are capable of tracking objects adaptively in real time and offer a very bandwidth-conservative approach, as the whole computation is performed embedded in each smart camera and only the tracking results are transmitted, which are on a higher level of abstraction. Based on this, we present a distributed surveillance system. The smart cameras' tracking results are embedded in an integrated 3D environment as live textures and can be viewed from arbitrary perspectives. Also a georeferenced live visualization embedded in Google Earth is presented.

  11. Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera

    Science.gov (United States)

    Dziri, Aziz; Duranton, Marc; Chapuis, Roland

    2016-07-01

    Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.

  12. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    Directory of Open Access Journals (Sweden)

    Sergio Orts-Escolano

    2014-04-01

    Full Text Available In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphics processing units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, and analysis and monitoring of movement. These features allow the construction of a robust representation of the environment and the interpretation of the behavior of mobile agents in the scene. The vision module must also be integrated into a global system that operates in a complex environment, receiving images from multiple acquisition devices at video frequency. To offer relevant information to higher-level systems and to monitor and make decisions in real time, it must meet a set of requirements, such as time constraints, high availability, robustness, high processing speed and re-configurability. We have built a system able to represent and analyze the motion in video acquired by a multi-camera network and to process multi-source data in parallel on a multi-GPU architecture.

  13. 3D Face modeling using the multi-deformable method.

    Science.gov (United States)

    Hwang, Jinkyu; Yu, Sunjin; Kim, Joongrock; Lee, Sangyoun

    2012-09-25

    In this paper, we focus on the accuracy of 3D face modeling techniques that use corresponding features in multiple views, which are quite sensitive to feature extraction errors. To solve the problem, we adopt a statistical model-based 3D face modeling approach in a mirror system consisting of two mirrors and a camera. The overall procedure of our 3D facial modeling method has two primary steps: 3D facial shape estimation using a multiple 3D face deformable model, and texture mapping using seamless cloning, a type of gradient-domain blending. To evaluate our method's performance, we generate 3D faces of 30 individuals and then carry out two tests: an accuracy test and a robustness test. Our method shows not only highly accurate 3D face shape results when compared with the ground truth, but also robustness to feature extraction errors. Moreover, the 3D face rendering results intuitively show that our method is more robust to feature extraction errors than other 3D face modeling methods. An additional contribution of our method is that a wide range of face textures can be acquired by the mirror system. Using this texture map, we generate realistic 3D faces for individuals at the end of the paper.

  14. Marginalised Stacked Denoising Autoencoders for Robust Representation of Real-Time Multi-View Action Recognition

    Directory of Open Access Journals (Sweden)

    Feng Gu

    2015-07-01

    Full Text Available Multi-view action recognition has gained great interest in video surveillance, human-computer interaction and multimedia retrieval, where multiple cameras of different types are deployed to provide complementary fields of view. Fusion of multiple camera views evidently leads to more robust decisions on both tracking multiple targets and analysing complex human activities, especially where there are occlusions. In this paper, we incorporate the marginalised stacked denoising autoencoder (mSDA) algorithm to further improve the bag-of-words (BoWs) representation in terms of robustness and usefulness for multi-view action recognition. The resulting representations are fed into three simple fusion strategies, as well as a multiple kernel learning algorithm, at the classification stage. Based on the internal evaluation, the codebook size of BoWs and the number of layers of mSDA may not significantly affect recognition performance. According to results on three multi-view benchmark datasets, the proposed framework improves recognition performance across all three datasets and outputs record recognition performance, beating the state-of-the-art algorithms in the literature. It is also capable of performing real-time action recognition at 33 to 45 frames per second, which could be further improved by using more powerful machines in future applications.
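
    The mSDA building block referenced here has a closed-form, gradient-free training rule (Chen et al., ICML 2012): each layer's denoising mapping is obtained by marginalising feature dropout analytically. Below is a minimal NumPy sketch of that layer under the assumption of column-wise BoW features and a corruption probability p; the function names and the small ridge term are illustrative, not taken from the paper.

        import numpy as np

        def mda_layer(X, p):
            """One marginalized denoising autoencoder layer in closed form.

            X : (d, n) column-wise feature matrix, p : feature corruption probability.
            Returns the (d, d+1) mapping W and the nonlinear hidden representation.
            """
            d, n = X.shape
            Xb = np.vstack([X, np.ones((1, n))])            # append a bias row
            S = Xb @ Xb.T                                   # scatter matrix
            q = np.full(d + 1, 1.0 - p)                     # survival probability per feature
            q[-1] = 1.0                                     # the bias is never corrupted
            Q = S * np.outer(q, q)                          # E[corrupted scatter], off-diagonal
            np.fill_diagonal(Q, q * np.diag(S))             # diagonal uses q_i, not q_i^2
            P = S[:d, :] * q[np.newaxis, :]                 # E[original x corrupted cross term]
            W = P @ np.linalg.inv(Q + 1e-5 * np.eye(d + 1)) # small ridge for stability
            return W, np.tanh(W @ Xb)

        def msda(X, p, layers):
            """Stack mDA layers; concatenating all layer outputs gives the representation."""
            reps, H = [X], X
            for _ in range(layers):
                _, H = mda_layer(H, p)
                reps.append(H)
            return np.vstack(reps)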

  15. A 3D freehand ultrasound system for multi-view reconstructions from sparse 2D scanning planes.

    Science.gov (United States)

    Yu, Honggang; Pattichis, Marios S; Agurto, Carla; Beth Goens, M

    2011-01-20

    A significant limitation of existing 3D ultrasound systems comes from the fact that the majority of them work with fixed acquisition geometries. As a result, users have very limited control over the geometry of the 2D scanning planes. We present a low-cost and flexible ultrasound imaging system that integrates several image processing components to allow 3D reconstructions from limited numbers of 2D image planes and multiple acoustic views. Our approach is based on a 3D freehand ultrasound system that allows users to control the 2D acquisition imaging using conventional 2D probes. For reliable performance, we develop new methods for image segmentation and robust multi-view registration. We first present a new hybrid geometric level-set approach that provides reliable segmentation performance with relatively simple initializations and minimal edge leakage. Optimization of the segmentation model parameters and its effect on performance is carefully discussed. Second, using the segmented images, a new coarse-to-fine automatic multi-view registration method is introduced. The approach uses a 3D Hotelling transform to initialize an optimization search. Then, the fine-scale feature-based registration is performed using a robust, non-linear least squares algorithm. The robustness of the multi-view registration system allows for accurate 3D reconstructions from sparse 2D image planes. Volume measurements from multi-view 3D reconstructions are found to be consistently and significantly more accurate than measurements from single-view reconstructions. The volume error of multi-view reconstruction is measured to be less than 5% of the true volume. We show that volume reconstruction accuracy is a function of the total number of 2D image planes and the number of views for a calibrated phantom. In clinical in-vivo cardiac experiments, we show that volume estimates of the left ventricle from multi-view reconstructions are found to be in better agreement with clinical measurements.
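
    The coarse stage above initializes the multi-view search with a 3D Hotelling (principal-axes) transform. The following is a minimal sketch of that idea, aligning two segmented point sets by centroid and principal axes; names are illustrative, and a real implementation would also resolve the sign ambiguity of each axis (e.g. by testing the right-handed sign combinations).

        import numpy as np

        def principal_frame(pts):
            """Centroid and principal axes (Hotelling/PCA) of an (N, 3) point set."""
            c = pts.mean(axis=0)
            cov = np.cov((pts - c).T)
            _, vecs = np.linalg.eigh(cov)       # columns are axes, ascending eigenvalues
            if np.linalg.det(vecs) < 0:         # keep a right-handed frame
                vecs[:, 0] *= -1
            return c, vecs

        def coarse_align(moving, fixed):
            """Rigid transform (R, t) mapping 'moving' onto 'fixed' via principal frames."""
            cm, Vm = principal_frame(moving)
            cf, Vf = principal_frame(fixed)
            R = Vf @ Vm.T                       # rotate moving axes onto fixed axes
            t = cf - R @ cm
            return R, t                         # apply as: pts @ R.T + t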

  16. Development of a tomographic system adapted to 3D measurement of contaminated wounds based on the CACAO concept (computer-aided collimation gamma camera); Développement à partir du concept CACAO (Caméra À Collimation Assistée par Ordinateur) d'un système tomographique adapté à la mesure 3D de plaies contaminées

    Energy Technology Data Exchange (ETDEWEB)

    Douiri, A

    2002-03-01

    The computer-aided collimation gamma camera (CACAO, from the French acronym) is a gamma camera using a collimator with large holes, a supplementary linear scanning motion during the acquisition, and a dedicated reconstruction program taking full account of the source depth. The CACAO system was introduced to improve both the sensitivity and the resolution in nuclear medicine. This thesis focuses on the design of a fast and robust reconstruction algorithm within the CACAO project. We start with an overview of tomographic imaging techniques in nuclear medicine. After modelling the physical CACAO system, we present the complete reconstruction program, which involves three steps: (1) shift and sum; (2) deconvolution and filtering; (3) rotation and sum. The deconvolution is the critical step, as it decreases the signal-to-noise ratio of the reconstructed images. We propose a regularized multi-channel algorithm to solve the deconvolution problem. We also present a fast algorithm based on spline functions for the shift and rotation steps that preserves the high quality of the reconstructed images. Comparisons of simulated reconstructed images in 2D and 3D for the conventional system (CPHC) and CACAO demonstrate the ability of the CACAO system to increase the quality of SPECT images. Finally, the study concludes with an experimental approach using a pixellated detector conceived for 3D measurement of contaminated wounds, which demonstrates the possible advantages of coupling the CACAO project with pixellated detectors. A variety of applications could benefit fully from the CACAO system, such as low-activity imaging, the use of high-energy gamma isotopes and the visualization of deep organs. The combination of the CACAO system with a pixel detector may open up further possibilities for the future of nuclear medicine. (author)
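
    The deconvolution step is described as the critical, noise-sensitive one; the thesis uses a regularized multi-channel algorithm whose details are not given in the abstract. As a generic stand-in, a single-channel Tikhonov/Wiener-style frequency-domain deconvolution illustrates why regularization is needed to keep noise amplification in check (all names and the default regularization weight are illustrative, not the author's algorithm).

        import numpy as np

        def regularized_deconvolve(blurred, psf, lam=1e-2):
            """Frequency-domain Tikhonov-regularized deconvolution (illustrative stand-in).

            blurred : 2D projection; psf : 2D collimator response, same shape, centered;
            lam : regularization weight trading noise amplification against resolution.
            """
            H = np.fft.fft2(np.fft.ifftshift(psf))      # move centered PSF to the origin
            B = np.fft.fft2(blurred)
            X = np.conj(H) * B / (np.abs(H) ** 2 + lam) # Wiener-like inverse filter
            return np.real(np.fft.ifft2(X))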

  17. Multi-view 3D Human Pose Estimation in Complex Environment

    NARCIS (Netherlands)

    Hofmann, K.M.; Gavrila, D.M.

    2012-01-01

    We introduce a framework for unconstrained 3D human upper-body pose estimation from multiple camera views in complex environments. Its main novelty lies in the integration of three components: single-frame pose recovery, temporal integration and model texture adaptation. Single-frame pose recovery

  18. Test bed for real-time image acquisition and processing systems based on FlexRIO, CameraLink, and EPICS

    International Nuclear Information System (INIS)

    Barrera, E.; Ruiz, M.; Sanz, D.; Vega, J.; Castro, R.; Juárez, E.; Salvador, R.

    2014-01-01

    Highlights:
    • The test bed allows for the validation of real-time image processing techniques.
    • It offers FPGA (FlexRIO) image processing that does not require CPU intervention.
    • It is fully compatible with the architecture of the ITER Fast Controllers.
    • It provides flexibility and easy integration in distributed experiments based on EPICS.

    Abstract: Image diagnostics are becoming standard in nuclear fusion. At present, images are typically analyzed off-line. However, real-time processing is occasionally required (for instance, for hot-spot detection or pattern recognition tasks), which will be the objective for the next generation of fusion devices. In this paper, a test bed for image generation, acquisition, and real-time processing is presented. The proposed solution is built using a Camera Link simulator, a Camera Link frame grabber and a PXIe chassis, and offers a software interface with EPICS. The Camera Link simulator (PCIe card PCIe8 DVa C-Link from Engineering Design Team) generates simulated image data (for example, from video movies stored in fusion databases) using a Camera Link interface to mimic the frame sequences produced by diagnostic cameras. The Camera Link frame grabber (FlexRIO solution from National Instruments) includes a field-programmable gate array (FPGA) for image acquisition using a Camera Link interface; the FPGA allows for the codification of ad hoc image processing algorithms using LabVIEW/FPGA software. The frame grabber is integrated in a PXIe chassis with a system architecture similar to that of the ITER Fast Controllers, and it provides a software interface with EPICS to program all of its functionalities, capture the images, and perform the required image processing. Together, these four elements implement a test bed system that permits the development and validation of real-time image processing techniques in an architecture that is fully compatible with that of the ITER Fast Controllers.

  19. Cloud Forecasting and 3-D Radiative Transfer Model Validation using Citizen-Sourced Imagery

    Science.gov (United States)

    Gasiewski, A. J.; Heymsfield, A.; Newman Frey, K.; Davis, R.; Rapp, J.; Bansemer, A.; Coon, T.; Folsom, R.; Pfeufer, N.; Kalloor, J.

    2017-12-01

    Cloud radiative feedback mechanisms are one of the largest sources of uncertainty in global climate models. Variations in local 3D cloud structure impact the interpretation of NASA CERES and MODIS data for top-of-atmosphere radiation studies over clouds, and much of this uncertainty results from a lack of knowledge of cloud vertical and horizontal structure. Surface-based data on 3D cloud structure from a multi-sensor array of low-latency, ground-based cameras can be used to intercompare radiative transfer models based on MODIS and other satellite data with CERES data, improving the 3D cloud parameterizations. Closely related, forecasting of solar insolation and associated cloud cover on time scales out to 1 hour, with a spatial resolution of 100 meters, is valuable for stabilizing power grids with high solar photovoltaic penetrations. Data for cloud-advection-based solar insolation forecasting, obtained from a bottom-up perspective with the spatial resolution and latency needed to predict high-ramp-rate events, is strongly correlated with cloud-induced fluctuations. The development of grid management practices for improved integration of renewable solar energy thus also benefits from a multi-sensor camera array. The data needs for both 3D cloud radiation modelling and solar forecasting are being addressed using a network of low-cost, upward-looking, visible-light CCD sky cameras positioned at 2 km spacing over an area 30-60 km in size, acquiring imagery at 30-second intervals. Such cameras can be manufactured in quantity, deployed by citizen volunteers at a marginal cost of $200-400, and operated unattended using existing communications infrastructure. A trial phase to understand the potential utility of up-looking multi-sensor visible imagery is underway within this NASA Citizen Science project. To develop the initial data sets necessary to optimally design a multi-sensor cloud camera array, a team of 100 citizen scientists using self-owned PDA cameras is being

  20. Stability Analysis for a Multi-Camera Photogrammetric System

    Directory of Open Access Journals (Sweden)

    Ayman Habib

    2014-08-01

    Full Text Available Consumer-grade digital cameras suffer from geometrical instability that may cause problems when used in photogrammetric applications. This paper provides a comprehensive review of this issue, namely the variation of interior orientation parameters over time; it explains the common ways of coping with the issue and describes existing methods for performing stability analysis for a single camera. The paper then points out the lack of coverage of stability analysis for multi-camera systems, suggests a modification of the collinearity model to be used for the calibration of an entire photogrammetric system, and proposes three methods for system stability analysis. The proposed methods explore the impact of changes in interior orientation and relative orientation/mounting parameters on the reconstruction process. Rather than relying on ground truth in real datasets to check system calibration stability, the proposed methods are simulation-based. Experimental results are shown, in which a multi-camera photogrammetric system was calibrated three times and stability analysis was performed on the system calibration parameters from the three sessions. The proposed simulation-based methods provided results that were compatible with a real-data-based approach for evaluating the impact of changes in the system calibration parameters on the three-dimensional reconstruction.
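
    For reference, the single-camera collinearity model that the paper modifies relates an object point (X, Y, Z) to its image coordinates through the projection center (X_c, Y_c, Z_c), the rotation matrix elements m_ij, the principal distance c and the principal point (x_0, y_0). This is the standard textbook form; the paper's system-level extension, which adds relative orientation/mounting parameters, is only described qualitatively in the abstract.

        \[
        x = x_0 - c\,\frac{m_{11}(X - X_c) + m_{12}(Y - Y_c) + m_{13}(Z - Z_c)}
                          {m_{31}(X - X_c) + m_{32}(Y - Y_c) + m_{33}(Z - Z_c)}, \qquad
        y = y_0 - c\,\frac{m_{21}(X - X_c) + m_{22}(Y - Y_c) + m_{23}(Z - Z_c)}
                          {m_{31}(X - X_c) + m_{32}(Y - Y_c) + m_{33}(Z - Z_c)}
        \]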

  1. Collaboration system for simulation using commercial Web3D

    International Nuclear Information System (INIS)

    Okamoto, Koji; Ohkubo, Kohei

    2004-01-01

    The Web-3D system has been widely used on the internet; it can display a 3D environment easily and intuitively. In order to develop a network collaboration system, a Web-3D system is used as the front end of the visualization tool. The 3D geometries are transferred from the server over HTTP with Viewpoint, a commercial Web-3D product. The simulation results are transferred directly to the client over a TCP/IP socket with JAVA. Viewpoint can be controlled by JAVA, so the transferred simulation data are displayed on the web in real time. The multi-client design enables visualization of real-time simulation results at remote sites: the same results are shown on the remote web sites simultaneously, which means remote collaboration is achievable for real-time simulation. The system also has a feedback mechanism, which controls the simulation parameters remotely. In this prototype system, the key features of the collaboration system are discussed using Viewpoint as the front end. (author)

  2. 3D Printing Multi-Functionality: Embedded RF Antennas and Components

    Science.gov (United States)

    Shemelya, C. M.; Zemba, M.; Liang, M.; Espalin, D.; Kief, C.; Xin, H.; Wicker, R. B.; MacDonald, E. W.

    2015-01-01

    Significant research and press attention has recently focused on the fabrication freedom of Additive Manufacturing (AM) to create both conceptual models and final end-use products. This flexibility allows design modifications to be immediately reflected in 3D-printed structures, creating new paradigms within the manufacturing process. 3D-printed products will inevitably be fabricated locally, with unit-level customization, optimized to unique mission requirements. However, for the technology to be universally adopted, the processes must be enhanced to incorporate additional technologies, such as electronics, actuation, and electromagnetics. Recently, a novel 3D printing platform, Multi3D manufacturing, was funded by the presidential initiative for revitalizing manufacturing in the USA using 3D printing (America Makes, also known as the National Additive Manufacturing Innovation Institute). The Multi3D system specifically targets 3D-printed electronics in arbitrary form; building on the potential of this system, this paper describes RF antennas and components fabricated through the integration of material-extrusion 3D printing with embedded wire, mesh, and RF elements.

  3. An intelligent space for mobile robot localization using a multi-camera system.

    Science.gov (United States)

    Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel

    2014-08-15

    This paper describes an intelligent space whose objective is to localize and control robots or robotic wheelchairs to help people. The intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capture is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Since the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.

  4. An Intelligent Space for Mobile Robot Localization Using a Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Mariana Rampinelli

    2014-08-01

    Full Text Available This paper describes an intelligent space whose objective is to localize and control robots or robotic wheelchairs to help people. The intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capture is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Since the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.
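
    The paper's joint calibration couples pattern detection with robot odometry. As a simpler illustration of the per-camera intrinsic step only, the sketch below runs a standard OpenCV chessboard calibration over images of the robot-carried pattern; file paths and pattern geometry are hypothetical, not the paper's setup.

        import glob

        import cv2
        import numpy as np

        cols, rows, square = 9, 6, 0.025                  # inner corners and square size [m], illustrative
        objp = np.zeros((rows * cols, 3), np.float32)
        objp[:, :2] = np.mgrid[0:cols, 0:rows].T.reshape(-1, 2) * square

        obj_pts, img_pts = [], []
        for fname in sorted(glob.glob("cam01/*.png")):    # hypothetical image folder
            gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, (cols, rows))
            if found:
                corners = cv2.cornerSubPix(
                    gray, corners, (11, 11), (-1, -1),
                    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
                obj_pts.append(objp)
                img_pts.append(corners)

        # Intrinsics K, distortion, and per-view extrinsics for this camera.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts, img_pts, gray.shape[::-1], None, None)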

  5. Multi-person tracking with overlapping cameras in complex, dynamic environments

    NARCIS (Netherlands)

    Liem, M.; Gavrila, D.M.

    2009-01-01

    This paper presents a multi-camera system to track multiple persons in complex, dynamic environments. Position measurements are obtained by carving out the space defined by foreground regions in the overlapping camera views and projecting these onto blobs on the ground plane. Person appearance is

  6. 3D Reconstruction of an Underwater Archaeological Site: Comparison Between Low-Cost Cameras

    Science.gov (United States)

    Capra, A.; Dubbini, M.; Bertacchini, E.; Castagnetti, C.; Mancini, F.

    2015-04-01

    The 3D reconstruction, with metric content, of a submerged area where objects and structures of archaeological interest are found could play an important role in research and study activities, and even in the digitization of cultural heritage. The reconstruction of 3D objects of interest to archaeologists constitutes a starting point for the classification and description of objects in digital format, and for subsequent fruition by users through several media. The starting point is a metric evaluation of the site obtained with photogrammetric surveying and appropriate 3D restitution. The authors have been applying underwater photogrammetric techniques for several years using underwater digital cameras and, in this paper, low-cost (off-the-shelf) digital cameras. Results of tests made on submerged objects with three cameras are presented: a Canon PowerShot G12, an Intova Sport HD and a GoPro HERO 2. The experimentation had the goal of evaluating the precision of self-calibration procedures, essential for multimedia underwater photogrammetry, and of analyzing the quality of the 3D restitution. The precision obtained in the calibration and orientation procedures was assessed using the three cameras and a homogeneous set of control points. Data were processed with Agisoft PhotoScan. Subsequently, 3D models were created, and the models derived from the different cameras were compared. The different potentialities of the cameras used are reported in the discussion section. The 3D restitution of objects and structures was integrated with the sea-bottom morphology in order to achieve a comprehensive description of the site. A possible methodology for the survey and representation of submerged objects is therefore illustrated, considering both an automatic and a semi-automatic approach.

  7. Crop 3D-a LiDAR based platform for 3D high-throughput crop phenotyping.

    Science.gov (United States)

    Guo, Qinghua; Wu, Fangfang; Pang, Shuxin; Zhao, Xiaoqian; Chen, Linhai; Liu, Jin; Xue, Baolin; Xu, Guangcai; Li, Le; Jing, Haichun; Chu, Chengcai

    2018-03-01

    With the growing population and the reduction of arable land, breeding has been considered an effective way to solve the food crisis. As an important part of breeding, high-throughput phenotyping can accelerate the breeding process effectively. Light detection and ranging (LiDAR) is an active remote sensing technology that is capable of acquiring three-dimensional (3D) data accurately and has great potential in crop phenotyping. Given that crop phenotyping based on LiDAR technology is not common in China, we developed a high-throughput crop phenotyping platform, named Crop 3D, which integrates a LiDAR sensor, a high-resolution camera, a thermal camera and a hyperspectral imager. Compared with traditional crop phenotyping techniques, Crop 3D can acquire multi-source phenotypic data over the whole crop growing period and extract plant height, plant width, leaf length, leaf width, leaf area, leaf inclination angle and other parameters for plant biology and genomics analysis. In this paper, we describe the design, functions and testing results of the Crop 3D platform, and briefly discuss the potential applications and future development of the platform in phenotyping. We conclude that platforms integrating LiDAR and traditional remote sensing techniques may be the future trend of high-throughput crop phenotyping.
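
    Plant height, one of the traits listed above, is commonly derived from a LiDAR point cloud as the difference between a canopy-top percentile and an estimated ground level. A minimal sketch under that assumption follows; the percentile choices and function name are illustrative, not Crop 3D's actual pipeline.

        import numpy as np

        def plant_height(points, ground_pct=1.0, canopy_pct=99.5):
            """Estimate plant height from an (N, 3) LiDAR point cloud of one plot.

            Percentiles make the estimate robust to stray returns; the
            thresholds here are illustrative defaults.
            """
            z = points[:, 2]
            ground = np.percentile(z, ground_pct)   # approximate soil level
            canopy = np.percentile(z, canopy_pct)   # approximate canopy top
            return canopy - ground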

  8. Evaluation of accuracy about 2D vs 3D real-time position management system based on couch rotation when non-coplanar respiratory gated radiation therapy

    International Nuclear Information System (INIS)

    Kwon, Kyung Tae; Kim, Jung Soo; Sim, Hyun Sun; Min, Jung Whan; Son, Soon Yong; Han, Dong Kyoon

    2016-01-01

    In non-coplanar respiratory-gated radiation therapy, rotating the couch changes the distance between the infrared camera of the RPM (Real-time Position Management) system and the marker, which affects the recognition of marker movement. The purpose of this paper is to evaluate the accuracy of motion reflections (baseline changes) of a 2D gating configuration (two-dot marker block) and a 3D gating configuration (six-dot marker block). The motion was measured by varying the couch angle in the clockwise and counterclockwise directions in 10° steps in the 2D gating configuration. In the 3D gating configuration, the couch angle was changed in 10° steps in the clockwise direction and compared with the baseline at the reference 0°. The reference amplitude was 1.173 to 1.165; the amplitude was 1.132 at a couch angle of 20° and 1.083 at 30°. Counterclockwise from 350°, the reference amplitude was 1.168 to 1.157; the amplitude was 1.124 at a couch angle of 340° and 1.079 at 330°. In this study, a phantom is used to quantitatively evaluate the amplitude value as a function of couch rotation.

  9. Evaluation of accuracy about 2D vs 3D real-time position management system based on couch rotation when non-coplanar respiratory gated radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Kyung Tae; Kim, Jung Soo [Dongnam Health University, Suwon (Korea, Republic of); Sim, Hyun Sun [College of Health Sciences, Korea University, Seoul (Korea, Republic of); Min, Jung Whan [Shingu University College, Sungnam (Korea, Republic of); Son, Soon Yong [Wonkwang Health Science University, Iksan (Korea, Republic of); Han, Dong Kyoon [College of Health Sciences, EulJi University, Daejeon (Korea, Republic of)

    2016-12-15

    In non-coplanar respiratory-gated radiation therapy, rotating the couch changes the distance between the infrared camera of the RPM (Real-time Position Management) system and the marker, which affects the recognition of marker movement. The purpose of this paper is to evaluate the accuracy of motion reflections (baseline changes) of a 2D gating configuration (two-dot marker block) and a 3D gating configuration (six-dot marker block). The motion was measured by varying the couch angle in the clockwise and counterclockwise directions in 10° steps in the 2D gating configuration. In the 3D gating configuration, the couch angle was changed in 10° steps in the clockwise direction and compared with the baseline at the reference 0°. The reference amplitude was 1.173 to 1.165; the amplitude was 1.132 at a couch angle of 20° and 1.083 at 30°. Counterclockwise from 350°, the reference amplitude was 1.168 to 1.157; the amplitude was 1.124 at a couch angle of 340° and 1.079 at 330°. In this study, a phantom is used to quantitatively evaluate the amplitude value as a function of couch rotation.
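
    From the amplitudes quoted above, the relative baseline change at each couch angle follows directly; the few lines below make the falloff explicit, taking each direction's reference band midpoint as its baseline (an assumption of this sketch, not stated in the paper).

        cw_ref = (1.173 + 1.165) / 2            # clockwise reference band midpoint
        ccw_ref = (1.168 + 1.157) / 2           # counterclockwise reference band midpoint
        cw = {20: 1.132, 30: 1.083}             # couch angle [deg] -> measured amplitude
        ccw = {340: 1.124, 330: 1.079}

        for ref, series in ((cw_ref, cw), (ccw_ref, ccw)):
            for angle, amp in sorted(series.items()):
                print(f"couch {angle:3d} deg: {100 * (amp - ref) / ref:+.1f} % vs baseline")

    Both directions show a consistent drop of roughly 3% at 20° of rotation and roughly 7% at 30°, which is the baseline change the study quantifies.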

  10. Real-time tracking and fast retrieval of persons in multiple surveillance cameras of a shopping mall

    NARCIS (Netherlands)

    Bouma, H.; Baan, J.; Landsmeer, S.; Kruszynski, K.J.; Antwerpen, G. van; Dijk, J.

    2013-01-01

    The capability to track individuals in CCTV cameras is important for surveillance applications at large areas such as train stations, airports and shopping centers. However, it is laborious to track and trace people over multiple cameras. In this paper, we present a system for real-time tracking and fast retrieval of persons across the multiple surveillance cameras of a shopping mall.

  11. Cost and time-effective method for multi-scale measures of rugosity, fractal dimension, and vector dispersion from coral reef 3D models.

    Directory of Open Access Journals (Sweden)

    G C Young

    Full Text Available We present a method to construct and analyse 3D models of underwater scenes using a single cost-effective camera on a standard laptop with (a) free or low-cost software, (b) no computer programming ability, and (c) minimal man-hours for both filming and analysis. This study focuses on four key structural complexity metrics: point-to-point distances, linear rugosity (R), fractal dimension (D), and vector dispersion (1/k). We present the first assessment of accuracy and precision of structure-from-motion (SfM) 3D models from an uncalibrated GoPro™ camera at a small scale (4 m²) and show that they can provide meaningful, ecologically relevant results. Models had root mean square errors of 1.48 cm in X-Y and 1.35 cm in Z, and accuracies of 86.8% (R), 99.6% (D) at scales of 30-60 cm, 93.6% (D) at scales of 1-5 cm, and 86.9% (1/k). Values of R were compared to in-situ chain-and-tape measurements, while values of D and 1/k were compared with ground truths from 3D-printed objects modelled underwater. All metrics varied less than 3% between independently rendered models. We thereby improve and rigorously validate a tool for ecologists to non-invasively quantify coral reef structural complexity with a variety of multi-scale metrics.
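
    Two of the listed metrics are simple to compute once a model exists: linear rugosity R is the surface-following length of a transect divided by its straight-line length, and vector dispersion 1/k comes from the resultant length of the mesh's unit face normals. A sketch under those standard definitions (array shapes are assumptions of this sketch, not the authors' code):

        import numpy as np

        def linear_rugosity(profile):
            """R = contour length / straight-line length for an (N, 3) transect."""
            segs = np.diff(profile, axis=0)
            contour = np.linalg.norm(segs, axis=1).sum()
            chord = np.linalg.norm(profile[-1] - profile[0])
            return contour / chord

        def vector_dispersion(normals):
            """1/k from (N, 3) unit face normals, with k = (N - 1) / (N - |sum|)."""
            N = len(normals)
            R = np.linalg.norm(normals.sum(axis=0))
            return (N - R) / (N - 1)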

  12. A framework for multi-object tracking over distributed wireless camera networks

    Science.gov (United States)

    Gau, Victor; Hwang, Jenq-Neng

    2010-07-01

    In this paper, we propose a unified framework targeting two important issues in a distributed wireless camera network, i.e., object tracking and network communication, to achieve reliable multi-object tracking over distributed wireless camera networks. In the object tracking part, we propose a fully automated approach for tracking multiple objects across multiple cameras with overlapping and non-overlapping fields of view, without initial training. To effectively exchange tracking information among the distributed cameras, we propose an idle-probability-based broadcasting method, iPro, which adaptively adjusts the broadcast probability to improve broadcast effectiveness in a dense, saturated camera network. Experimental results for multi-object tracking demonstrate the promising performance of our approach on real video sequences for cameras with overlapping and non-overlapping views. The modeling and ns-2 simulation results show that iPro almost approaches the theoretical performance upper bound when cameras are within each other's transmission range. In more general scenarios, e.g., in the case of hidden-node problems, the simulation results show that iPro significantly outperforms standard IEEE 802.11, especially when the number of competing nodes increases.
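
    The abstract does not spell out the iPro update rule, so the following is only a toy illustration of the general idea of gating broadcasts on an estimated channel idle probability; the class name, smoothing rule and parameters are invented for illustration and are not the iPro specification.

        import random

        class IdleProbBroadcaster:
            """Toy sketch: sense the channel each slot, keep a running idle-fraction
            estimate, and transmit pending tracking updates with a probability that
            shrinks as the channel gets busier, reducing collisions in a dense network."""

            def __init__(self, alpha=0.1):
                self.alpha = alpha     # smoothing factor for the idle estimate
                self.p_idle = 1.0      # running estimate of P(channel idle)

            def observe_slot(self, channel_idle):
                sample = 1.0 if channel_idle else 0.0
                self.p_idle += self.alpha * (sample - self.p_idle)

            def should_broadcast(self):
                return random.random() < self.p_idle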

  13. Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms.

    Science.gov (United States)

    Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro

    2012-09-10

    We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images captured by integral photography to 8K images: one increases the number of pixels of each elemental image, and the other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of the IP images and the generation of holograms from them using the fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs, with the FFTs performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system performing all processing from capture to reconstruction of 3D images, and successfully used this system to reconstruct a 3D live scene at 12 frames per second.
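
    Hologram generation from IP images reduces largely to FFT-based wave propagation, which is why CUFFT dominates the GPU budget. Below is a minimal NumPy sketch of one such kernel, Fresnel propagation via the transfer-function method; the authors' exact hologram computation is not specified in the abstract, and all names and constants here are illustrative.

        import numpy as np

        def fresnel_propagate(field, wavelength, z, pitch):
            """Propagate a 2D complex field by distance z (Fresnel transfer function).

            wavelength, z and pitch are in meters. The constant phase exp(ikz) is
            dropped, as it does not affect the reconstructed intensity.
            (On the real system this FFT pair would run on GPUs via CUFFT.)
            """
            ny, nx = field.shape
            fx = np.fft.fftfreq(nx, d=pitch)
            fy = np.fft.fftfreq(ny, d=pitch)
            FX, FY = np.meshgrid(fx, fy)
            H = np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2))
            return np.fft.ifft2(np.fft.fft2(field) * H)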

  14. Camera-based speckle noise reduction for 3-D absolute shape measurements.

    Science.gov (United States)

    Zhang, Hao; Kuschmierz, Robert; Czarske, Jürgen; Fischer, Andreas

    2016-05-30

    Simultaneous position and velocity measurements enable absolute 3-D shape measurements of fast rotating objects, for instance for monitoring the cutting process in a lathe. Laser Doppler distance sensors enable simultaneous position and velocity measurements with a single sensor head by evaluating the scattered light signals. However, the superposition of several speckles with equal Doppler frequency but random phase on the photo detector results in increased velocity and shape uncertainty. In this paper, we present a novel image evaluation method that overcomes the uncertainty limitations due to the speckle effect. For this purpose, the scattered light is detected with a camera instead of single photo detectors. Thus, the Doppler frequency from each speckle can be evaluated separately, and the velocity uncertainty decreases with the square root of the number of camera lines. A reduction of the velocity uncertainty by one order of magnitude is verified by numerical simulations and experimental results. As a result, the measurement uncertainty of the absolute shape is no longer limited by the speckle effect.
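
    The claimed scaling, velocity uncertainty decreasing with the square root of the number of camera lines, can be checked with a toy Monte Carlo that averages independent per-line frequency estimates; all numbers below are illustrative, not from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        f_true, sigma_single, trials = 1.0e6, 5e3, 10000   # Hz; illustrative values

        for n_lines in (1, 4, 16, 64):
            est = rng.normal(f_true, sigma_single, size=(trials, n_lines)).mean(axis=1)
            print(f"{n_lines:3d} lines: std = {est.std():8.1f} Hz "
                  f"(1/sqrt(N) prediction: {sigma_single / np.sqrt(n_lines):8.1f})")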

  15. Development of a compact scintillator-based high-resolution Compton camera for molecular imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kishimoto, A., E-mail: daphne3h-aya@ruri.waseda.jp [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Kataoka, J.; Koide, A.; Sueoka, K.; Iwamoto, Y.; Taya, T. [Research Institute for Science and Engineering, Waseda University, 3-4-1 Ohkubo, Shinjuku, Tokyo (Japan); Ohsuka, S. [Central Research Laboratory, Hamamatsu Photonics K.K., 5000 Hirakuchi, Hamakita-ku, Hamamatsu, Shizuoka (Japan)

    2017-02-11

    The Compton camera, which images the gamma-ray distribution by utilizing the kinematics of Compton scattering, is a promising detector capable of imaging across a wide energy range. In this study, we aim to construct a small-animal molecular imaging system covering a wide energy range by using the Compton camera. We developed a compact medical Compton camera based on a Ce-doped Gd₃Al₂Ga₃O₁₂ (Ce:GAGG) scintillator and a multi-pixel photon counter (MPPC). Basic performance tests confirmed that, at 662 keV, the typical energy resolution was 7.4% (FWHM) and the angular resolution was 4.5° (FWHM). We then used the medical Compton camera to conduct imaging experiments based on a 3-D image reconstruction algorithm using the multi-angle data acquisition method. The results confirmed that, for a ¹³⁷Cs point source at a distance of 4 cm, the image had a spatial resolution of 3.1 mm (FWHM). Furthermore, we succeeded in producing a 3-D multi-color image of simultaneous sources of different energies (²²Na [511 keV], ¹³⁷Cs [662 keV], and ⁵⁴Mn [834 keV]).

  16. A 3D HIDAC-PET camera with sub-millimeter resolution for imaging small animals

    International Nuclear Information System (INIS)

    Jeavons, A.P.; Chandler, R.A.; Dettmar, C.A.R.

    1999-01-01

    A HIDAC-PET camera consisting essentially of 5 million 0.5 mm gas avalanching detectors has been constructed for small-animal imaging. The particular HIDAC advantage, a high 3D spatial resolution, has been improved to 0.95 mm FWHM, and to 0.7 mm FWHM when reconstructing with 3D-OSEM methods incorporating resolution recovery. A depth-of-interaction resolution of 2.5 mm is implicit, due to the laminar construction. Scatter-corrected sensitivity, at 8.9 cps/kBq (i.e., 0.9%) from a central point source, or 7.2 cps/kBq (543 cps/kBq/cm³) from a distributed source (40 mm diameter, 60 mm long), is now much higher than in previous and other work. A field-of-view of 100 mm (adjustable to 200 mm) diameter by 210 mm axially permits whole-body imaging of small animals containing typically 4 MBq of activity, at 40 kcps, of which 16% are random coincidences, with a typical scatter fraction of 44%. Throughout the field-of-view there are no positional distortions, and relative quantitation is uniform to ±3.5%, although some variation of spatial resolution is found. The performance demonstrates that HIDAC technology is quite appropriate for small-animal PET cameras.

  17. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    International Nuclear Information System (INIS)

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon; Ra, Jong Beom; Lee, Jae Young

    2015-01-01

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have limits in either registration speed or performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they attach a 3D US transducer to the patient's body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As features for the rigid registration, they may choose either internal liver vessels or the inferior vena cava; since the latter is especially useful in patients with diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a
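
    The last step above, picking the correct preoperative slice for each live 2D US frame, can be illustrated with a similarity-based search. Real US-to-MR matching is multimodal and typically relies on measures such as mutual information; plain normalized cross-correlation is used below only to keep the sketch short, and all names are illustrative.

        import numpy as np

        def ncc(a, b):
            """Normalized cross-correlation of two equally sized 2D images."""
            a = (a - a.mean()) / (a.std() + 1e-9)
            b = (b - b.mean()) / (b.std() + 1e-9)
            return float((a * b).mean())

        def best_slice(us_frame, candidate_slices):
            """Index of the candidate preoperative slice most similar to the live frame."""
            scores = [ncc(us_frame, s) for s in candidate_slices]
            return int(np.argmax(scores))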

  18. Video Surveillance using a Multi-Camera Tracking and Fusion System

    OpenAIRE

    Zhang , Zhong; Scanlon , Andrew; Yin , Weihong; Yu , Li; Venetianer , Péter L.

    2008-01-01

    International audience; Usage of intelligent video surveillance (IVS) systems is spreading rapidly. These systems are being utilized in a wide range of applications. In most cases, even in multi-camera installations, the video is processed independently in each feed. This paper describes a system that fuses tracking information from multiple cameras, thus vastly expanding its capabilities. The fusion relies on all cameras being calibrated to a site map, while the individual sensors remain lar...

  19. Imaging of oxygenation in 3D tissue models with multi-modal phosphorescent probes

    Science.gov (United States)

    Papkovsky, Dmitri B.; Dmitriev, Ruslan I.; Borisov, Sergei

    2015-03-01

    Cell-penetrating phosphorescence-based probes allow real-time, high-resolution imaging of O2 concentration in respiring cells and 3D tissue models. We have developed a panel of such probes, small-molecule and nanoparticle structures, which have different spectral characteristics, cell-penetrating and tissue-staining behavior. The probes are compatible with conventional live-cell imaging platforms and can be used in different detection modalities, including ratiometric intensity and PLIM (Phosphorescence Lifetime IMaging) under one- or two-photon excitation. The analytical performance of these probes and the utility of the O2 imaging method have been demonstrated with different types of samples: 2D cell cultures, multi-cellular spheroids from cancer cell lines and primary neurons, excised slices from mouse brain, colon and bladder tissue, and live animals. They are particularly useful for hypoxia research, ex-vivo studies of tissue physiology, cell metabolism, cancer and inflammation, and for multiplexing with many conventional fluorophores and markers of cellular function.

  20. 2D array transducers for real-time 3D ultrasound guidance of interventional devices

    Science.gov (United States)

    Light, Edward D.; Smith, Stephen W.

    2009-02-01

    We describe catheter ring arrays for real-time 3D ultrasound guidance of devices such as vascular grafts, heart valves and vena cava filters. We have constructed several prototypes operating at 5 MHz and consisting of 54 elements, using W.L. Gore & Associates, Inc. micro-miniature ribbon cables. We have recently constructed a new transducer using a braided wiring technology from Precision Interconnect. This transducer consists of 54 elements at 4.8 MHz, with a pitch of 0.20 mm and a typical -6 dB bandwidth of 22%. In all cases, the transducer and wiring assembly were integrated with an 11 French catheter of a Cook Medical deployment device for vena cava filters. Preliminary in vivo and in vitro testing is ongoing, including simultaneous 3D ultrasound and x-ray fluoroscopy.

  1. A multi-GPU real-time dose simulation software framework for lung radiotherapy.

    Science.gov (United States)

    Santhanam, A P; Min, Y; Neelakkantan, H; Papp, N; Meeks, S L; Kupelian, P A

    2012-09-01

    Medical simulation frameworks facilitate both the preoperative and postoperative analysis of the patient's pathophysical condition. Of particular importance is the simulation of radiation dose delivery for real-time radiotherapy monitoring and retrospective analyses of the patient's treatment. In this paper, a software framework tailored for the development of simulation-based real-time radiation dose monitoring medical applications is discussed. A multi-GPU-based computational framework coupled with inter-process communication methods is introduced for simulating the radiation dose delivery on a deformable 3D volumetric lung model and its real-time visualization. The model deformation and the corresponding dose calculation are allocated among the GPUs in a task-specific manner and executed in a pipelined fashion. Radiation dose calculations are computed on two different GPU hardware architectures. The integration of this computational framework with a front-end software layer and a back-end patient database repository is also discussed. Real-time simulation of the dose delivered is achieved once every 120 ms using the proposed framework. With a linear increase in the number of GPU cores, the computational time of the simulation decreased linearly. The inter-process communication time also improved with an increase in the hardware memory. Variations in the delivered dose and computational speedup for variations in the data dimensions are investigated using D70 and D90 as well as gEUD as metrics for a set of 14 patients. Computational speed-up increased with an increase in the beam dimensions when compared with a CPU-based commercial software, while the error in the dose calculation remained low. The multi-GPU, lung-model-based radiotherapy simulation is an effective tool for performing both real-time and retrospective analyses.

  2. The Boom in 3D-Printed Sensor Technology

    Science.gov (United States)

    Xu, Yuanyuan; Wu, Xiaoyue; Guo, Xiao; Kong, Bin; Zhang, Min; Qian, Xiang; Mi, Shengli; Sun, Wei

    2017-01-01

    Future sensing applications will include high-performance features, such as toxin detection, real-time monitoring of physiological events, advanced diagnostics, and connected feedback. However, such multi-functional sensors require advancements in sensitivity, specificity, and throughput, with the simultaneous delivery of multiple detections in a short time. Recent advances in 3D printing and electronics have brought us closer to sensors with multiplex advantages, and additive manufacturing approaches offer a new scope for sensor fabrication. To this end, we review recent advances in 3D-printed cutting-edge sensors. These achievements demonstrate the successful application of 3D-printing technology to sensor fabrication, and the selected studies deeply explore the potential for creating sensors with higher performance. Further development of multi-process 3D printing is expected to expand future sensor utility and availability. PMID:28534832

  3. A procedure for generating quantitative 3-D camera views of tokamak divertors

    International Nuclear Information System (INIS)

    Edmonds, P.H.; Medley, S.S.

    1996-05-01

    A procedure is described for precision modeling of the views available to imaging diagnostics that monitor tokamak internal components, particularly high-heat-flux divertor components. These models are required to enable predictions of resolution and viewing angle for the available viewing locations. Because of the oblique views expected for slot divertors, fully 3-D perspective imaging is required. A suite of matched 3-D CAD, graphics and animation applications is used to provide a fast and flexible technique for reproducing these views. An analytic calculation of the resolution and viewing incidence angle is developed to validate the results of the modeling procedures; the calculation is applicable to any viewed surface describable with a coordinate array. The Tokamak Physics Experiment (TPX) diagnostics for infrared viewing are used as an example to demonstrate the implementation of the tools. For the TPX experiment, the available locations are severely constrained by access limitations, and the resulting images are marginal in both resolution and viewing incidence angle. Full coverage of the divertor is possible if an array of cameras is installed at 45° toroidal intervals. Two poloidal locations are required in order to view both the upper and lower divertors. The procedures described here provide a complete design tool for in-vessel viewing, both for camera location and for identification of viewed surfaces. Additionally, these same tools can be used for the interpretation of the actual images obtained by the diagnostic.
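
    The analytic check described above reduces to per-point geometry: the incidence angle between the view ray and the surface normal, and a pixel footprint that grows with range and with 1/cos(incidence). A sketch of that calculation for a surface given as a coordinate array with unit outward normals; the function and parameter names are illustrative, not the paper's formulation.

        import numpy as np

        def view_metrics(points, normals, cam_pos, ifov):
            """Per-point viewing incidence angle [deg] and pixel footprint [m].

            points, normals : (N, 3) surface coordinate array with unit outward normals;
            ifov : instantaneous field of view of one pixel [rad] (illustrative input).
            """
            rays = points - cam_pos
            dist = np.linalg.norm(rays, axis=1)
            rays /= dist[:, None]                             # unit view directions
            cos_inc = np.clip(-(rays * normals).sum(axis=1), 1e-6, 1.0)
            incidence = np.degrees(np.arccos(cos_inc))
            footprint = dist * ifov / cos_inc                 # along-surface pixel size
            return incidence, footprint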

  4. A detailed comparison of single-camera light-field PIV and tomographic PIV

    Science.gov (United States)

    Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.

    2018-03-01

    This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, which extensively examine the difference between the two techniques by varying key parameters such as the pixel-to-microlens ratio (PMR), the light-field-camera-to-Tomo-camera pixel ratio (LTPR), the particle seeding density and the number of tomographic cameras. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with the single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.

  5. Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-06-01

    Full Text Available Since it is impossible for surveillance personnel to keep monitoring videos from a multiple-camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple-camera-based surveillance system, an object detected in one camera has a different shape in another camera, which is a critical issue for wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method that extracts the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human-object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and that automatic calibration-based normalization of metadata enables successful retrieval and tracking of a human object in the multiple-camera video surveillance system.

  6. Using drone-mounted cameras for on-site body documentation: 3D mapping and active survey.

    Science.gov (United States)

    Urbanová, Petra; Jurda, Mikoláš; Vojtíšek, Tomáš; Krajsa, Jan

    2017-12-01

    Recent advances in unmanned aerial technology have substantially lowered the cost associated with aerial imagery. As a result, forensic practitioners today have easy, low-cost access to aerial photographs at remote locations. The present paper aims to explore the boundaries within which low-end drone technology can operate as professional crime scene equipment, and to test the prospects of aerial 3D modeling in the forensic context. The study was based on recent forensic cases of falls from height admitted for postmortem examination. Three mock outdoor forensic scenes featuring a dummy, skeletal remains and artificial blood were constructed at an abandoned quarry and subsequently documented using a commercial DJI Phantom 2 drone equipped with a GoPro HERO 4 digital camera. In two of the experiments, the purpose was to conduct aerial and ground-view photography and to process the acquired images with a photogrammetry protocol (using Agisoft PhotoScan® 1.2.6) in order to generate 3D textured models. The third experiment tested the employment of drone-based video recordings in mapping scattered body parts. The results show that drone-based aerial photography is capable of producing high-quality images appropriate for building accurate large-scale 3D models of a forensic scene. If, however, high-resolution top-down three-dimensional scene documentation featuring details on a corpse or other physical evidence is required, we recommend building a multi-resolution model by processing aerial and ground-view imagery separately. The video survey showed that using an overview recording to seek out scattered body parts was efficient. In contrast, less easy-to-spot evidence, such as bloodstains, was detected only after having been marked properly with crime scene equipment.

  7. Distributed Sensing and Processing for Multi-Camera Networks

    Science.gov (United States)

    Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.

    Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.

  8. Concept of Indoor 3D-Route UAV Scheduling System

    DEFF Research Database (Denmark)

    Khosiawan, Yohanes; Nielsen, Izabela Ewa; Do, Ngoc Ang Dung

    2016-01-01

    environment. On top of that, the multi-source productive best-first-search concept also supports efficient real-time scheduling in response to uncertain events. Without human intervention, the proposed work provides an automatic scheduling system for the UAV routing problem in a 3D indoor environment.

  9. Person and gesture tracking with smart stereo cameras

    Science.gov (United States)

    Gordon, Gaile; Chen, Xiangrong; Buck, Ron

    2008-02-01

    Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small, cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are perfectly suited to provide the low power, small footprint, and low cost required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications. In this paper, we provide some background on the TYZX smart stereo camera platform and describe the person tracking and gesture tracking systems.

  10. The active blind spot camera: hard real-time recognition of moving objects from a moving camera

    OpenAIRE

    Van Beeck, Kristof; Goedemé, Toon; Tuytelaars, Tinne

    2014-01-01

    This PhD research focuses on visual object recognition under specific demanding conditions. Both the object to be recognized and the camera move, and the time available for the recognition task is extremely short. This generic problem is applied here to a specific problem: the active blind spot camera. Statistics show that a large number of accidents with trucks are related to the so-called blind spot, the area around the vehicle in which vulnerable road users are hard to perceive by the truck d...

  11. Real-Time Multi-Directional Equipment Site

    Data.gov (United States)

    Federal Laboratory Consortium — As part of the George E. Brown, Jr. Network for Earthquake Engineering Simulation (NEES) Program, Lehigh University has established the Real-Time Multi-Directional...

  12. Introductory review on 'Flying Triangulation': a motion-robust optical 3D measurement principle

    Science.gov (United States)

    Ettl, Svenja

    2015-04-01

    'Flying Triangulation' (FlyTri) is a recently developed principle which allows for a motion-robust optical 3D measurement of rough surfaces. It combines a simple sensor with sophisticated algorithms: a single-shot sensor acquires 2D camera images. From each camera image, a 3D profile is generated. The series of 3D profiles generated are aligned to one another by algorithms, without relying on any external tracking device. It delivers real-time feedback of the measurement process which enables an all-around measurement of objects. The principle has great potential for small-space acquisition environments, such as the measurement of the interior of a car, and motion-sensitive measurement tasks, such as the intraoral measurement of teeth. This article gives an overview of the basic ideas and applications of FlyTri. The main challenges and their solutions are discussed. Measurement examples are also given to demonstrate the potential of the measurement principle.
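
    The abstract does not detail how successive 3D profiles are aligned; a plausible building block for such registration is the closed-form rigid alignment of corresponding 3D point sets (the Kabsch/Procrustes solution), sketched here as an assumption rather than the published FlyTri algorithm:

      import numpy as np

      def rigid_align(P, Q):
          """Return R, t minimising ||R @ P + t - Q|| for corresponding 3xN point sets."""
          cP, cQ = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
          H = (P - cP) @ (Q - cQ).T                            # cross-covariance
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])   # guard against reflections
          R = Vt.T @ D @ U.T
          t = cQ - R @ cP
          return R, t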

  13. 3D MODELLING OF AN INDOOR SPACE USING A ROTATING STEREO FRAME CAMERA SYSTEM

    Directory of Open Access Journals (Sweden)

    J. Kang

    2016-06-01

    Full Text Available Sophisticated indoor design and growing development in urban architecture make indoor spaces more complex, and indoor spaces are easily connected to public transportation such as subway and train stations. These phenomena allow outdoor activities to be transferred to indoor spaces. Constant development of technology has a significant impact on people's knowledge of services such as location awareness services in indoor spaces. It is therefore necessary to develop a low-cost system to create 3D models of indoor spaces for services based on indoor models. In this paper, we introduce a rotating stereo frame camera system that has two cameras and generate an indoor 3D model using the system. First, we selected a test site and acquired images eight times during one day with different positions and heights of the system. Measurements were complemented by object control points obtained from a total station. As the data were obtained from different positions and heights of the system, it was possible to make various combinations of data and choose several suitable combinations as input data. Next, we generated a 3D model of the test site using commercial software with the previously chosen input data. The last step was to evaluate the accuracy of the generated indoor model from the selected input data. In summary, this paper introduces a low-cost system to acquire indoor spatial data and generate a 3D model using images acquired by the system. Through these experiments, we ensured that the introduced system is suitable for generating indoor spatial information. The proposed low-cost system will be applied to indoor services based on indoor spatial information.

  14. KINECT-BASED REAL-TIME RGB-D IMAGE FUSION METHOD

    Directory of Open Access Journals (Sweden)

    W. Guo

    2012-07-01

    Full Text Available Vision-based 3D reconstruction of indoor environments has developed vigorously. However, algorithmic complexity and the professional knowledge required restrict its practical application. With the proposition of the concept of Volunteered Geographic Information (VGI), the traditional method is no longer suitable for VGI. In this work we therefore utilize a consumer depth camera, the Kinect, to enable non-expert users to reconstruct 3D models of indoor environments from RGB-D data. Considering the possibility of camera tracking failure, we propose a method to perform automatic relocalization.
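
    As a minimal illustration of turning Kinect-style RGB-D data into 3D geometry (a generic back-projection, not the authors' pipeline), each depth pixel can be lifted through the pinhole intrinsics; the focal lengths and principal point below are placeholder values:

      import numpy as np

      def depth_to_points(depth, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
          """Back-project a depth image (in metres) into an Nx3 point cloud."""
          v, u = np.indices(depth.shape)          # pixel row/column grids
          z = depth
          x = (u - cx) * z / fx
          y = (v - cy) * z / fy
          pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
          return pts[pts[:, 2] > 0]               # drop invalid zero-depth pixels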

  15. Design and Development of Multi-Purpose CCD Camera System with Thermoelectric Cooling: Hardware

    Directory of Open Access Journals (Sweden)

    Y.-W. Kang

    2007-12-01

    Full Text Available We designed and developed a multi-purpose CCD camera system for three kinds of CCDs: KAF-0401E (768×512), KAF-1602E (1536×1024) and KAF-3200E (2184×1472), made by Kodak. The system supports a fast USB port as well as a parallel port for data I/O and control signals. The packaging is based on two-stage circuit boards for size reduction and contains a built-in filter wheel. Basic hardware components include a clock pattern circuit, an A/D conversion circuit, a CCD data flow control circuit, and a CCD temperature control unit. The CCD temperature can be controlled with an accuracy of approximately 0.4 °C over a maximum range of ΔT = 33 °C. This CCD camera system has a readout noise of 6 e⁻ and a system gain of 5 e⁻/ADU. A total of 10 CCD camera systems were produced, and our tests show that all of them perform acceptably.
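
    The reported system gain and readout noise translate directly into standard photon-transfer quantities; a small sketch of the usual conversions, using the values from the abstract and assuming a shot-noise-limited signal:

      import numpy as np

      GAIN = 5.0           # e-/ADU, from the abstract
      READ_NOISE_E = 6.0   # e- rms, from the abstract

      def signal_electrons(adu):
          return adu * GAIN

      def total_noise_adu(adu):
          """Shot noise plus readout noise, expressed back in ADU."""
          n_e = signal_electrons(adu)
          return np.sqrt(n_e + READ_NOISE_E**2) / GAIN

      print(total_noise_adu(1000.0))   # predicted noise at a 1000-ADU signal level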

  16. A proposal of decontamination robot using 3D hand-eye-dual-cameras solid recognition and accuracy validation

    International Nuclear Information System (INIS)

    Minami, Mamoru; Nishimura, Kenta; Sunami, Yusuke; Yanou, Akira; Yu, Cui; Yamashita, Manabu; Ishiyama, Shintaro

    2015-01-01

    A new robotic system that uses three-dimensional measurement with solid object recognition, 3D-MoS (Three Dimensional Move on Sensing), based on visual servoing technology was designed, and an on-board hand-eye-dual-cameras robot system has been developed to reduce the risk of radiation exposure during decontamination processes by a filter press machine that solidifies and reduces the volume of contaminated soil. The features of 3D-MoS include: (1) the hand-eye dual cameras take images of the target object near the intersection of both lenses' centerlines; (2) observation at the intersection enables both cameras to see the target object almost at the center of both images; (3) this brings benefits such as reducing the effect of lens aberration and improving the detection accuracy of the three-dimensional position. In this study, an accuracy validation test was performed on the interdigitation of the robot's hand into the filter cloth rod of the filter press; this task is crucial for the robot to remove the contaminated cloth from the filter press machine automatically and for preventing workers from exposure to radiation. The following results were derived: (1) the 3D-MoS-controlled robot could recognize the rod at arbitrary positions within the designated space, and all insertion tests were carried out successfully; (2) the test results also demonstrated that the proposed control guarantees that the interdigitation clearance between the rod and the robot hand can be kept within 1.875 mm with a standard deviation of 0.6 mm or less. (author)

  17. Holovideo: Real-time 3D range video encoding and decoding on GPU

    Science.gov (United States)

    Karpinsky, Nikolaus; Zhang, Song

    2012-02-01

    We present a 3D video-encoding technique called Holovideo that is capable of encoding high-resolution 3D videos into standard 2D videos, and then decoding the 2D videos back into 3D rapidly without significant loss of quality. Due to the nature of the algorithm, 2D video compression such as JPEG encoding with QuickTime Run Length Encoding (QTRLE) can be applied with little quality loss, resulting in an effective way to store 3D video at very small file sizes. We found that under a compression ratio of 134:1, Holovideo to OBJ file format, the 3D geometry quality drops by a negligible amount. Several sets of 3D videos were captured using a structured light scanner, compressed using the Holovideo codec, and then uncompressed and displayed to demonstrate the effectiveness of the codec. With the use of OpenGL Shading Language (GLSL) shaders, the 3D video codec can encode and decode in real time. We demonstrated that for a video size of 512×512, the decoding speed is 28 frames per second (FPS) on a laptop computer using an embedded NVIDIA GeForce 9400M graphics processing unit (GPU). Encoding can be done with this same setup at 18 FPS, making this technology suitable for applications such as interactive 3D video games and 3D video conferencing.
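
    The core trick of packing range data into the channels of a standard 2D colour video can be sketched as follows. This toy version encodes depth as quadrature fringes in two channels and recovers the wrapped value with atan2; it mirrors the general idea but is not the published Holovideo codec, and the fringe pitch is an arbitrary assumption:

      import numpy as np

      P = 32.0  # fringe pitch in depth units (illustrative)

      def encode(depth):
          """Pack a depth map into two 8-bit channels as sine/cosine fringes."""
          phi = 2 * np.pi * depth / P
          r = ((0.5 + 0.5 * np.sin(phi)) * 255).astype(np.uint8)
          g = ((0.5 + 0.5 * np.cos(phi)) * 255).astype(np.uint8)
          return r, g

      def decode(r, g):
          """Recover wrapped depth; a third (stair) channel would unwrap it."""
          phi = np.arctan2(r / 255.0 - 0.5, g / 255.0 - 0.5)
          return (phi % (2 * np.pi)) * P / (2 * np.pi)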

  18. 3D real-time monitoring system for LHD plasma heating experiment

    International Nuclear Information System (INIS)

    Emoto, M.; Narlo, J.; Kaneko, O.; Komori, A.; Iima, M.; Yamaguchi, S.; Sudo, S.

    2001-01-01

    A JAVA-based real-time monitoring system has been in use at the National Institute for Fusion Science, Japan, since the end of March 1998 to maintain stable operations. This system utilizes JAVA technology to realize its platform-independent nature. The main programs are written as JAVA applets and provide human-friendly interfaces. To make the system easier to comprehend at a glance, a 3D feature was added. Since most of the system is written in the JAVA language, we adopted JAVA3D technology, which was easy to incorporate into the currently running systems. With this 3D feature, the operator can more easily find the malfunctioning parts of complex instruments, such as LHD vacuum vessels. This feature is also helpful for recognizing physical phenomena. In this paper, we present an example in which the temperature increases of a vacuum vessel after NBI are visualized.

  19. A new omni-directional multi-camera system for high resolution surveillance

    Science.gov (United States)

    Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2014-05-01

    Omni-directional high resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose are based on a parabolic mirror or fisheye lens, where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems, the image resolution is limited to a single image sensor's resolution. Recently, the Panoptic camera approach that mimics the eyes of flying insects using multiple imagers has been presented. This approach features a novel solution for constructing a spherically arranged wide-FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high resolution visible spectrum imaging and recording system inspired by the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over 17,700×4,650 pixels (82.3 MP). Real-time video capturing capability is also verified at 30 fps for a resolution over 9,000×2,400 pixels (21.6 MP). The next-generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The large capture capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain such as large-perimeter object tracking, very-high-resolution depth map estimation and high-dynamic-range imaging, which are beyond standard stitching and panorama generation methods.

  20. A Low-Cost Panoramic Camera for the 3D Documentation of Contaminated Crime Scenes

    Science.gov (United States)

    Abate, D.; Toschi, I.; Sturdy-Colls, C.; Remondino, F.

    2017-11-01

    Crime scene documentation is a fundamental task which has to be undertaken in a fast, accurate and reliable way, highlighting evidence which can be further used for ensuring justice for victims and for guaranteeing the successful prosecution of perpetrators. The main focus of this paper is on the documentation of a typical crime scene and on the rapid recording of any possible contamination that could have influenced its original appearance. A 3D reconstruction of the environment is first generated by processing panoramas acquired with the low-cost Ricoh Theta 360 camera, and further analysed to highlight the potential and limits of this emerging consumer-grade technology. Then, a methodology is proposed for the rapid recording of changes occurring between the original and the contaminated crime scene. The approach is based on an automatic 3D feature-based data registration, followed by a cloud-to-cloud distance computation, taking as input the 3D point clouds generated before and after, e.g., the misplacement of evidence. All the algorithms adopted for panorama pre-processing, photogrammetric 3D reconstruction, and 3D geometry registration and analysis are presented and are currently available in open-source or low-cost software solutions.
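
    The cloud-to-cloud distance computation used to flag scene changes can be sketched with a nearest-neighbour query; this is a generic version built on scipy, not the paper's exact tool chain, and the change threshold is an assumed value:

      import numpy as np
      from scipy.spatial import cKDTree

      def cloud_to_cloud(reference, contaminated, threshold=0.01):
          """For each point of `contaminated` (Nx3), distance to its nearest
          neighbour in `reference`; points above `threshold` (metres,
          illustrative) are candidate scene changes."""
          dist, _ = cKDTree(reference).query(contaminated, k=1)
          return dist, contaminated[dist > threshold]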

  1. Efficient view based 3-D object retrieval using Hidden Markov Model

    Science.gov (United States)

    Jain, Yogendra Kumar; Singh, Roshan Kumar

    2013-12-01

    Recent research effort has been dedicated to view-based 3-D object retrieval, because of the highly discriminative properties of 3-D objects and their multi-view representation. State-of-the-art methods depend heavily on their own camera array settings for capturing views of a 3-D object and use a complex Zernike descriptor and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. In order to move toward a general framework for efficient 3-D object retrieval that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, meaning that views may be captured from any direction without any camera array restriction. Views (including query views) are clustered to generate view clusters, which are then used to build the query model with an HMM. The HMM is used in two ways: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained using these view clusters, and retrieval operates by combining the query model with the HMM. The proposed approach removes the static camera array setting for view capture and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme shows better performance than existing methods.
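
    The estimate/decode split described above maps naturally onto an off-the-shelf HMM library. A toy sketch with hmmlearn follows; the descriptor dimension, state count and random features are arbitrary assumptions, and this is not the EVBOR implementation:

      import numpy as np
      from hmmlearn import hmm

      rng = np.random.default_rng(0)
      views = rng.standard_normal((40, 16))   # 40 view descriptors, 16-D each (placeholder)
      model = hmm.GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
      model.fit(views)                        # "HMM estimate": train on clustered views
      query = rng.standard_normal((10, 16))   # descriptors of a query object's views
      print(model.score(query))               # "HMM decode": log-likelihood used for ranking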

  2. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    Directory of Open Access Journals (Sweden)

    Yu Lu

    2016-04-01

    Full Text Available A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the influence of the focusing effect on the uniform light from an integrating sphere. The linearity range of the radiometric response, the non-linear response characteristics, the sensitivity, and the dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have also been tested. The actual luminance of the object is retrieved from the sensor calibration results and used to blend images, so that panoramas reflect the scene luminance more faithfully; this overcomes the limitation of stitching methods that produce realistic-looking images only through smoothing. The dynamic range limitation of a single image sensor with a wide-angle lens can be overcome by using multiple cameras that together cover a large field of view; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second.
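
    The calibration steps listed above reduce, at blending time, to inverting the camera response and dividing out the vignetting gain before luminance-based blending. A schematic correction, assuming a simple gamma response and a per-pixel vignette gain map measured in calibration (both placeholders):

      import numpy as np

      def to_luminance(img, vignette, gamma=2.2):
          """Linearise an 8-bit image with an assumed gamma response and
          remove the vignetting gain (1.0 at the optical centre)."""
          linear = (img.astype(np.float64) / 255.0) ** gamma
          return linear / vignette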

  3. ALE3D: An Arbitrary Lagrangian-Eulerian Multi-Physics Code

    Energy Technology Data Exchange (ETDEWEB)

    Noble, Charles R.; Anderson, Andrew T.; Barton, Nathan R.; Bramwell, Jamie A.; Capps, Arlie; Chang, Michael H.; Chou, Jin J.; Dawson, David M.; Diana, Emily R.; Dunn, Timothy A.; Faux, Douglas R.; Fisher, Aaron C.; Greene, Patrick T.; Heinz, Ines; Kanarska, Yuliya; Khairallah, Saad A.; Liu, Benjamin T.; Margraf, Jon D.; Nichols, Albert L.; Nourgaliev, Robert N.; Puso, Michael A.; Reus, James F.; Robinson, Peter B.; Shestakov, Alek I.; Solberg, Jerome M.; Taller, Daniel; Tsuji, Paul H.; White, Christopher A.; White, Jeremy L. [all: Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]

    2017-05-23

    ALE3D is a multi-physics numerical simulation software tool utilizing arbitrary Lagrangian-Eulerian (ALE) techniques. The code is written to address both two-dimensional (2D plane and axisymmetric) and three-dimensional (3D) physics and engineering problems, using a hybrid finite element and finite volume formulation to model the fluid and elastic-plastic response of materials on an unstructured grid. ALE3D is a single code that integrates many physical phenomena.

  4. Using 3D spatial correlations to improve the noise robustness of multi component analysis of 3D multi echo quantitative T2 relaxometry data.

    Science.gov (United States)

    Kumar, Dushyant; Hariharan, Hari; Faizy, Tobias D; Borchert, Patrick; Siemonsen, Susanne; Fiehler, Jens; Reddy, Ravinder; Sedlacik, Jan

    2018-05-12

    We present a computationally feasible and iterative multi-voxel spatially regularized algorithm for myelin water fraction (MWF) reconstruction. This method utilizes the 3D spatial correlations present in anatomical/pathological tissues and the underlying B1+ (flip angle) inhomogeneity to enhance the noise robustness of the reconstruction, while intrinsically accounting for stimulated echo contributions using T2-distribution data alone. Simulated data and in vivo data acquired using 3D non-selective multi-echo spin echo (3DNS-MESE) were used to compare the reconstruction quality of the proposed approach against those of the popular algorithm (the method by Prasloski et al.) and our previously proposed 2D multi-slice spatial regularization approach. We also investigated whether inter-sequence correlations and agreements improved as a result of the proposed approach. MWF quantifications from two sequences, 3DNS-MESE vs 3DNS gradient and spin echo (3DNS-GRASE), were compared for both reconstruction approaches to assess correlations and agreements between inter-sequence MWF-value pairs. MWF values from whole-brain data of six volunteers and two multiple sclerosis patients are reported as well. In comparison with competing approaches such as Prasloski's method or our previously proposed 2D multi-slice spatial regularization method, the proposed method showed better agreement with simulated truths in regression and Bland-Altman analyses. For 3DNS-MESE data, MWF maps reconstructed using the proposed algorithm provided better depictions of white matter structures in subcortical areas adjoining gray matter, which agreed more closely with corresponding contrasts on T2-weighted images than MWF maps reconstructed with the method by Prasloski et al. We also achieved a higher level of correlation and agreement between inter-sequence (3DNS-MESE vs 3DNS-GRASE) MWF-value pairs. The proposed algorithm thus provides more noise-robust MWF reconstruction.

  5. Towards 3C-3D digital holographic fluid velocity vector field measurement—tomographic digital holographic PIV (Tomo-HPIV)

    International Nuclear Information System (INIS)

    Soria, J; Atkinson, C

    2008-01-01

    Most unsteady and/or turbulent flows of geophysical and engineering interest have a highly three-dimensional (3D) complex topology, and their experimental investigation is in pressing need of quantitative velocity measurement methods that are robust and can provide instantaneous 3C-3D velocity field data over a significant volumetric domain of the flow. This paper introduces and demonstrates a new method that uses multiple digital CCD array cameras to record in-line digital holograms of the same volume of seed particles from multiple orientations. This technique uses the same basic equipment as Tomo-PIV minus the camera lenses; it overcomes the depth-of-field problem of digital in-line holography and does not require the complex optical calibration of Tomo-PIV. The digital sensors can be oriented in an optimal manner to overcome the depth-of-field limitation of in-line holograms recorded using digital CCD or CMOS array cameras, resulting in a 3D reconstruction of the seed particles within the volume of interest, which can subsequently be analysed using 3D cross-correlation PIV analysis to yield a 3C-3D velocity field. A demonstration experiment of Tomo-HPIV using uniform translation with nominally 11 µm diameter seed particles shows that the 3D displacement derived from 3D cross-correlation Tomo-HPIV analysis can be measured within 5% of the imposed uniform translation, where the imposed uniform translation has an estimated standard uncertainty of 4.3%. This paper thus proposes a multi-camera digital holographic imaging 3C-3D PIV method, identified as tomographic digital holographic PIV, or Tomo-HPIV.
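
    The 3D cross-correlation PIV analysis at the heart of the method can be sketched as an FFT-based correlation of two reconstructed particle volumes, with the correlation peak giving the mean displacement of the interrogation volume (a generic sketch, not the authors' code):

      import numpy as np

      def volume_displacement(vol_a, vol_b):
          """Integer-voxel shift between two interrogation volumes via
          FFT-based 3D cross-correlation."""
          corr = np.fft.ifftn(np.fft.fftn(vol_a).conj() * np.fft.fftn(vol_b)).real
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          # map wrapped peak indices to signed displacements
          return [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]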

  6. Registration of 3D and Multispectral Data for the Study of Cultural Heritage Surfaces

    Science.gov (United States)

    Chane, Camille Simon; Schütze, Rainer; Boochs, Frank; Marzani, Franck S.

    2013-01-01

    We present a technique for the multi-sensor registration of featureless datasets based on the photogrammetric tracking of the acquisition systems in use. This method is developed for the in situ study of cultural heritage objects and is tested by digitizing a small canvas successively with a 3D digitization system and a multispectral camera while simultaneously tracking the acquisition systems with four cameras and using a cubic target frame with a side length of 500 mm. The achieved tracking accuracy is better than 0.03 mm spatially and 0.150 mrad angularly. This allows us to seamlessly register the 3D acquisitions and to project the multispectral acquisitions on the 3D model. PMID:23322103

  7. Characteristics of a multi-image camera on a CT image

    International Nuclear Information System (INIS)

    Mihara, Kazuhiro; Fujino, Tatsuo; Abe, Katsuhito

    1984-01-01

    A multi-imaging camera was used for obtaining a hard-copy image from the imaging device of a CT scanner. The contrast and brightness of the CRT and the exposure time of the camera were the three important factors which influenced the quality of the hard-copy image. Two kinds of original test patterns were designed to examine the characteristics of these factors. One was a grayscale test pattern used to obtain the density curve; this curve was named the Film-CRT (F-C) curve to distinguish it from the H-D curve. The other was a sharpness test pattern used to examine the relationship between brightness and sharpness. As a result, the slope of the F-C curve became steeper with a decrease in brightness, with an increase in contrast, and with an increase in exposure time. Sharpness became worse with an increase in brightness. Therefore, to obtain a good hard-copy image, the brightness must be set as low as possible, and the contrast and exposure time must be controlled with due consideration given to their characteristics. (author)

  8. Collaborative Multi-Scale 3d City and Infrastructure Modeling and Simulation

    Science.gov (United States)

    Breunig, M.; Borrmann, A.; Rank, E.; Hinz, S.; Kolbe, T.; Schilcher, M.; Mundani, R.-P.; Jubierre, J. R.; Flurl, M.; Thomsen, A.; Donaubauer, A.; Ji, Y.; Urban, S.; Laun, S.; Vilgertshofer, S.; Willenborg, B.; Menninghaus, M.; Steuer, H.; Wursthorn, S.; Leitloff, J.; Al-Doori, M.; Mazroobsemnani, N.

    2017-09-01

    Computer-aided collaborative and multi-scale 3D planning are challenges for complex railway and subway track infrastructure projects in the built environment. Many legal, economic, environmental, and structural requirements have to be taken into account. The stringent use of 3D models in the different phases of the planning process facilitates communication and collaboration between stakeholders such as civil engineers, geological engineers, and decision makers. This paper presents concepts, developments, and experiences gained by an interdisciplinary research group coming from civil engineering informatics and geo-informatics, combining skills of both the Building Information Modeling and the 3D GIS worlds. New approaches, including the development of a collaborative platform and 3D multi-scale modelling, are proposed for collaborative planning and simulation to improve the digital 3D planning of subway tracks and other infrastructures. Experiences during this research and lessons learned are presented, as well as an outlook on future research focusing on Building Information Modeling and 3D GIS applications for cities of the future.

  9. COLLABORATIVE MULTI-SCALE 3D CITY AND INFRASTRUCTURE MODELING AND SIMULATION

    Directory of Open Access Journals (Sweden)

    M. Breunig

    2017-09-01

    Full Text Available Computer-aided collaborative and multi-scale 3D planning are challenges for complex railway and subway track infrastructure projects in the built environment. Many legal, economic, environmental, and structural requirements have to be taken into account. The stringent use of 3D models in the different phases of the planning process facilitates communication and collaboration between stakeholders such as civil engineers, geological engineers, and decision makers. This paper presents concepts, developments, and experiences gained by an interdisciplinary research group coming from civil engineering informatics and geo-informatics, combining skills of both the Building Information Modeling and the 3D GIS worlds. New approaches, including the development of a collaborative platform and 3D multi-scale modelling, are proposed for collaborative planning and simulation to improve the digital 3D planning of subway tracks and other infrastructures. Experiences during this research and lessons learned are presented, as well as an outlook on future research focusing on Building Information Modeling and 3D GIS applications for cities of the future.

  10. Real-time gaze estimation via pupil center tracking

    Directory of Open Access Journals (Sweden)

    Cazzato Dario

    2018-02-01

    Full Text Available Automatic gaze estimation not based on commercial and expensive eye-tracking hardware can enable several applications in the fields of human-computer interaction (HCI) and human behavior analysis. It is therefore not surprising that several related techniques and methods have been investigated in recent years. However, very few camera-based systems proposed in the literature are both real-time and robust. In this work, we propose a real-time, user-calibration-free gaze estimation system that does not need person-dependent calibration, can deal with illumination changes and head pose variations, and can work at a wide range of distances from the camera. Our solution is based on a 3-D appearance-based method that processes images from a built-in laptop camera. Real-time performance is obtained by combining head pose information with geometrical eye features to train a machine learning algorithm. Our method has been validated on a data set of images of users in natural environments, and shows promising results. The possibility of a real-time implementation, combined with the good quality of gaze tracking, makes this system suitable for various HCI applications.

  11. Scattering of π± mesons on D, ³He, ⁴He with a self-shunted streamer chamber in a magnetic field

    International Nuclear Information System (INIS)

    Atanasov, A.; Angelescu, T.; Balea, O.; Balestra, F.; Busso, L.; Garfagnini, R.

    1975-01-01

    A self-shunted streamer chamber has been developed for the study of interactions of π± mesons with D, ³He and ⁴He. The chamber can operate at high beam intensities (10⁵-10⁶ s⁻¹), so processes with small cross sections can be studied.

  12. A Spatial Reference Grid for Real-Time Autonomous Underwater Modeling using 3-D Sonar

    Energy Technology Data Exchange (ETDEWEB)

    Auran, P.G.

    1996-12-31

    The offshore industry has recognized the need for intelligent underwater robotic vehicles. This doctoral thesis deals with autonomous underwater vehicles (AUVs) and concentrates on a data representation for real-time image formation and analysis. Its main objective is to develop a 3-D image representation suitable for autonomous underwater perception, assuming active sonar as the main sensor for perception. The main contributions are: (1) a dynamical image representation for 3-D range data, (2) a basic electronic circuit and software system for 3-D sonar sampling and amplitude thresholding, (3) a model for target reliability, (4) an efficient connected-components algorithm for 3-D segmentation, (5) a method for extracting general 3-D geometrical representations from segmented echo clusters, (6) experimental results of planar and curved target modeling. 142 refs., 120 figs., 10 tabs.

  13. Realtime Reconstruction of an Animating Human Body from a Single Depth Camera.

    Science.gov (United States)

    Chen, Yin; Cheng, Zhi-Quan; Lai, Chao; Martin, Ralph R; Dang, Gang

    2016-08-01

    We present a method for realtime reconstruction of an animating human body, which produces a sequence of deforming meshes representing a given performance captured by a single commodity depth camera. We achieve realtime single-view mesh completion by enhancing the parameterized SCAPE model. Our method, which we call Realtime SCAPE, performs full-body reconstruction without the use of markers. In Realtime SCAPE, the estimations of body shape parameters and pose parameters needed for reconstruction are decoupled. The intrinsic body shape is first precomputed for a given subject, by determining shape parameters with the aid of a body shape database. Subsequently, per-frame pose parameter estimation is performed by means of linear blend skinning (LBS); the problem is decomposed into separately finding skinning weights and transformations. The skinning weights are also determined offline from the body shape database, reducing online reconstruction to simply finding the transformations in LBS. Doing so is formulated as a linear variational problem; carefully designed constraints are used to impose temporal coherence and alleviate artifacts. Experiments demonstrate that our method can produce full-body mesh sequences with high fidelity.
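
    The linear blend skinning step that online reconstruction reduces to has a compact closed form, v'_j = sum_i w_ij (R_i v_j + t_i); a direct sketch (generic LBS, not the paper's solver):

      import numpy as np

      def lbs(vertices, weights, rotations, translations):
          """Linear blend skinning.
          vertices: (V, 3), weights: (V, B), rotations: (B, 3, 3), translations: (B, 3)."""
          # per-bone transformed copies of every vertex: (B, V, 3)
          posed = np.einsum('bij,vj->bvi', rotations, vertices) + translations[:, None, :]
          # blend the copies with per-vertex bone weights
          return np.einsum('vb,bvi->vi', weights, posed)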

  14. Oblique Multi-Camera Systems - Orientation and Dense Matching Issues

    Science.gov (United States)

    Rupnik, E.; Nex, F.; Remondino, F.

    2014-03-01

    The use of oblique imagery has become a standard for many civil and mapping applications, thanks to the development of airborne digital multi-camera systems, as proposed by many companies (Blomoblique, IGI, Leica, Midas, Pictometry, Vexcel/Microsoft, VisionMap, etc.). The indisputable virtue of oblique photography lies in its simplicity of interpretation and understanding, allowing even inexperienced users to employ oblique images in very different applications, such as building detection and reconstruction, building structural damage classification, road land updating and administration services, etc. The paper reports an overview of current commercial oblique systems and presents a workflow for the automated orientation and dense matching of large image blocks. Perspectives, potentialities, pitfalls and suggestions for achieving satisfactory results are given. Tests performed on two datasets acquired with two multi-camera systems over urban areas are also reported.

  15. Real-time construction and visualisation of drift-free video mosaics from unconstrained camera motion

    Directory of Open Access Journals (Sweden)

    Mateusz Brzeszcz

    2015-08-01

    Full Text Available This work proposes a novel approach for real-time video mosaicking that facilitates drift-free mosaic construction and visualisation, with integrated frame blending and redundancy management, and that is shown to be flexible across a range of varying mosaic scenarios. The approach supports unconstrained camera motion with in-sequence loop closing, variation in camera focal distance (zoom) and recovery from video sequence breaks. Real-time performance, over extended duration sequences, is realised via novel aspects of frame management within the mosaic representation, thus avoiding the high data redundancy associated with temporally dense, spatially overlapping video frame inputs. This managed set of image frames is visualised in real time using a dynamic mosaic representation of overlapping textured graphics primitives in place of the traditional globally constructed, and hence frequently reconstructed, mosaic image. Within this formulation, subsequent optimisation occurring during online construction can thus efficiently adjust relative frame positions via simple primitive position transforms. Effective visualisation is similarly facilitated by online inter-frame blending to overcome the illumination and colour variance associated with modern camera hardware. The evaluation illustrates overall robustness in video mosaic construction under a diverse range of conditions, including indoor and outdoor environments, varying illumination and the presence of in-scene motion, on varying computational platforms.
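
    The frame-to-mosaic registration underlying such systems is typically a feature-based homography estimate. A minimal OpenCV sketch of that step (generic, not the paper's pipeline):

      import cv2
      import numpy as np

      def frame_homography(prev_gray, cur_gray):
          """Homography mapping the current frame into the previous frame's
          coordinates, from ORB matches filtered by RANSAC."""
          orb = cv2.ORB_create(1000)
          k1, d1 = orb.detectAndCompute(prev_gray, None)
          k2, d2 = orb.detectAndCompute(cur_gray, None)
          matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
          src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
          dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
          return H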

  16. ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras

    OpenAIRE

    Mur-Artal, Raul; Tardos, Juan D.

    2016-01-01

    We present ORB-SLAM2, a complete SLAM system for monocular, stereo and RGB-D cameras, including map reuse, loop closing and relocalization capabilities. The system works in real-time on standard CPUs in a wide variety of environments, from small hand-held indoor sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our syst...

  17. Real-time tracking and fast retrieval of persons in multiple surveillance cameras of a shopping mall

    Science.gov (United States)

    Bouma, Henri; Baan, Jan; Landsmeer, Sander; Kruszynski, Chris; van Antwerpen, Gert; Dijk, Judith

    2013-05-01

    The capability to track individuals in CCTV cameras is important for, e.g., surveillance applications at large areas such as train stations, airports and shopping centers. However, it is laborious to track and trace people over multiple cameras. In this paper, we present a system for real-time tracking and fast interactive retrieval of persons in video streams from multiple static surveillance cameras. This system is demonstrated in a shopping mall, where the cameras are positioned without overlapping fields-of-view and have different lighting conditions. The results show that the system allows an operator to find the origin or destination of a person more efficiently. Misses are reduced by 37%, which is a significant improvement.

  18. Integrated optical 3D digital imaging based on DSP scheme

    Science.gov (United States)

    Wang, Xiaodong; Peng, Xiang; Gao, Bruce Z.

    2008-03-01

    We present a scheme for integrated optical 3-D digital imaging (IO3DI) based on a digital signal processor (DSP), which can acquire range images independently, without PC support. The scheme is built on a parallel hardware structure, with the aid of a DSP and a field-programmable gate array (FPGA), to realize 3-D imaging. In this integrated 3-D imaging scheme, phase measurement profilometry is adopted. To realize pipelined processing of fringe projection, image acquisition and fringe pattern analysis, we present a multi-threaded application program developed under the DSP/BIOS RTOS (real-time operating system). Since the RTOS provides a preemptive kernel and a powerful configuration tool, we are able to achieve real-time scheduling and synchronization. To accelerate automatic fringe analysis and phase unwrapping, we make use of software optimization techniques. The proposed scheme can reach a performance of 39.5 f/s (frames per second), so it is well suited to real-time fringe-pattern analysis and can implement fast 3-D imaging. Experimental results are also presented to show the validity of the proposed scheme.

  19. 3D DATA ACQUISITION BASED ON OPENCV FOR CLOSE-RANGE PHOTOGRAMMETRY APPLICATIONS

    Directory of Open Access Journals (Sweden)

    L. Jurjević

    2017-05-01

    Full Text Available Developments in cameras, computers and algorithms for reconstructing 3D objects from images have increased the popularity of photogrammetry. Algorithms for 3D model reconstruction are so advanced that almost anyone can make a 3D model of a photographed object. The main goal of this paper is to examine the possibility of obtaining 3D data for close-range photogrammetry applications based on open-source technologies. All steps of obtaining a 3D point cloud are covered in this paper. Special attention is given to camera calibration, for which a two-step calibration process is used. Both the presented algorithm and the accuracy of the point cloud are tested by calculating the spatial difference between reference and produced point clouds. During algorithm testing, the robustness and swiftness of obtaining 3D data were noted, and the usage of this and similar algorithms certainly has a lot of potential in real-time applications. That is the reason why this research can find application in architecture, spatial planning, protection of cultural heritage, forensics, mechanical engineering, traffic management, medicine and other fields.
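
    A two-step calibration commonly means estimating the intrinsics from a calibration target and then refining or undistorting with them. A condensed sketch of the standard OpenCV calls, where the board size and image names are placeholders:

      import cv2
      import numpy as np

      pattern = (9, 6)                                   # inner chessboard corners (assumed)
      objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
      objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

      obj_pts, img_pts = [], []
      for fname in ["calib_01.jpg", "calib_02.jpg"]:     # placeholder image names
          gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
          found, corners = cv2.findChessboardCorners(gray, pattern)
          if found:
              obj_pts.append(objp)
              img_pts.append(corners)

      rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)
      print("reprojection RMS:", rms)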

  20. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    Science.gov (United States)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrated the effectiveness and accuracy of the proposed technique.
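
    The colour crosstalk correction mentioned above can be modelled as inverting a per-camera 3x3 channel-mixing matrix measured during calibration; a schematic version in which the matrix entries are placeholder values, not the paper's measurements:

      import numpy as np

      # measured channel mixing: observed = M @ true (placeholder values)
      M = np.array([[1.00, 0.04, 0.02],
                    [0.05, 1.00, 0.06],
                    [0.02, 0.07, 1.00]])
      M_INV = np.linalg.inv(M)

      def remove_crosstalk(img):
          """img: (H, W, 3) float image; returns crosstalk-corrected channels."""
          return np.clip(img @ M_INV.T, 0.0, None)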

  1. Comparison of Multi-shot Models for Short-term Re-identification of People using RGB-D Sensors

    DEFF Research Database (Denmark)

    Møgelmose, Andreas; Bahnsen, Chris; Moeslund, Thomas B.

    2015-01-01

    This work explores different types of multi-shot descriptors for re-identification in an on-the-fly enrolled environment using RGB-D sensors. We present a full re-identification pipeline complete with detection, segmentation, feature extraction, and re-identification, which expands on previous work...... by using multi-shot descriptors modeling people over a full camera pass instead of single frames with no temporal linking. We compare two different multi-shot models: mean histogram and histogram series, and test them each in 3 different color spaces. Both histogram descriptors are assisted by a depth...

  2. Study of the feasibility of a compact gamma camera for real-time cancer assessment

    CERN Document Server

    Caballero Ontanaya, Luis

    2017-01-01

    Results are presented from simulations of a Compton gamma camera based on a compact configuration of detectors consisting of two detection modules, each having two stages of high-resolution position- and energy-sensitive radiation detectors operated in time coincidence. Monolithic scintillation crystals were simulated instead of pixelated crystals in order to reduce dead areas. To study the system's feasibility for producing real-time images, different setups are considered. Performance in terms of acquisition times has been calculated to determine the real-time capabilities and limitations of such a system.

  3. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    Directory of Open Access Journals (Sweden)

    Richard Chiou

    2010-06-01

    Full Text Available This paper discusses a real-time e-Lab learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-dimensional scheme for viewing the robotic laboratory has been introduced in addition to the remote controlling of the robots. The uniqueness of the project lies in making this process Internet-based, with the robot remotely operated and visualized in 3D. This 3D system approach provides the students with a more realistic feel of the 3D robotic laboratory even though they are working remotely. As a result, the 3D visualization technology has been tested as part of a laboratory in the MET 205 Robotics and Mechatronics class and has received positive feedback from most of the students. This type of research has introduced a new level of realism and visual communication to online laboratory learning in a remote classroom.

  4. Network Support for Social 3-D Immersive Tele-Presence with Highly Realistic Natural and Synthetic Avatar Users

    NARCIS (Netherlands)

    R.N. Mekuria (Rufael); A. Frisiello (Antonella); M Pasin (Marco); P.S. Cesar Garcia (Pablo Santiago)

    2015-01-01

    The next generation in 3D tele-presence is based on modular systems that combine live-captured, object-based 3D video and synthetically authored 3D graphics content. This paper presents the design, implementation and evaluation of a network solution for multi-party real-time communication.

  5. VirtoScan - a mobile, low-cost photogrammetry setup for fast post-mortem 3D full-body documentations in x-ray computed tomography and autopsy suites.

    Science.gov (United States)

    Kottner, Sören; Ebert, Lars C; Ampanozi, Garyfalia; Braun, Marcel; Thali, Michael J; Gascho, Dominic

    2017-03-01

    Injuries such as bite marks or boot prints can leave distinct patterns on the body's surface and can be used for 3D reconstructions. Although various systems for 3D surface imaging have been introduced in the forensic field, most techniques are both cost-intensive and time-consuming. In this article, we present the VirtoScan, a mobile, multi-camera rig based on close-range photogrammetry. The system can be integrated into automated PMCT scanning procedures or used manually together with lifting carts, autopsy tables and examination couches. The VirtoScan is based on a moveable frame that carries 7 digital single-lens reflex cameras. A remote control is attached to each camera and allows the simultaneous triggering of the shutter release of all cameras. Data acquisition in combination with the PMCT scanning procedures took 3:34 min for the 3D surface documentation of one side of the body, compared to 20:20 min of acquisition time when using our in-house standard. A surface model comparison between the high-resolution output from our in-house standard and a high-resolution model from the multi-camera rig showed a mean surface deviation of 0.36 mm for the whole-body scan and 0.13 mm for a second comparison of a detailed section of the scan. The use of the multi-camera rig reduces the acquisition time for whole-body surface documentation in medico-legal examinations and provides a low-cost 3D surface scanning alternative for forensic investigations.

  6. Performance of Hayabusa2 DCAM3-D Camera for Short-Range Imaging of SCI and Ejecta Curtain Generated from the Artificial Impact Crater Formed on Asteroid 162137 Ryugu (1999 JU3)

    Science.gov (United States)

    Ishibashi, K.; Shirai, K.; Ogawa, K.; Wada, K.; Honda, R.; Arakawa, M.; Sakatani, N.; Ikeda, Y.

    2017-07-01

    Deployable Camera 3-D (DCAM3-D) is a small high-resolution camera on Deployable Camera 3 (DCAM3), one of the Hayabusa2 instruments. Hayabusa2 will explore asteroid 162137 Ryugu (1999 JU3) and conduct an impact experiment using a liner-shooting device called the Small Carry-on Impactor (SCI). DCAM3 will be detached from the Hayabusa2 spacecraft to observe the impact experiment. The purposes of the observation are to determine the impact conditions, to estimate the surface structure of asteroid Ryugu, and to understand the physics of impact phenomena on low-gravity bodies. DCAM3-D requires high imaging performance because it has to image and detect multiple targets of different scale and radiance, i.e., the faint SCI before the shot from a distance of 1 km, the bright ejecta generated by the impact, and the asteroid. In this paper we report the evaluation of the performance of the CMOS imaging sensor and the optical system of DCAM3-D, and describe the calibration of DCAM3-D. We confirmed that the imaging performance of DCAM3-D satisfies the values required to achieve the purposes of the observation.

  7. APR1400 DVI break analysis using the MARS 3.1 multi-D component

    International Nuclear Information System (INIS)

    Hwang, Moon-Kyu; Lim, Hong-Sik; Lee, Seung-Wook; Bae, Sung-Won; Chung, Bub-Dong

    2006-01-01

    The current version of MARS 3.1 has a multi-D component intended to simulate asymmetric multidimensional fluid behavior in a reactor core, downcomer or steam generator in a more realistic manner. The feature is implemented in the 1-D module of the code. As opposed to cross-flow junction modeling, the multi-D component allows for lateral momentum transfer as well as shear stress. Thus, a full three-dimensional analysis capability is available, as in the case of RELAP5-3D or CATHARE. In this study the multi-D component is applied to a hypothetical DVI (Direct Vessel Injection) break accident in the APR1400 plant, and the results are analyzed

  8. Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jung Uk [Samsung Electroics, Suwon (Korea, Republic of); Sun, Ju Young; Won, Mooncheol [Chungnam Nat' l Univ., Daejeon (Korea, Republic of)

    2013-12-15

    In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.
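
    OpenCV ships a default HOG+SVM people detector, so the detection stage described above can be approximated in a few lines. The range estimate uses the pinhole relation distance = f * H / h applied to the bounding-box height; the focal length and person height here are assumed values, and this is not the authors' trained model:

      import cv2

      hog = cv2.HOGDescriptor()
      hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

      def detect_and_range(frame, focal_px=700.0, person_h_m=1.7):
          """Detect people and estimate range from bounding-box height."""
          rects, _ = hog.detectMultiScale(frame, winStride=(8, 8))
          return [(x, y, w, h, focal_px * person_h_m / h) for (x, y, w, h) in rects]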

  9. Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera

    International Nuclear Information System (INIS)

    Lee, Jung Uk; Sun, Ju Young; Won, Mooncheol

    2013-01-01

    In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner

  10. First 3D Cadastral Registration of Multi-level Ownerships Rights in the Netherlands

    NARCIS (Netherlands)

    Ploeger, H.D.; Stoter, J.E.; Roes, R; Van der Riet, E.; Biljecki, F.; Ledoux, H.

    2016-01-01

    This paper reports on the first 3D cadastral registration of multi-level ownership rights in the Netherlands, which was accomplished in March 2016. It is the result of a study undertaken from 2013 to 2015 to determine how insight into multi-level ownership can be provided in 3D by the

  11. Advanced real-time multi-display educational system (ARMES): An innovative real-time audiovisual mentoring tool for complex robotic surgery.

    Science.gov (United States)

    Lee, Joong Ho; Tanaka, Eiji; Woo, Yanghee; Ali, Güner; Son, Taeil; Kim, Hyoung-Il; Hyung, Woo Jin

    2017-12-01

    The recent scientific and technologic advances have profoundly affected the training of surgeons worldwide. We describe a novel intraoperative real-time training module, the Advanced Robotic Multi-display Educational System (ARMES). We created ARMES as a real-time training module that provides standardized, step-by-step guidance for robotic distal subtotal gastrectomy with D2 lymphadenectomy. Short video clips of the 20 key steps in the standardized procedure for robotic gastrectomy were created and integrated with TilePro™ software for delivery on da Vinci Surgical Systems (Intuitive Surgical, Sunnyvale, CA). We successfully performed a robotic distal subtotal gastrectomy with D2 lymphadenectomy for a patient with gastric cancer employing this new teaching method, without any transfer errors or system failures. Using this technique, the total operative time was 197 min, blood loss was 50 mL, and there were no intra- or post-operative complications. Our innovative real-time mentoring module, ARMES, enables standardized, systematic guidance during surgical procedures. © 2017 Wiley Periodicals, Inc.

  12. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    OpenAIRE

    Richard Chiou; Yongjin (james) Kwon; Tzu-Liang (bill) Tseng; Robin Kizirian; Yueh-Ting Yang

    2010-01-01

    This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3- Dimensional scheme of viewing the robotic laboratory has been introduced in addition to the remote c...

  13. An experiment of a 3D real-time robust visual odometry for intelligent vehicles

    OpenAIRE

    Rodriguez Florez , Sergio Alberto; Fremont , Vincent; Bonnifait , Philippe

    2009-01-01

    Vision systems are nowadays very promising for many on-board vehicle perception functionalities, like obstacle detection/recognition and ego-localization. In this paper, we present a 3D visual odometry method that uses a stereo-vision system to estimate the 3D ego-motion of a vehicle in outdoor road conditions. In order to run in real time, the studied technique is sparse, meaning that it makes use of feature points that are tracked over several frames. A robust sc...

  14. An automated device for the digitization and 3D modelling of insects, combining extended-depth-of-field and all-side multi-view imaging

    Directory of Open Access Journals (Sweden)

    Bernhard Ströbel

    2018-05-01

    Full Text Available Digitization of natural history collections is a major challenge in archiving biodiversity. In recent years, several approaches have emerged, allowing either automated digitization, extended depth of field (EDOF) or multi-view imaging of insects. Here, we present DISC3D: a new digitization device for pinned insects and other small objects that combines all these aspects. A PC and a microcontroller board control the device. It features a sample holder on a motorized two-axis gimbal, allowing the specimens to be imaged from virtually any view. Ambient, mostly reflection-free illumination is provided by two LED strips circularly installed in two hemispherical white-coated domes (front light and back light). The device is equipped with an industrial camera and a compact macro lens, mounted on a motorized macro rail. EDOF images are calculated from an image stack using a novel calibrated scaling algorithm that meets the requirements of the pinhole camera model (a unique central perspective). The images can be used to generate a calibrated, real-color textured 3D model by structure from motion with visibility-consistent mesh generation. Such models are ideal for obtaining morphometric measurement data in 1D, 2D and 3D, thereby opening new opportunities for trait-based research in taxonomy, phylogeny, eco-physiology and functional ecology.
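
    The EDOF computation from an image stack can be sketched with a per-pixel sharpness criterion: for every pixel, keep the stack slice with the strongest local Laplacian response. This is generic focus stacking, not the calibrated scaling algorithm of DISC3D:

      import cv2
      import numpy as np

      def focus_stack(stack):
          """stack: list of aligned greyscale images at different focus depths.
          Returns an all-in-focus composite by per-pixel sharpness selection."""
          sharp = np.stack([np.abs(cv2.Laplacian(im, cv2.CV_64F)) for im in stack])
          best = np.argmax(sharp, axis=0)          # index of sharpest slice per pixel
          imgs = np.stack(stack)
          return np.take_along_axis(imgs, best[None], axis=0)[0]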

  15. A flexible new method for 3D measurement based on multi-view image sequences

    Science.gov (United States)

    Cui, Haihua; Zhao, Zhimin; Cheng, Xiaosheng; Guo, Changye; Jia, Huayu

    2016-11-01

    Three-dimensional measurement is a basic part of reverse engineering. This paper develops a new flexible and fast optical measurement method based on multi-view geometry theory. First, feature points are detected and matched with an improved SIFT algorithm. The Hellinger kernel is used to estimate the histogram distance instead of the traditional Euclidean distance, which makes the matching robust to weakly textured images. Then a new three-principle filter for the essential matrix calculation is designed, and the essential matrix is computed using an improved a contrario RANSAC method. A single-view point cloud is constructed accurately from two view images; after this, the overlapped features are used to eliminate the accumulated errors caused by added view images, which improves the precision of the camera positions. Finally, the method is verified in a dental restoration CAD/CAM application; experimental results show that the proposed method is fast, accurate and flexible for tooth 3D measurement.
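
    The Hellinger-kernel matching step described above reduces to a few lines of code. Below is a minimal Python/NumPy sketch, assuming L1-normalizable SIFT descriptor histograms; the function name and the epsilon guard are our own, not the paper's.

        import numpy as np

        def hellinger_distance(h1, h2):
            """Hellinger distance between two descriptor histograms, used in
            place of the Euclidean distance for SIFT matching (a sketch
            following the abstract's description)."""
            p = h1 / (h1.sum() + 1e-12)   # L1-normalize to probability-like vectors
            q = h2 / (h2.sum() + 1e-12)
            bc = np.sum(np.sqrt(p * q))   # Bhattacharyya coefficient (Hellinger kernel)
            return np.sqrt(max(0.0, 1.0 - bc))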

  16. 3D shape measurement for moving scenes using an interlaced scanning colour camera

    International Nuclear Information System (INIS)

    Cao, Senpeng; Cao, Yiping; Lu, Mingteng; Zhang, Qican

    2014-01-01

    A Fourier transform deinterlacing algorithm (FTDA) is proposed to eliminate the blurring and dislocation of fringe patterns on a moving object captured by an interlaced scanning colour camera in phase measuring profilometry (PMP). Each frame of greyscale fringes from the three colour channels of every colour fringe image is divided into even and odd field fringes, each of which is processed by FTDA. The six deinterlaced fringe frames obtained from one colour fringe image form two sets of three-step phase-shifted greyscale fringes, from which two 3D shapes corresponding to two different moments are reconstructed by PMP within a frame period. In theory, the deinterlaced fringe is identical to the exact frame fringe at the same moment. Simulations and experiments show the method's feasibility and validity. The method doubles the time resolution, maintains the precision of traditional phase measuring profilometry, and has potential applications in 3D shape measurement of moving and online objects. (paper)
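
    On one reading of the abstract, the deinterlacer is Fourier interpolation of a comb-sampled field: zero-fill the missing lines, then suppress the spectral replica at the vertical Nyquist frequency. The Python sketch below illustrates that idea only; the published FTDA filter design may differ.

        import numpy as np

        def ftda_deinterlace(frame, field='even'):
            """Recover a full frame from one field of an interlaced image by
            Fourier-domain interpolation (an illustrative sketch, not the
            paper's exact algorithm)."""
            rows, _ = frame.shape
            start = 0 if field == 'even' else 1
            comb = np.zeros_like(frame, dtype=float)
            comb[start::2] = frame[start::2]        # keep one field, zero the other
            spec = np.fft.fft(comb, axis=0)
            spec[rows // 4: 3 * rows // 4] = 0      # remove the replica at vertical Nyquist
            return 2.0 * np.real(np.fft.ifft(spec, axis=0))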

  17. Camera pose estimation for augmented reality in a small indoor dynamic scene

    Science.gov (United States)

    Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad

    2017-09-01

    Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows rendering virtual objects in a meaningful way on the one hand, and improving the precision of the camera pose and the quality of the 3-D reconstruction of the environment on the other, by adding constraints on 3-D points and poses in the optimization process. We propose to exploit the rigid motion of 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.

  18. Mixing in 3D Sparse Multi-Scale Grid Generated Turbulence

    Science.gov (United States)

    Usama, Syed; Kopec, Jacek; Tellez, Jackson; Kwiatkowski, Kamil; Redondo, Jose; Malik, Nadeem

    2017-04-01

    Flat 2D fractal grids are known to alter turbulence characteristics downstream of the grid as compared to regular grids with the same blockage ratio and the same mass inflow rates [1]. This has excited interest in the turbulence community for possible exploitation for enhanced mixing and related applications. Recently, a new 3D multi-scale grid design has been proposed [2], in which each generation of turbulence grid elements of a given length scale is held in its own frame; the overall effect is a 3D co-planar arrangement of grid elements. This produces a 'sparse' grid system whereby each generation of grid elements produces a turbulent wake pattern that interacts with the other wake patterns downstream. A critical motivation here is that the effective blockage ratio in the 3D Sparse Grid Turbulence (3DSGT) design is significantly lower than in the flat 2D counterpart - typically the blockage ratio could be reduced from, say, 20% in 2D down to 4% in the 3DSGT. If this idea can be realized in practice, it could greatly enhance the efficiency of turbulent mixing and transfer processes, with many possible applications. Work on the 3DSGT has begun experimentally using Surface Flow Image Velocimetry (SFIV) [3] at the European facility in the Max Planck Institute for Dynamics and Self-Organization located in Göttingen, Germany, and at the Technical University of Catalonia (UPC) in Spain, and numerically using Direct Numerical Simulation (DNS) at King Fahd University of Petroleum & Minerals (KFUPM) in Saudi Arabia and at the University of Warsaw in Poland. DNS is the most useful method to compare the experimental results with, and we are studying different types of codes such as Incompact3d and OpenFOAM. Many variables will eventually be investigated for optimal mixing conditions: for example, the number of scale generations, the spacing between frames, the size ratio of grid elements, inflow conditions, etc. We will report upon the first set of findings

  19. Wide area 2D/3D imaging development, analysis and applications

    CERN Document Server

    Langmann, Benjamin

    2014-01-01

    Imaging technology is an important research area and it is widely utilized in a growing number of disciplines, ranging from gaming, robotics and automation to medicine. In the last decade, 3D imaging became popular, mainly driven by the introduction of novel 3D cameras and measuring devices. These cameras are usually limited to indoor scenes with relatively low distances. Benjamin Langmann introduces medium- and long-range 2D/3D cameras to overcome these limitations. He reports measurement results for these devices and studies their characteristic behavior. In order to facilitate the application o

  20. 3D Defect Localization on Exothermic Faults within Multi-Layered Structures Using Lock-In Thermography: An Experimental and Numerical Approach.

    Science.gov (United States)

    Bae, Ji Yong; Lee, Kye-Sung; Hur, Hwan; Nam, Ki-Hwan; Hong, Suk-Ju; Lee, Ah-Yeong; Chang, Ki Soo; Kim, Geon-Hee; Kim, Ghiseok

    2017-10-13

    Micro-electronic devices are increasingly incorporating miniature multi-layered integrated architectures. However, the localization of faults in three-dimensional structures remains challenging. This study involved the experimental and numerical estimation of the depth of a thermally active heating source buried in a multi-layered silicon wafer architecture, using both phase information from infrared microscopy and finite element simulation. Infrared images were acquired and processed in real time by a lock-in method. It is well known that the lock-in method can substantially improve detection performance by enhancing the spatial and thermal resolution of measurements. The operational principle of the lock-in method is discussed, and it is shown that the phase shift of the thermal emission from a silicon wafer stacked heat source chip (SSHSC) specimen provides a good metric for the depth of the heat source buried in SSHSCs. Depth was also estimated by analyzing the transient thermal responses using coupled electro-thermal simulations. Furthermore, the effects of a volumetric heat source configuration mimicking a 3D through-silicon-via integration package were investigated. Both infrared microscopic imaging with the lock-in method and FE simulation proved potentially useful for 3D isolation of exothermic faults and their depth estimation in multi-layered structures, especially in packaged semiconductors.
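
    The lock-in processing itself is quadrature demodulation of the frame stack at the excitation frequency, which yields per-pixel amplitude and phase images. A minimal Python/NumPy sketch, assuming a (time, height, width) image stack and a known modulation frequency; this is the generic method, not the authors' specific implementation:

        import numpy as np

        def lockin_demodulate(frames, f_mod, fps):
            """Per-pixel lock-in amplitude and phase from an image stack.
            frames: (T, H, W) array; f_mod: modulation frequency (Hz);
            fps: camera frame rate."""
            t = np.arange(frames.shape[0]) / fps
            ref_i = np.sin(2 * np.pi * f_mod * t)              # in-phase reference
            ref_q = np.cos(2 * np.pi * f_mod * t)              # quadrature reference
            I = np.tensordot(ref_i, frames, axes=1) * 2 / t.size
            Q = np.tensordot(ref_q, frames, axes=1) * 2 / t.size
            return np.hypot(I, Q), np.arctan2(Q, I)            # amplitude, phase images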

  1. Real-time three-dimensional soft tissue reconstruction for laparoscopic surgery.

    Science.gov (United States)

    Kowalczuk, Jędrzej; Meyer, Avishai; Carlson, Jay; Psota, Eric T; Buettner, Shelby; Pérez, Lance C; Farritor, Shane M; Oleynikov, Dmitry

    2012-12-01

    Accurate real-time 3D models of the operating field have the potential to enable augmented reality for endoscopic surgery. A new system is proposed to create real-time 3D models of the operating field that uses a custom miniaturized stereoscopic video camera attached to a laparoscope and an image-based reconstruction algorithm implemented on a graphics processing unit (GPU). The proposed system was evaluated in a porcine model that approximates the viewing conditions of in vivo surgery. To assess the quality of the models, a synthetic view of the operating field was produced by overlaying a color image on the reconstructed 3D model, and an image rendered from the 3D model was compared with a 2D image captured from the same view. Experiments conducted with an object of known geometry demonstrate that the system produces 3D models accurate to within 1.5 mm. The ability to produce accurate real-time 3D models of the operating field is a significant advancement toward augmented reality in minimally invasive surgery. An imaging system with this capability will potentially transform surgery by helping novice and expert surgeons alike to delineate variance in internal anatomy accurately.

  2. 3D Rainbow Particle Tracking Velocimetry

    Science.gov (United States)

    Aguirre-Pablo, Andres A.; Xiong, Jinhui; Idoughi, Ramzi; Aljedaani, Abdulrahman B.; Dun, Xiong; Fu, Qiang; Thoroddsen, Sigurdur T.; Heidrich, Wolfgang

    2017-11-01

    A single color camera is used to reconstruct a 3D-3C velocity flow field. The camera records the 2D (X,Y) position and colored scattered-light intensity (Z) of white polyethylene tracer particles in a flow. The main advantage of using a color camera is the capability of combining different intensity levels in each color channel to obtain more depth levels. The illumination system consists of an LCD projector placed perpendicular to the camera. Colored gradients with different intensity levels are projected onto the particles to encode the depth position (Z) of each particle, benefiting from the possibility of varying the color profiles and projected frequencies up to 60 Hz. Chromatic aberrations and distortions are estimated and corrected using a 3D laser-engraved calibration target. The characterization of the camera-projector system is presented, considering the size and depth position of the particles. The use of these components dramatically reduces the cost and complexity of traditional 3D-PTV systems.
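
    Once the color-to-depth mapping has been calibrated against the engraved target, decoding a particle's depth is a lookup in color space. The Python sketch below uses a nearest-neighbour lookup in chromaticity coordinates; the calibration arrays and the chromaticity normalization are our own assumptions, not the authors' exact procedure.

        import numpy as np

        def depth_from_color(rgb, calib_rgb, calib_z):
            """Estimate particle depth from scattered color.
            rgb: (N, 3) measured particle colors; calib_rgb: (M, 3) colors of
            calibration points; calib_z: (M,) their known depths."""
            rgb_n = rgb / (rgb.sum(axis=1, keepdims=True) + 1e-12)        # chromaticity
            cal_n = calib_rgb / (calib_rgb.sum(axis=1, keepdims=True) + 1e-12)
            d2 = ((rgb_n[:, None, :] - cal_n[None, :, :]) ** 2).sum(axis=2)
            return calib_z[np.argmin(d2, axis=1)]                         # nearest calibration depth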

  3. Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm

    Science.gov (United States)

    Lahamy, H.; Lichti, D.

    2011-09-01

    Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images, and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its hand motion tracking capability. The evaluation was assessed in terms of the position accuracy of the tracking trajectory in the x, y and z directions in camera space, and the time difference between image acquisition and image display. Three parameters have been analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study has demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking; however, for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
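
    The tracker core is the simple mean-shift iteration: repeatedly move an estimate to the kernel-weighted centroid of nearby 3D points from the range camera. A minimal Python/NumPy sketch, where the bandwidth, kernel choice and stopping rule are our own assumptions:

        import numpy as np

        def mean_shift(points, start, bandwidth=0.1, n_iter=20, tol=1e-4):
            """Mode seeking over 3D range-camera points.
            points: (N, 3) array; start: (3,) initial hand position."""
            x = np.asarray(start, dtype=float)
            for _ in range(n_iter):
                d2 = ((points - x) ** 2).sum(axis=1)
                w = np.exp(-d2 / (2 * bandwidth ** 2))        # Gaussian kernel weights
                x_new = (w[:, None] * points).sum(axis=0) / w.sum()
                if np.linalg.norm(x_new - x) < tol:
                    break
                x = x_new
            return x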

  4. Development of a tomographic system adapted to 3D measurement of contaminated wounds based on the Cacao concept (Computer aided collimation Gamma Camera)

    International Nuclear Information System (INIS)

    Douiri, A.

    2002-03-01

    The computer-aided collimation gamma camera (CACAO, from its French acronym) is a gamma camera using a collimator with large holes, a supplementary linear scanning motion during acquisition, and a dedicated reconstruction program taking full account of the source depth. The CACAO system was introduced to improve both the sensitivity and the resolution in nuclear medicine. This thesis focuses on the design of a fast and robust reconstruction algorithm for the CACAO project. We start with an overview of tomographic imaging techniques in nuclear medicine. After modelling the physical CACAO system, we present the complete reconstruction program, which involves three steps: 1) shift and sum; 2) deconvolution and filtering; 3) rotation and sum. Deconvolution is the critical step that decreases the signal-to-noise ratio of the reconstructed images. We propose a regularized multi-channel algorithm to solve the deconvolution problem. We also present a fast algorithm based on spline functions for the shift and rotation steps that preserves the high quality of the reconstructed images. Comparisons of simulated reconstructed images in 2D and 3D for the conventional system (CPHC) and CACAO demonstrate the ability of the CACAO system to increase the quality of SPECT images. Finally, the study concludes with an experimental approach using a pixellated detector conceived for 3D measurement of contaminated wounds. This experiment demonstrates the possible advantages of coupling the CACAO project with pixellated detectors. A variety of applications could fully benefit from the CACAO system, such as low-activity imaging, the use of high-energy gamma isotopes and the visualization of deep organs; the combination of the CACAO system with a pixel detector may open up further possibilities for the future of nuclear medicine. (author)
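
    The deconvolution step is where regularization matters, and the thesis proposes a regularized multi-channel algorithm for it. As a generic illustration of the regularization idea only (not the thesis' algorithm), a frequency-domain Tikhonov-damped deconvolution looks like this in Python:

        import numpy as np

        def tikhonov_deconvolve(image, psf, alpha=0.01):
            """Deconvolve 'image' by 'psf' with Tikhonov damping; 'alpha'
            trades resolution against noise amplification."""
            H = np.fft.fft2(psf, s=image.shape)
            G = np.fft.fft2(image)
            F = np.conj(H) * G / (np.abs(H) ** 2 + alpha)   # damped inverse filter
            return np.real(np.fft.ifft2(F))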

  5. Optimum color filters for CCD digital cameras

    Science.gov (United States)

    Engelhardt, Kai; Kunz, Rino E.; Seitz, Peter; Brunner, Harald; Knop, Karl

    1993-12-01

    As part of the ESPRIT II project No. 2103 (MASCOT) a high-performance prototype color CCD still video camera was developed. Intended for professional usage such as in the graphic arts, the camera provides a maximum resolution of 3k X 3k full color pixels. A high colorimetric performance was achieved through specially designed dielectric filters and optimized matrixing. The color transformation was obtained by computer simulation of the camera system and non-linear optimization which minimized the perceivable color errors, as measured in the 1976 CIELUV uniform color space, for a set of about 200 carefully selected test colors. The color filters were designed to allow perfect colorimetric reproduction in principle, with imperceptible color noise and with special attention to fabrication tolerances. The camera system includes a special real-time digital color processor which carries out the color transformation. The transformation can be selected from a set of sixteen matrices optimized for different illuminants and output devices. Because the actual filter design was based on slightly incorrect data, the prototype camera showed a mean colorimetric error of 2.7 j.n.d. (CIELUV) in experiments. Using correct input data in a redesign of the filters, a mean colorimetric error of only 1 j.n.d. (CIELUV) seems feasible, implying that such an optimized color camera can achieve colorimetric performance so high that the reproduced colors in an image cannot be distinguished from the original colors in a scene, even in direct comparison.
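
    The matrixing step maps sensor RGB to a device-independent space with a 3x3 matrix. The sketch below fits such a matrix by plain linear least squares over measured test colors; the project's actual optimization was non-linear and minimized perceptual (CIELUV) error, so this is only a starting-point illustration.

        import numpy as np

        def fit_color_matrix(sensor_rgb, target_xyz):
            """Least-squares 3x3 color-correction matrix.
            sensor_rgb, target_xyz: (N, 3) measurements of the same patches."""
            M, _, _, _ = np.linalg.lstsq(sensor_rgb, target_xyz, rcond=None)
            return M                     # apply as corrected = raw_rgb @ M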

  6. Esophagogastric Junction pressure morphology: comparison between a station pull-through and real-time 3D-HRM representation.

    Science.gov (United States)

    Nicodème, F; Lin, Z; Pandolfino, J E; Kahrilas, P J

    2013-09-01

    Esophagogastric junction (EGJ) competence is the fundamental defense against reflux making it of great clinical significance. However, characterizing EGJ competence with conventional manometric methodologies has been confounded by its anatomic and physiological complexity. Recent technological advances in miniaturization and electronics have led to the development of a novel device that may overcome these challenges. Nine volunteer subjects were studied with a novel 3D-HRM device providing 7.5 mm axial and 45° radial pressure resolution within the EGJ. Real-time measurements were made at rest and compared to simulations of a conventional pull-through made with the same device. Moreover, 3D-HRM recordings were analyzed to differentiate contributing pressure signals within the EGJ attributable to lower esophageal sphincter (LES), diaphragm, and vasculature. 3D-HRM recordings suggested that sphincter length assessed by a pull-through method greatly exaggerated the estimate of LES length by failing to discriminate among circumferential contractile pressure and asymmetric extrinsic pressure signals attributable to diaphragmatic and vascular structures. Real-time 3D EGJ recordings found that the dominant constituents of EGJ pressure at rest were attributable to the diaphragm. 3D-HRM permits real-time recording of EGJ pressure morphology facilitating analysis of the EGJ constituents responsible for its function as a reflux barrier making it a promising tool in the study of GERD pathophysiology. The enhanced axial and radial recording resolution of the device should facilitate further studies to explore perturbations in the physiological constituents of EGJ pressure in health and disease. © 2013 John Wiley & Sons Ltd.

  7. Controllable 3D architectures of aligned carbon nanotube arrays by multi-step processes

    Science.gov (United States)

    Huang, Shaoming

    2003-06-01

    An effective way to fabricate large-area three-dimensional (3D) aligned CNT patterns based on pyrolysis of iron(II) phthalocyanine (FePc) by two-step processes is reported. The controllable generation of different lengths and the selective growth of aligned CNT arrays on metal-patterned (e.g., Ag and Au) substrates are the bases for generating such 3D aligned CNT architectures. By controlling the experimental conditions, 3D aligned CNT arrays with different lengths/densities and morphologies/structures, as well as multi-layered architectures, can be fabricated at large scale by multi-step pyrolysis of FePc. These 3D architectures could have interesting properties and be applied to the development of novel nanotube-based devices.

  8. Critical-path-first based allocation of real-time streaming applications on 2D mesh-type multi-cores.

    NARCIS (Netherlands)

    Ali, Hazem; Pinho, Luis Miguel; Akesson, K.B.

    2013-01-01

    Designing cost-efficient multi-core real-time systems requires efficient techniques to allocate applications to cores while satisfying their timing constraints. However, existing approaches typically allocate using a First-Fit algorithm, which does not consider the execution time and potential

  9. System Configuration and Operation Plan of Hayabusa2 DCAM3-D Camera System for Scientific Observation During SCI Impact Experiment

    Science.gov (United States)

    Ogawa, Kazunori; Shirai, Kei; Sawada, Hirotaka; Arakawa, Masahiko; Honda, Rie; Wada, Koji; Ishibashi, Ko; Iijima, Yu-ichi; Sakatani, Naoya; Nakazawa, Satoru; Hayakawa, Hajime

    2017-07-01

    An artificial impact experiment is scheduled for 2018-2019, in which an impactor will collide with the asteroid 162173 Ryugu (1999 JU3) during the asteroid rendezvous phase of the Hayabusa2 spacecraft. The small carry-on impactor (SCI) will shoot a 2-kg projectile at 2 km/s to create a crater 1-10 m in diameter, with an expected subsequent ejecta curtain on a 100-m scale on an ideal sandy surface. A miniaturized deployable camera (DCAM3) unit will separate from the spacecraft at about 1 km from the impact and simultaneously conduct optical observations of the experiment. We designed and developed a camera system (DCAM3-D) in the DCAM3, specialized for scientific observations of the impact phenomenon, in order to clarify the subsurface structure, construct theories of impact applicable in a microgravity environment, and identify the impact point on the asteroid. The DCAM3-D system consists of a miniaturized camera with a wide angle and high focusing performance, high-speed radio communication devices, and control units with large data storage on both the DCAM3 unit and the spacecraft. These components were successfully developed under severe constraints of size, mass and power, and the whole DCAM3-D system has passed all tests verifying function, performance, and environmental tolerance. The results indicated sufficient potential to conduct the scientific observations during the SCI impact experiment. An operation plan was carefully considered along with the configuration and time schedule of the impact experiment, and pre-programmed into the control unit before launch. In this paper, we describe details of the system design concept, specifications, and the operating plan of the DCAM3-D system, focusing on the feasibility of the scientific observations.

  10. OBLIQUE MULTI-CAMERA SYSTEMS – ORIENTATION AND DENSE MATCHING ISSUES

    Directory of Open Access Journals (Sweden)

    E. Rupnik

    2014-03-01

    Full Text Available The use of oblique imagery has become a standard for many civil and mapping applications, thanks to the development of airborne digital multi-camera systems, as proposed by many companies (BlomOblique, IGI, Leica, Midas, Pictometry, Vexcel/Microsoft, VisionMap, etc.). The indisputable virtue of oblique photography lies in its simplicity of interpretation and understanding for inexperienced users, allowing the use of oblique images in very different applications, such as building detection and reconstruction, building structural damage classification, road and land updating, and administration services. The paper reports an overview of the current commercial oblique systems and presents a workflow for the automated orientation and dense matching of large image blocks. Perspectives, potentialities, pitfalls and suggestions for achieving satisfactory results are given. Tests performed on two datasets acquired with two multi-camera systems over urban areas are also reported.

  11. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    Science.gov (United States)

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose: Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method: We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results: Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion: Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484

  12. On the Feasibility of Real-Time 3D Hand Tracking using Edge GPGPU Acceleration

    DEFF Research Database (Denmark)

    Qammaz, A.; Kosta, S.; Kyriazis, N.

    2018-01-01

    This paper presents the case study of a non-intrusive porting of a monolithic C++ library for real-time 3D hand tracking, to the domain of edge-based computation. Towards a proof of concept, the case study considers a pair of workstations, a computationally powerful and a computationally weak one...

  13. Next generation multi-material 3D food printer concept

    NARCIS (Netherlands)

    Klomp, D.J.; Anderson, P.D.

    2017-01-01

    3D food printing is a new rapidly developing technology capable of creating food structures that are impossible to create with normal processing techniques. Challenges in this field are creating texture and multi-material food products. To address these challenges a next generation food printer will

  14. Measuring Performance of Soft Real-Time Tasks on Multi-core Systems

    OpenAIRE

    Rafiq, Salman

    2011-01-01

    Multi-core platforms are well established, and they are slowly moving into the area of embedded and real-time systems. Nowadays to take advantage of multi-core systems in terms of throughput, soft real-time applications are run together with general purpose applications under an operating system such as Linux. But due to shared hardware resources in multi-core architectures, it is likely that these applications will interfere and compete with each other. This can cause slower response times f...

  15. First demonstration of real-time gamma imaging by using a handheld Compton camera for particle therapy

    Energy Technology Data Exchange (ETDEWEB)

    Taya, T., E-mail: taka48138@ruri.waseda.jp [Research Institute for Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo 169-8555 (Japan); Kataoka, J.; Kishimoto, A.; Iwamoto, Y.; Koide, A. [Research Institute for Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku, Tokyo 169-8555 (Japan); Nishio, T. [Graduate School of Biomedical and Health Science, Hiroshima University, 1-2-3, Kasumi, Minami-ku, Hiroshima-shi, Hiroshima (Japan); Kabuki, S. [School of Medicine, Tokai University, 143 Shimokasuya, Isehara-shi, Kanagawa (Japan); Inaniwa, T. [National Institute of Radiological Sciences, 4-9-1 Anagawa, Inage-ku, Chiba-shi, Chiba (Japan)

    2016-09-21

    The use of real-time gamma imaging for cancer treatment in particle therapy is expected to improve the accuracy of treatment beam delivery. In this study, we demonstrated the imaging of gamma rays generated by nuclear interactions during proton irradiation, using a handheld Compton camera (14 cm×15 cm×16 cm, 2.5 kg) based on scintillation detectors. The angular resolution of this Compton camera is ∼8° at full width at half maximum (FWHM) for a {sup 137}Cs source. We measured the energy spectra of the gamma rays using a LaBr{sub 3}(Ce) scintillator and photomultiplier tube and, using the handheld Compton camera, performed image reconstruction with a 70 MeV proton beam irradiating water, Ca(OH){sub 2}, and polymethyl methacrylate (PMMA) phantoms. In the energy spectra of all three phantoms, we found an obvious peak at 511 keV derived from annihilation gamma rays, and in the energy spectrum of the PMMA phantom we found another peak at 718 keV, which contains some of the prompt gamma rays produced from {sup 10}B. We therefore evaluated the peak positions of the projection from the reconstructed images of the PMMA phantom. The differences between the peak positions and the Bragg peak position calculated using simulation are 7 mm±2 mm and 3 mm±8 mm, respectively. Although we could quickly acquire online gamma imaging in both energy ranges during proton irradiation, we cannot conclude definitively from these results that prompt gamma rays sufficiently trace the Bragg peak, because of the uncertainty introduced by the spatial resolution of the Compton camera. We will develop a high-resolution Compton camera in the near future for further study. - Highlights: • Gamma imaging during proton irradiation by a handheld Compton camera is demonstrated. • We were able to acquire the online gamma-ray images quickly. • We are developing a high-resolution Compton camera for range verification.

  16. 3D electromagnetic theory of ICRF multi PORT multi loop antenna

    International Nuclear Information System (INIS)

    Vdovin, V.L.; Kamenskij, I.V.

    1997-01-01

    In this report, the theory of a three-dimensional antenna for the Ion Cyclotron Resonance Frequency (ICRF) range is developed for a plasma with circular magnetic surfaces. The multi-loop antenna is located in several ITER ports. The circular plasma and antenna geometry provides new important tools to account for: 1) a correct calculation of the antenna loading impedance matrix, urgently needed for matching the RF generator to the antenna; 2) a correct calculation of the antenna-excited toroidal and poloidal spectra, because the diffraction, refraction and reflection effects for the Fast Waves (FW) are, for the first time, included self-consistently in a 3D ICRF antenna-plasma treatment; 3) a correct calculation of RF power deposition profiles, because the self-consistently found 3D antenna-plasma FW excited spectra in a non-slab plasma model are important in a weakly dissipative plasma for Fast Waves (even for ITER parameters). (J.P.N.)

  17. Computational imaging with multi-camera time-of-flight systems

    KAUST Repository

    Shrestha, Shikhar

    2016-07-11

    Depth cameras are a ubiquitous technology used in a wide range of applications, including robotic and machine vision, human computer interaction, autonomous vehicles as well as augmented and virtual reality. In this paper, we explore the design and applications of phased multi-camera time-of-flight (ToF) systems. We develop a reproducible hardware system that allows for the exposure times and waveforms of up to three cameras to be synchronized. Using this system, we analyze waveform interference between multiple light sources in ToF applications and propose simple solutions to this problem. Building on the concept of orthogonal frequency design, we demonstrate state-of-the-art results for instantaneous radial velocity capture via Doppler time-of-flight imaging and we explore new directions for optically probing global illumination, for example by de-scattering dynamic scenes and by non-line-of-sight motion detection via frequency gating. © 2016 ACM.

  18. A 3D technique for simulation of irregular electron treatment fields using a digital camera

    International Nuclear Information System (INIS)

    Bassalow, Roustem; Sidhu, Narinder P.

    2003-01-01

    Cerrobend inserts, which define electron field apertures, are manufactured at our institution using perspex templates. Contours are reproduced manually on these templates at the simulator from the field outlines drawn on the skin or mask of a patient. A previously reported technique for simulation of electron treatment fields uses a digital camera to eliminate the need for such templates. However, avoidance of the image distortions introduced by non-flat surfaces on which the electron field outlines were drawn could only be achieved by limiting the application of this technique to surfaces which were flat or near flat. We present a technique that employs a digital camera and allows simulation of electron treatment fields contoured on an anatomical surface of an arbitrary three-dimensional (3D) shape, such as that of the neck, extremities, face, or breast. The procedure is fast, accurate, and easy to perform

  19. Real-time visual communication to aid disaster recovery in a multi-segment hybrid wireless networking system

    Science.gov (United States)

    Al Hadhrami, Tawfik; Wang, Qi; Grecos, Christos

    2012-06-01

    When natural disasters or other large-scale incidents occur, obtaining accurate and timely information on the developing situation is vital to effective disaster recovery operations. High-quality video streams and high-resolution images, if available in real time, would provide an invaluable source of current situation reports to the incident management team. Meanwhile, a disaster often causes significant damage to the communications infrastructure. Therefore, another essential requirement for disaster management is the ability to rapidly deploy a flexible incident area communication network. Such a network would facilitate the transmission of real-time video streams and still images from the disrupted area to remote command and control locations. In this paper, a comprehensive end-to-end video/image transmission system between an incident area and a remote control centre is proposed and implemented, and its performance is experimentally investigated. In this study a hybrid multi-segment communication network is designed that seamlessly integrates terrestrial wireless mesh networks (WMNs), distributed wireless visual sensor networks, an airborne platform with video camera balloons, and a Digital Video Broadcasting- Satellite (DVB-S) system. By carefully integrating all of these rapidly deployable, interworking and collaborative networking technologies, we can fully exploit the joint benefits provided by WMNs, WSNs, balloon camera networks and DVB-S for real-time video streaming and image delivery in emergency situations among the disaster hit area, the remote control centre and the rescue teams in the field. The whole proposed system is implemented in a proven simulator. Through extensive simulations, the real-time visual communication performance of this integrated system has been numerically evaluated, towards a more in-depth understanding in supporting high-quality visual communications in such a demanding context.

  20. 3D-SURFER 2.0: web platform for real-time search and characterization of protein surfaces.

    Science.gov (United States)

    Xiong, Yi; Esquivel-Rodriguez, Juan; Sael, Lee; Kihara, Daisuke

    2014-01-01

    The increasing number of uncharacterized protein structures necessitates the development of computational approaches for function annotation using protein tertiary structures. Protein structure database search is the basis of any structure-based functional elucidation of proteins. 3D-SURFER is a web platform for real-time protein surface comparison of a given protein structure against the entire PDB using 3D Zernike descriptors. It can smoothly navigate the protein structure space in real-time from one query structure to another. A major new feature of Release 2.0 is the ability to compare the protein surface of a single chain, a single domain, or a single complex against databases of protein chains, domains, complexes, or a combination of all three in the latest PDB. Additionally, two types of protein surfaces can now be compared: all-atom surfaces and backbone-atom surfaces. The server can also accept a batch job for a large number of database searches. Pockets in protein surfaces can be identified by VisGrid and LIGSITEcsc. The server is available at http://kiharalab.org/3d-surfer/.
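
    The real-time search rests on representing each surface as a rotation-invariant 3D Zernike descriptor vector, so database retrieval reduces to nearest-neighbour ranking. A Python/NumPy sketch of that comparison step follows; descriptor extraction itself, and any normalization 3D-SURFER applies, are out of scope and assumed done upstream.

        import numpy as np

        def rank_by_surface_shape(query_desc, db_descs):
            """Rank database entries by Euclidean distance between 3D Zernike
            descriptor vectors. query_desc: (D,); db_descs: (N, D)."""
            dists = np.linalg.norm(db_descs - query_desc, axis=1)
            order = np.argsort(dists)
            return order, dists[order]   # indices and distances, best first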

  1. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    Science.gov (United States)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

    This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real time using 3D digital image correlation (DIC), with two examples from medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the computations of the integer-pixel search. Experiments were carried out, and the results indicated that the new method improved the computational efficiency by about 4-10 times in comparison with the traditional DIC method. The system was aimed at orthognathic surgery navigation, in order to track the maxilla segment after a LeFort I osteotomy. Experiments showed that the noise for a static point was at the level of 10^-3 mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin surface shape evaluation of a hand during finger stretching exercises, which indicated a great potential for tracking muscle and skin movements.
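
    The integer-pixel search that the authors accelerate is, in its brute-force form, a normalized cross-correlation scan of a template over the image. The Python sketch below shows that baseline; the paper's specific speed-up strategy is not reproduced here.

        import numpy as np

        def ncc_search(template, image):
            """Integer-pixel template localization by normalized
            cross-correlation; returns the best (row, col) and its score."""
            th, tw = template.shape
            t = (template - template.mean()) / (template.std() + 1e-12)
            best, best_rc = -np.inf, (0, 0)
            for r in range(image.shape[0] - th + 1):
                for c in range(image.shape[1] - tw + 1):
                    w = image[r:r + th, c:c + tw]
                    score = ((w - w.mean()) / (w.std() + 1e-12) * t).mean()
                    if score > best:
                        best, best_rc = score, (r, c)
            return best_rc, best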

  2. 3D exploitation of large urban photo archives

    Science.gov (United States)

    Cho, Peter; Snavely, Noah; Anderson, Ross

    2010-04-01

    Recent work in computer vision has demonstrated the potential to automatically recover camera and scene geometry from large collections of uncooperatively-collected photos. At the same time, aerial ladar and Geographic Information System (GIS) data are becoming more readily accessible. In this paper, we present a system for fusing these data sources in order to transfer 3D and GIS information into outdoor urban imagery. Applying this system to 1000+ pictures shot of the lower Manhattan skyline and the Statue of Liberty, we present two proof-of-concept examples of geometry-based photo enhancement which are difficult to perform via conventional image processing: feature annotation and image-based querying. In these examples, high-level knowledge projects from 3D world-space into georegistered 2D image planes and/or propagates between different photos. Such automatic capabilities lay the groundwork for future real-time labeling of imagery shot in complex city environments by mobile smart phones.

  3. D-SPECT, a semiconductor camera: Technical aspects and clinical applications

    International Nuclear Information System (INIS)

    Merlin, C.; Bertrand, S.; Kelly, A.; Veyre, A.; Mestas, D.; Cachin, F.; Motreff, P.; Levesque, S.; Cachin, F.; Askienazy, S.

    2010-01-01

    Clinical practice in nuclear medicine has changed greatly in the last decade, particularly with the arrival of PET/CT and SPECT/CT. New semiconductor cameras could represent the next evolution in nuclear medicine practice. Thanks to improvements in resolution and sensitivity, this technology enables fast acquisitions and high-contrast, high-resolution images obtained with low injected activity. The dedicated cardiology D-SPECT camera (Spectrum Dynamics, Israel) is based on semiconductor technology and provides an original system for collimation and image reconstruction. We describe here our clinical experience in using the D-SPECT, with a preliminary study comparing the D-SPECT and a conventional gamma camera. (authors)

  4. Parallel Computational Intelligence-Based Multi-Camera Surveillance System

    OpenAIRE

    Orts-Escolano, Sergio; Garcia-Rodriguez, Jose; Morell, Vicente; Cazorla, Miguel; Azorin-Lopez, Jorge; García-Chamizo, Juan Manuel

    2014-01-01

    In this work, we present a multi-camera surveillance system based on the use of self-organizing neural networks to represent events on video. The system processes several tasks in parallel using GPUs (graphic processor units). It addresses multiple vision tasks at various levels, such as segmentation, representation or characterization, analysis and monitoring of the movement. These features allow the construction of a robust representation of the environment and interpret the behavior of mob...

  5. Three-dimensional (3D) real-time conformal brachytherapy - a novel solution for prostate cancer treatment Part I. Rationale and method

    International Nuclear Information System (INIS)

    Fijalkowski, M.; Bialas, B.; Maciejewski, B.; Bystrzycka, J.; Slosarek, K.

    2005-01-01

    Recently, a system for conformal real-time high-dose-rate brachytherapy has been developed, dedicated in general to the treatment of prostate cancer. The aim of this paper is to present the 3D-conformal real-time brachytherapy technique introduced into clinical practice at the Institute of Oncology in Gliwice. The equipment and technique of 3D-conformal real-time brachytherapy (3D-CBRT) are presented in detail and compared with conventional high-dose-rate brachytherapy. Step-by-step procedures of treatment planning are described, including our own modifications. The 3D-CBRT offers the following advantages: (1) on-line continuous visualization of the prostate and acquisition of a series of US images during the entire procedure of planning and treatment; (2) high precision in defining and contouring the target volume and the healthy organs at risk (urethra, rectum, bladder) based on 3D transrectal continuous ultrasound images; (3) interactive on-line dose optimization with real-time corrections of the dose-volume histograms (DVHs) until the optimal dose distribution is achieved; (4) the possibility to overcome internal prostate motion and set-up inaccuracies by stable positioning of the prostate with needles fixed to the template; (5) significant shortening of the overall treatment time; (6) cost reduction - the treatment can be provided as an outpatient procedure. The 3D real-time CBRT can be advocated as an ideal conformal boost technique, integrated or interdigitated with pelvic conformal external beam radiotherapy, or as a monotherapy for prostate cancer. (author)

  6. 4-mm-diameter three-dimensional imaging endoscope with steerable camera for minimally invasive surgery (3-D-MARVEL).

    Science.gov (United States)

    Bae, Sam Y; Korniski, Ronald J; Shearn, Michael; Manohara, Harish M; Shahinian, Hrayr

    2017-01-01

    High-resolution three-dimensional (3-D) imaging (stereo imaging) by endoscopes in minimally invasive surgery, especially in space-constrained applications such as brain surgery, is one of the most desired capabilities. Such capability exists at overall diameters larger than 4 mm. We report the development of a stereo imaging endoscope of 4-mm maximum diameter, called the Multiangle, Rear-Viewing Endoscopic Tool (MARVEL), that uses a single-lens system with complementary multibandpass filter (CMBF) technology to achieve 3-D imaging. In addition, the system is endowed with the capability to pan from side to side over an angle of [Formula: see text], which is another unique aspect of MARVEL for such a class of endoscopes. The design and construction of a single-lens CMBF aperture camera with integrated illumination to generate 3-D images, and the actuation mechanism built into it, are summarized.

  7. 3D Convolutional Neural Networks for Crop Classification with Multi-Temporal Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Shunping Ji

    2018-01-01

    Full Text Available This study describes a novel three-dimensional (3D) convolutional neural network (CNN) based method that automatically classifies crops from spatio-temporal remote sensing images. First, a 3D kernel is designed according to the structure of multi-spectral multi-temporal remote sensing data. Second, the 3D CNN framework with fine-tuned parameters is designed for training 3D crop samples and learning spatio-temporal discriminative representations, with the full crop growth cycles being preserved. In addition, we introduce an active learning strategy to the CNN model to improve labelling accuracy up to a required threshold with the greatest efficiency. Finally, experiments are carried out to test the advantage of the 3D CNN, in comparison to the two-dimensional (2D) CNN and other conventional methods. Our experiments show that the 3D CNN is especially suitable for characterizing the dynamics of crop growth and outperforms the other mainstream methods.
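
    The key architectural point is that the convolution kernels extend over the temporal axis as well as the two spatial axes, so multi-temporal band stacks are filtered jointly. A minimal PyTorch sketch of such a network follows; the channel counts, kernel sizes and class count here are illustrative assumptions, not the paper's configuration.

        import torch
        import torch.nn as nn

        class Crop3DCNN(nn.Module):
            def __init__(self, bands=4, n_classes=6):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv3d(bands, 16, kernel_size=3, padding=1),  # kernel spans (time, H, W)
                    nn.ReLU(),
                    nn.MaxPool3d(2),
                    nn.Conv3d(16, 32, kernel_size=3, padding=1),
                    nn.ReLU(),
                    nn.AdaptiveAvgPool3d(1),
                )
                self.classifier = nn.Linear(32, n_classes)

            def forward(self, x):          # x: (batch, bands, time, height, width)
                return self.classifier(self.features(x).flatten(1))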

  8. The Value of 3D Printing Models of Left Atrial Appendage Using Real-Time 3D Transesophageal Echocardiographic Data in Left Atrial Appendage Occlusion: Applications toward an Era of Truly Personalized Medicine.

    Science.gov (United States)

    Liu, Peng; Liu, Rijing; Zhang, Yan; Liu, Yingfeng; Tang, Xiaoming; Cheng, Yanzhen

    The objective of this study was to assess the clinical feasibility of generating 3D printed models of the left atrial appendage (LAA) using real-time 3D transesophageal echocardiogram (TEE) data for preoperative reference in LAA occlusion. Percutaneous LAA occlusion can effectively prevent stroke in patients with atrial fibrillation. However, the anatomical structure of the LAA is so complicated that adequate information about its structure is essential for successful LAA occlusion. Emerging 3D printing technology has demonstrated the potential to capture structure more accurately than conventional imaging modalities by creating tangible patient-specific models. Typically, 3D printing data sets are acquired from CT and MRI, which may involve intravenous contrast, sedation, and ionizing radiation. It has been reported that 3D models of the LAA were successfully created from data acquired by CT. However, 3D printing of the LAA using real-time 3D TEE data has not yet been explored. Acquisition of 3D transesophageal echocardiographic data from 8 patients with atrial fibrillation was performed using the Philips EPIQ7 ultrasound system. Raw echocardiographic image data were opened in Philips QLAB, converted to 'Cartesian DICOM' format and imported into Mimics® software to create 3D models of the LAA, which were printed using a rubber-like material. The printed 3D models were then used for preoperative reference and procedural simulation in LAA occlusion. We successfully printed the LAAs of 8 patients. Each LAA costs approximately CNY 800-1,000 and the total process takes 16-17 h. Seven of the 8 Watchman devices predicted by preprocedural 2D TEE images were of the same sizes as those placed in the real operation. Interestingly, the 3D printed models were highly reflective of the shape and size of the LAAs, and all device sizes predicted by the 3D printed models were fully consistent with those placed in the real operation. Also, the 3D printed model could predict operating difficulty and the

  9. Integration of multispectral face recognition and multi-PTZ camera automated surveillance for security applications

    Science.gov (United States)

    Chen, Chung-Hao; Yao, Yi; Chang, Hong; Koschan, Andreas; Abidi, Mongi

    2013-06-01

    Due to increasing security concerns, a complete security system should consist of two major components: a computer-based face-recognition system and a real-time automated video surveillance system. A computer-based face-recognition system can be used in gate access control for identity authentication. In recent studies, multispectral imaging and fusion of multispectral narrow-band images in the visible spectrum have been employed and proven to enhance recognition performance over conventional broad-band images, especially when the illumination changes. Thus, we present an automated method that specifies the optimal spectral ranges under a given illumination. Experimental results verify the consistent performance of our algorithm via the observation that an identical set of spectral band images is selected under all tested conditions. Our discovery can be practically used in a new customized sensor design, associated with given illuminations, for improved face-recognition performance over conventional broad-band images. In addition, once a person is authorized to enter a restricted area, we still need to continuously monitor his/her activities for the sake of security. Because pan-tilt-zoom (PTZ) cameras are capable of covering a panoramic area and maintaining high-resolution imagery for real-time behavior understanding, research on automated surveillance systems with multiple PTZ cameras has become increasingly important. Most existing algorithms require prior knowledge of the intrinsic parameters of the PTZ camera to infer the relative positioning and orientation among multiple PTZ cameras. To overcome this limitation, we propose a novel mapping algorithm that derives the relative positioning and orientation between two PTZ cameras based on a unified polynomial model. This reduces the dependence on knowledge of the intrinsic parameters of the PTZ camera and relative positions. Experimental results demonstrate that our proposed algorithm presents substantially

  10. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    International Nuclear Information System (INIS)

    Ravindran, V. R.; Sreelakshmi, C.; Vibin

    2008-01-01

    The application of Digital Radiography to the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A real-time DR system based on an amorphous silicon Flat Panel Detector was developed for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC a few years ago. The technique has been successfully established for the nondestructive evaluation of solid rocket motors. The DR images recorded for a few solid rocket specimens are presented in the paper. The real-time DR system is capable of generating sufficient digital X-ray image data, with object rotation, for CT image reconstruction. In this paper, the indigenous development of CT imaging for solid rocket motors based on the real-time DR system is presented. Studies were also carried out to generate a 3D-CT image from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  11. A Bayesian approach to real-time 3D tumor localization via monoscopic x-ray imaging during treatment delivery

    International Nuclear Information System (INIS)

    Li, Ruijiang; Fahimian, Benjamin P.; Xing, Lei

    2011-01-01

    Purpose: Monoscopic x-ray imaging with on-board kV devices is an attractive approach for real-time image guidance in modern radiation therapy such as VMAT or IMRT, but it falls short in providing reliable information along the direction of the imaging x-ray. By effectively taking into consideration projection data at prior times and/or angles through a Bayesian formalism, the authors develop an algorithm for real-time and full 3D tumor localization with a single x-ray imager during treatment delivery. Methods: First, a prior probability density function is constructed using the 2D tumor locations on the projection images acquired during patient setup. Whenever an x-ray image is acquired during treatment delivery, the corresponding 2D tumor location on the imager is used to update the likelihood function. The unresolved third dimension is obtained by maximizing the posterior probability distribution. The algorithm can also be used in a retrospective fashion, when all the projection images from the treatment delivery are used for 3D localization purposes. The algorithm does not involve complex optimization of any model parameter and can therefore be used in a 'plug-and-play' fashion. The authors validated the algorithm using (1) simulated 3D linear and elliptic motion and (2) 3D tumor motion trajectories of a lung and a pancreas patient reproduced by a physical phantom. Continuous kV images were acquired over a full gantry rotation with the Varian TrueBeam on-board imaging system. Three scenarios were considered: fluoroscopic setup, cone beam CT setup, and retrospective analysis. Results: For the simulation study, the RMS 3D localization error is 1.2 and 2.4 mm for the linear and elliptic motions, respectively. For the phantom experiments, the 3D localization error is < 1 mm on average and < 1.5 mm at the 95th percentile in the lung and pancreas cases for all three scenarios. The difference in 3D localization error for different scenarios is small and is not
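
    The Bayesian update is simple enough to sketch: a Gaussian prior on the unresolved coordinate (from setup imaging) is combined with the likelihood of the measured 2D position, and the posterior is maximized over a grid of candidates. In the Python sketch below, 'project' is an assumed helper that maps a candidate depth to the expected 2D image position for the current gantry angle; it is not part of the published algorithm's interface.

        import numpy as np

        def map_unresolved_depth(z_grid, prior_mu, prior_sigma,
                                 project, meas_uv, sigma_uv):
            """MAP estimate of the coordinate along the imaging ray.
            z_grid: candidate depths; meas_uv: measured 2D tumor location."""
            log_prior = -0.5 * ((z_grid - prior_mu) / prior_sigma) ** 2
            pred = np.array([project(z) for z in z_grid])        # (N, 2) predicted pixels
            resid2 = ((pred - np.asarray(meas_uv)) ** 2).sum(axis=1)
            log_like = -0.5 * resid2 / sigma_uv ** 2
            return z_grid[np.argmax(log_prior + log_like)]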

  12. Small SWAP 3D imaging flash ladar for small tactical unmanned air systems

    Science.gov (United States)

    Bird, Alan; Anderson, Scott A.; Wojcik, Michael; Budge, Scott E.

    2015-05-01

    The Space Dynamics Laboratory (SDL), working with the Naval Research Laboratory (NRL) and industry leaders Advanced Scientific Concepts (ASC) and Hood Technology Corporation, has developed a small-SWAP (size, weight, and power) 3D imaging flash ladar (LAser Detection And Ranging) sensor system concept design for small tactical unmanned air systems (STUAS). The design utilizes an ASC 3D flash ladar camera and laser in a Hood Technology gyro-stabilized gimbal system. The design is an autonomous, intelligent, geo-aware sensor system that supplies real-time 3D terrain and target images. Flash ladar and visible camera data are processed at the sensor using a custom digitizer/frame grabber with compression. Mounted in the aft housing are the power, controls, processing computers, and GPS/INS. The onboard processor controls pointing and handles image data, detection algorithms and queuing. The small-SWAP 3D imaging flash ladar sensor system generates georeferenced terrain and target images with a low probability of false return. Performance of the system is modeled using LadarSIM, a MATLAB®- and Simulink®-based ladar system simulator designed and developed by the Center for Advanced Imaging Ladar (CAIL) at Utah State University. We present the concept design and modeled performance predictions.

  13. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets.

    Science.gov (United States)

    Scharfe, Michael; Pielot, Rainer; Schreiber, Falk

    2010-01-11

    Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.

  14. Camera selection for real-time in vivo radiation treatment verification systems using Cherenkov imaging.

    Science.gov (United States)

    Andreozzi, Jacqueline M; Zhang, Rongxiao; Glaser, Adam K; Jarvis, Lesley A; Pogue, Brian W; Gladstone, David J

    2015-02-01

    The aim was to identify achievable camera performance and hardware needs in a clinical Cherenkov imaging system for real-time, in vivo monitoring of the surface beam profile on patients, providing novel visual information, documentation, and possible treatment verification for clinicians. Complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), intensified charge-coupled device (ICCD), and electron-multiplying intensified charge-coupled device (EM-ICCD) cameras were investigated to determine Cherenkov imaging performance in a clinical radiotherapy setting, with one emphasis on the maximum supportable frame rate. Where possible, the image intensifier was synchronized using a pulse signal from the Linac in order to image under room lighting conditions comparable to patient treatment scenarios. A solid water phantom irradiated with a 6 MV photon beam was imaged by the cameras to evaluate the maximum frame rate for adequate Cherenkov detection. Adequate detection was defined as an average electron count in the background-subtracted Cherenkov image region of interest in excess of 0.5% (327 counts) of the 16-bit maximum electron count value. Additionally, an ICCD and an EM-ICCD were each used clinically to image two patients undergoing whole-breast radiotherapy to compare the clinical advantages and limitations of each system. Intensifier-coupled cameras were required for imaging Cherenkov emission on the phantom surface with ambient room lighting; standalone CMOS and CCD cameras were not viable. The EM-ICCD was able to collect images from a single Linac pulse delivering less than 0.05 cGy of dose at 30 frames/s (fps) and a pixel resolution of 512 × 512, compared to an ICCD which was limited to 4.7 fps at 1024 × 1024 resolution. An intensifier with higher quantum efficiency at the entrance photocathode in the red wavelengths [30% quantum efficiency (QE) vs the previous 19%] promises at least 8.6 fps at a resolution of 1024 × 1024 and lower monetary cost than the EM-ICCD. The

  15. Performance analysis for automated gait extraction and recognition in multi-camera surveillance

    OpenAIRE

    Goffredo, Michela; Bouchrika, Imed; Carter, John N.; Nixon, Mark S.

    2010-01-01

    Many studies have confirmed that gait analysis can be used as a new biometric. In this research, gait analysis is deployed for people identification in multi-camera surveillance scenarios. We present a new method for viewpoint-independent markerless gait analysis that does not require camera calibration and works with a wide range of walking directions. These properties make the proposed method particularly suitable for gait identification in real surveillance scenarios where people and thei...

  16. Optimal transcostal high-intensity focused ultrasound with combined real-time 3D movement tracking and correction

    International Nuclear Information System (INIS)

    Marquet, F; Aubry, J F; Pernot, M; Fink, M; Tanter, M

    2011-01-01

    Recent studies have demonstrated the feasibility of transcostal high intensity focused ultrasound (HIFU) treatment in liver. However, two factors limit thermal necrosis of the liver through the ribs: the energy deposition at focus is decreased by the respiratory movement of the liver and the energy deposition on the skin is increased by the presence of highly absorbing bone structures. Ex vivo ablations were conducted to validate the feasibility of a transcostal real-time 3D movement tracking and correction mode. Experiments were conducted through a chest phantom made of three human ribs immersed in water and were placed in front of a 300 element array working at 1 MHz. A binarized apodization law introduced recently in order to spare the rib cage during treatment has been extended here with real-time electronic steering of the beam. Thermal simulations have been conducted to determine the steering limits. In vivo 3D-movement detection was performed on pigs using an ultrasonic sequence. The maximum error on the transcostal motion detection was measured to be 0.09 ± 0.097 mm on the anterior–posterior axis. Finally, a complete sequence was developed combining real-time 3D transcostal movement correction and spiral trajectory of the HIFU beam, allowing the system to treat larger areas with optimized efficiency. Lesions as large as 1 cm in diameter have been produced at focus in excised liver, whereas no necroses could be obtained with the same emitted power without correcting the movement of the tissue sample.

  17. A complete system for 3D reconstruction of roots for phenotypic analysis.

    Science.gov (United States)

    Kumar, Pankaj; Cai, Jinhai; Miklavcic, Stanley J

    2015-01-01

    Here we present a complete system for 3D reconstruction of roots grown in a transparent gel medium or washed and suspended in water. The system is capable of being fully automated as it is self-calibrating. The system starts with detection of root tips in root images from an image sequence generated by a turntable motion. Root tips are detected using the statistics of Zernike moments on image patches centred on high-curvature points on the root boundary and the Bayes classification rule. The detected root tips are tracked in the image sequence using a multi-target tracking algorithm. Conics are fitted to the root tip trajectories using a novel ellipse fitting algorithm which weighs the data points by their eccentricity. The conics projected from the circular trajectory have a complex conjugate intersection, which is the image of the circular points. The circular points constrain the image of the absolute conic, which is directly related to the internal parameters of the camera. The pose of the camera is computed from the image of the rotation axis and the horizon. The silhouettes of the roots and the camera parameters are used to reconstruct the 3D voxel model of the roots. We show results of real 3D reconstructions of roots which are detailed and realistic enough for phenotypic analysis.
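
    A minimal sketch of the conic-fitting step described above: an algebraic least-squares fit of a conic to tracked tip positions, with optional per-point weights standing in for the paper's eccentricity weighting (the weighting scheme and function names are illustrative assumptions, not the authors' exact algorithm):

        import numpy as np

        def fit_conic(x, y, w=None):
            """Weighted algebraic least-squares fit of a conic
            a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0."""
            if w is None:
                w = np.ones_like(x, dtype=float)
            # Design matrix: one row per point, one column per conic term.
            D = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
            D = D * w[:, None]  # apply the per-point weights
            # The coefficient vector is the right singular vector belonging
            # to the smallest singular value (solution defined up to scale).
            _, _, Vt = np.linalg.svd(D.astype(float))
            return Vt[-1]  # (a, b, c, d, e, f)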

  18. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    Science.gov (United States)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

    In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ADSA) is proposed for real-time 3-dimensional (3D) processing. The proposed algorithm can reduce the processing time of disparity estimation by selecting an adaptive disparity search range, and it can also increase the quality of the 3D imaging. That is, by adaptively predicting the mutual correlation between the stereo image pair using the proposed algorithm, the bandwidth of the stereo input image pair can be compressed to the level of a conventional 2D image, and a predicted image can also be effectively reconstructed using a reference image and disparity vectors. Experiments with the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of a reconstructed image by about 4.8 dB and reduces the synthesizing time of a reconstructed image by about 7.02 s compared with conventional algorithms.
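
    A sketch of the core idea: block matching whose per-pixel disparity search range is narrowed around the previously estimated disparity. Block size, range limits and the SAD cost below are assumptions for illustration, not the paper's exact ADSA parameters:

        import numpy as np

        def block_match_adaptive(left, right, block=5, full_range=64, local_range=8):
            """Disparity map via block matching with an adaptive search range:
            the first pixel of each row searches the full range; later pixels
            search only a small window around the previous disparity."""
            h, w = left.shape
            half = block // 2
            disp = np.zeros((h, w), dtype=np.int32)
            for y in range(half, h - half):
                prev = 0
                for x in range(half, w - half):
                    patch = left[y-half:y+half+1, x-half:x+half+1].astype(np.int32)
                    lo = max(0, prev - local_range)
                    hi = min(full_range, prev + local_range) if x > half else full_range
                    best, best_sad = 0, np.inf
                    for d in range(lo, hi + 1):
                        if x - half - d < 0:
                            break  # candidate window would leave the image
                        cand = right[y-half:y+half+1, x-half-d:x+half+1-d].astype(np.int32)
                        sad = np.abs(patch - cand).sum()  # sum of absolute differences
                        if sad < best_sad:
                            best, best_sad = d, sad
                    disp[y, x] = prev = best
            return disp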

  19. Real-Time Multi-Target Localization from Unmanned Aerial Vehicles

    Directory of Open Access Journals (Sweden)

    Xuan Wang

    2016-12-01

    Full Text Available In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on a UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multiple targets are calculated using the homogeneous coordinate transformation. On this basis, two methods which can improve the accuracy of the multi-target localization are proposed: (1) a real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude was 1140 m. The multi-target localization results are within the range of allowable error. After applying the lens distortion correction method to a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, the CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs which need multi-target geo-location functions.
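
    The RLS stage can be sketched with the textbook recursive least squares update; the regressor layout and forgetting factor below are assumptions standing in for the paper's dead-reckoning-driven filter:

        import numpy as np

        class RLS:
            """Recursive least squares filter, sketched as the kind of smoother
            applied to per-image target coordinate measurements."""
            def __init__(self, n, lam=0.98, delta=100.0):
                self.w = np.zeros(n)        # estimated parameters
                self.P = np.eye(n) * delta  # inverse correlation matrix
                self.lam = lam              # forgetting factor (assumed)
            def update(self, x, d):
                # x: regressor vector, d: new (noisy) measurement
                Px = self.P @ x
                k = Px / (self.lam + x @ Px)   # gain vector
                err = d - self.w @ x           # a-priori error
                self.w += k * err
                self.P = (self.P - np.outer(k, Px)) / self.lam
                return self.w @ x              # filtered estimate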

  20. GENERATING ACCURATE 3D MODELS OF ARCHITECTURAL HERITAGE STRUCTURES USING LOW-COST CAMERA AND OPEN SOURCE ALGORITHMS

    Directory of Open Access Journals (Sweden)

    M. Zacharek

    2017-05-01

    Full Text Available These studies have been conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images were executed. In the research, the OSM Bundler, VisualSFM software, and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.

  1. 3D video

    CERN Document Server

    Lucas, Laurent; Loscos, Céline

    2013-01-01

    While 3D vision has existed for many years, the use of 3D cameras and video-based modeling by the film industry has induced an explosion of interest in 3D acquisition technology, 3D content and 3D displays. As such, 3D video has become one of the new technology trends of this century. The chapters in this book cover a large spectrum of areas connected to 3D video, which are presented both theoretically and technologically, while taking into account both physiological and perceptual aspects. Stepping away from traditional 3D vision, the authors, all currently involved in these areas, provide th

  2. 2.5D Multi-View Gait Recognition Based on Point Cloud Registration

    Science.gov (United States)

    Tang, Jin; Luo, Jian; Tjahjadi, Tardi; Gao, Yan

    2014-01-01

    This paper presents a method for modeling a 2.5-dimensional (2.5D) human body and extracting the gait features for identifying the human subject. To achieve view-invariant gait recognition, a multi-view synthesizing method based on point cloud registration (MVSM) to generate multi-view training galleries is proposed. The concept of a density and curvature-based Color Gait Curvature Image is introduced to map 2.5D data onto a 2D space to enable data dimension reduction by discrete cosine transform and 2D principal component analysis. Gait recognition is achieved via a 2.5D view-invariant gait recognition method based on point cloud registration. Experimental results on the in-house database captured by a Microsoft Kinect camera show a significant performance gain when using MVSM. PMID:24686727

  3. Multi-Scale Modeling of an Integrated 3D Braided Composite with Applications to Helicopter Arm

    Science.gov (United States)

    Zhang, Diantang; Chen, Li; Sun, Ying; Zhang, Yifan; Qian, Kun

    2017-10-01

    A study is conducted with the aim of developing a multi-scale analytical method for designing a composite helicopter arm with a three-dimensional (3D) five-directional braided structure. Based on the analysis of the 3D braided microstructure, multi-scale finite element modeling is developed. Finite element analysis of the load capacity of the 3D five-directional braided composite helicopter arm is carried out using the software ABAQUS/Standard. The influences of the braiding angle and loading condition on the stress and strain distribution of the helicopter arm are simulated. The results show that the proposed multi-scale method is capable of accurately predicting the mechanical properties of 3D braided composites, validated by comparison with the stress-strain curves of meso-scale RVCs. Furthermore, it is found that the braiding angle is an important factor affecting the mechanical properties of the 3D five-directional braided composite helicopter arm. Based on the optimized structure parameters, the nearly net-shaped composite helicopter arm is fabricated using a novel resin transfer moulding (RTM) process.

  4. A method of multi-view intraoral 3D measurement

    Science.gov (United States)

    Zhao, Huijie; Wang, Zhen; Jiang, Hongzhi; Xu, Yang; Lv, Peijun; Sun, Yunchun

    2015-02-01

    In dental restoration, it is important to achieve a high-accuracy digital impression. Most existing intraoral measurement systems can only measure a tooth from a single view. Therefore, to acquire complete data for a tooth, scans from multiple directions and data stitching based on surface features are needed, which increases the measurement duration and affects the measurement accuracy. In this paper, we introduce a fringe-projection-based multi-view intraoral measurement system. It can acquire 3D data of the occlusal surface, the buccal surface and the lingual surface of a tooth synchronously by using a sensor with three mirrors, which aim at the three surfaces respectively and thus expand the measuring area. The constant relationship between the three mirrors is calibrated before measurement and helps stitch the data clouds acquired through different mirrors accurately. The system can therefore obtain the 3D data of a tooth without the need to measure it from different directions many times. Experiments demonstrated the feasibility and reliability of this miniaturized measurement system.

  5. Fast multi-core based multimodal registration of 2D cross-sections and 3D datasets

    Directory of Open Access Journals (Sweden)

    Pielot Rainer

    2010-01-01

    Full Text Available Abstract Background Solving bioinformatics tasks often requires extensive computational power. Recent trends in processor architecture combine multiple cores into a single chip to improve overall performance. The Cell Broadband Engine (CBE), a heterogeneous multi-core processor, provides power-efficient and cost-effective high-performance computing. One application area is image analysis and visualisation, in particular registration of 2D cross-sections into 3D image datasets. Such techniques can be used to put different image modalities into spatial correspondence, for example, 2D images of histological cuts into morphological 3D frameworks. Results We evaluate the CBE-driven PlayStation 3 as a high performance, cost-effective computing platform by adapting a multimodal alignment procedure to several characteristic hardware properties. The optimisations are based on partitioning, vectorisation, branch reducing and loop unrolling techniques with special attention to 32-bit multiplies and limited local storage on the computing units. We show how a typical image analysis and visualisation problem, the multimodal registration of 2D cross-sections and 3D datasets, benefits from the multi-core based implementation of the alignment algorithm. We discuss several CBE-based optimisation methods and compare our results to standard solutions. More information and the source code are available from http://cbe.ipk-gatersleben.de. Conclusions The results demonstrate that the CBE processor in a PlayStation 3 accelerates computational intensive multimodal registration, which is of great importance in biological/medical image processing. The PlayStation 3 as a low cost CBE-based platform offers an efficient option to conventional hardware to solve computational problems in image processing and bioinformatics.

  6. Euratom multi-camera optical surveillance system (EMOSS) - a digital solution

    International Nuclear Information System (INIS)

    Otto, P.; Wagner, H.G.; Taillade, B.; Pryck, C. de.

    1991-01-01

    In 1989 the Euratom Safeguards Directorate of the Commission of the European Communities drew up functional and draft technical specifications for a new fully digital multi-camera optical surveillance system. HYMATOM of Castries designed and built a prototype unit for laboratory and field tests. This paper reports on the system design and first test results.

  7. Illustrative visualization of 3D city models

    Science.gov (United States)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  8. Multiple Sensor Camera for Enhanced Video Capturing

    Science.gov (United States)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has been drastically improved in response to the current demand for high-quality digital images; a digital still camera, for example, has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and a high frame rate are incompatible in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos, and we also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  9. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    Science.gov (United States)

    Qi, Jin; Yang, Zhiyong

    2014-01-01

    Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area is performed on traditional two-dimensional (2D) videos, and both global and local methods have been used. Since 2D videos are sensitive to changes of lighting condition, view angle, and scale, researchers have begun to explore applications of 3D information in human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually don't perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected to the dictionaries and a set of sparse histograms of the projection coefficients is constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.
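
    The three steps can be sketched with off-the-shelf components; for brevity a single shared dictionary is used here where the paper learns one dictionary per activity, and the shapes, bin counts and ICA/SVM choices are illustrative assumptions:

        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.svm import SVC

        def histograms(ica, volumes, n_bins=10):
            """Step 2: project space-time volumes onto the dictionary and
            build a histogram of projection coefficients per sample."""
            coeffs = ica.transform(volumes)
            coeffs = coeffs / (np.abs(coeffs).max() + 1e-12)  # assumed normalization
            return np.stack([np.histogram(c, bins=n_bins, range=(-1, 1))[0]
                             for c in coeffs])

        def train(volumes, labels):
            """volumes: (n_samples, n_features) flattened joint-movement volumes."""
            ica = FastICA(n_components=64, random_state=0).fit(volumes)        # step 1
            clf = SVC(kernel='linear').fit(histograms(ica, volumes), labels)   # step 3
            return ica, clf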

  10. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    Directory of Open Access Journals (Sweden)

    Jin Qi

    Full Text Available Real-time human activity recognition is essential for human-robot interactions for assisted healthy independent living. Most previous work in this area is performed on traditional two-dimensional (2D) videos, and both global and local methods have been used. Since 2D videos are sensitive to changes of lighting condition, view angle, and scale, researchers have begun to explore applications of 3D information in human activity understanding in recent years. Unfortunately, features that work well on 2D videos usually don't perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected to the dictionaries and a set of sparse histograms of the projection coefficients is constructed as feature representations of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms the state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.

  11. Real-Time Large Scale 3d Reconstruction by Fusing Kinect and Imu Data

    Science.gov (United States)

    Huai, J.; Zhang, Y.; Yilmaz, A.

    2015-08-01

    Kinect-style RGB-D cameras have been used to build large-scale dense 3D maps of indoor environments. These maps can serve many purposes, such as robot navigation and augmented reality. However, generating dense 3D maps of large-scale environments is still very challenging. In this paper, we present a mapping system for 3D reconstruction that fuses measurements from a Kinect and an inertial measurement unit (IMU) to estimate motion. Our major achievements include: (i) large-scale consistent 3D reconstruction realized by volume shifting and loop closure; (ii) the coarse-to-fine iterative closest point (ICP) algorithm, SIFT odometry, and IMU odometry combined to robustly and precisely estimate pose. In particular, ICP runs routinely to track the Kinect motion. If ICP fails in planar areas, the SIFT odometry provides an incremental motion estimate. If both ICP and the SIFT odometry fail, e.g., upon abrupt motion or inadequate features, the incremental motion is estimated by the IMU. Additionally, the IMU also observes the roll and pitch angles, which can reduce long-term drift of the sensor assembly. In experiments on a consumer laptop, our system estimates motion at 8 Hz on average while integrating color images into the local map and saving volumes of meshes concurrently. Moreover, it is immune to tracking failures and has smaller drift than state-of-the-art systems in large-scale reconstruction.
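
    The described fallback cascade can be sketched as follows; the three estimator objects and their (pose, ok) interface are assumptions used only to show the control flow:

        import numpy as np

        def estimate_pose(frame, prev_pose, icp, sift_odom, imu):
            """Fallback cascade: ICP first, SIFT odometry if ICP fails,
            IMU integration as the last resort. All estimators are assumed
            to return (4x4 pose matrix, success flag)."""
            pose, ok = icp.track(frame, prev_pose)
            if not ok:                    # e.g. planar, geometry-poor areas
                pose, ok = sift_odom.track(frame, prev_pose)
            if not ok:                    # e.g. abrupt motion, inadequate features
                pose = prev_pose @ imu.delta_pose()  # integrate inertial data
            return pose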

  12. a Comparison among Different Optimization Levels in 3d Multi-Sensor Models. a Test Case in Emergency Context: 2016 Italian Earthquake

    Science.gov (United States)

    Chiabrando, F.; Sammartano, G.; Spanò, A.

    2017-02-01

    In sudden emergency contexts that affect urban centres and built heritage, the latest Geomatics techniques must meet the demands of damage documentation, risk assessment, management and data sharing as efficiently as possible, in relation to the danger conditions, the accessibility constraints of the areas and the tight deadlines. In recent times, unmanned aerial vehicles (UAVs) equipped with cameras have been increasingly involved in aerial survey and reconnaissance missions, and they have proven very cost-effective for 3D documentation and preliminary damage assessment. UAVs equipped with low-cost sensors should become suitable for every documentation situation, above all in frameworks of damage and uncertainty. Rapid acquisition and low-cost sensors are attractive features, although they may come at the cost of more time-consuming processing. The paper analyzes and attempts to classify the information content of 3D aerial and terrestrial models, and the importance of the metric and non-metric information that can be extracted from them for further uses, such as structural analysis. The test area is an experience of Team Direct from Politecnico di Torino in central Italy, where a strong earthquake occurred in August 2016. This study is carried out on a stand-alone damaged building in Pescara del Tronto (AP), with a multi-sensor 3D survey. The aim is to evaluate the contribution of terrestrial and aerial quick documentation by a SLAM-based LiDAR and a camera-equipped multirotor UAV, for a first reconnaissance inspection and modelling in terms of level of detail and metric and non-metric information.

  13. Tracking algorithms for multi-hexagonal assemblies (2D and 3D)

    International Nuclear Information System (INIS)

    Prabha, Hem; Marleau, Guy; Hébert, Alain

    2014-01-01

    Highlights: • We present the method of computations of 2D and 3D fluxes in hexagonal assemblies. • Computation of fluxes requires computation of track lengths. • Equations are developed (in 2D and 3D) and are implemented in a program HX7. • The program HX7 is implemented in the NXT module of the code DRAGON. • The tracks are plotted and fluxes are compared with the EXCELT module of DRAGON. - Abstract: Background: There has been a continuous effort to design new reactors and study these reactors under different conditions. Some of these reactors have fuel pins arranged in hexagonal pitch. To study these reactors, development of computational methods and computer codes is required. For this purpose, we have developed algorithms to track two dimensional and three dimensional cluster geometries. These algorithms have been implemented in a subprogram HX7, that is implemented in the code DRAGON (Version 3.06F) to compute neutron flux distributions in these systems. Methods: Computation of the neutron flux distribution requires solution of neutron transport equation. While solving this equation, by using Carlvik’s method of collision probabilities, computation of tracks in the hexagonal geometries is required. In this paper we present equations that we have developed for the computation of tracks in two dimensional (2D) and three dimensional (3D) multi-hexagonal assemblies (with two rotational orientations). These equations have been implemented in a subprogram HX7, to compute tracks in seven hexagonal assemblies. The subprogram HX7 has been implemented in the NXT module of the DRAGON code, where tracks in the pins are computed. Results: The results of our algorithms NXT(+HX7) have been compared with the results obtained by the EXCELT module of DRAGON (Version 3.06F). Conclusions: We find that all the fluxes in 2D and fluxes in the outer pin (3D) are converging to their 3rd decimal places, in both the modules EXCELT and NXT(+HX7). For other regions 3D fluxes

  14. Real-time high resolution 3D imaging of the lyme disease spirochete adhering to and escaping from the vasculature of a living host.

    Directory of Open Access Journals (Sweden)

    Tara J Moriarty

    2008-06-01

    Full Text Available Pathogenic spirochetes are bacteria that cause a number of emerging and re-emerging diseases worldwide, including syphilis, leptospirosis, relapsing fever, and Lyme borreliosis. They navigate efficiently through dense extracellular matrix and cross the blood-brain barrier by unknown mechanisms. Due to their slender morphology, spirochetes are difficult to visualize by standard light microscopy, impeding studies of their behavior in situ. We engineered a fluorescent infectious strain of Borrelia burgdorferi, the Lyme disease pathogen, which expressed green fluorescent protein (GFP). Real-time 3D and 4D quantitative analysis of fluorescent spirochete dissemination from the microvasculature of living mice at high resolution revealed that dissemination was a multi-stage process that included transient tethering-type associations, short-term dragging interactions, and stationary adhesion. Stationary adhesions and extravasating spirochetes were most commonly observed at endothelial junctions, and translational motility of spirochetes appeared to play an integral role in transendothelial migration. To our knowledge, this is the first report of high resolution 3D and 4D visualization of dissemination of a bacterial pathogen in a living mammalian host, and provides the first direct insight into spirochete dissemination in vivo.

  15. Minimal Camera Networks for 3D Image Based Modeling of Cultural Heritage Objects

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-01-01

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue “Lamassu”. Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883–859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm. PMID:24670718

  16. Minimal camera networks for 3D image based modeling of cultural heritage objects.

    Science.gov (United States)

    Alsadik, Bashar; Gerke, Markus; Vosselman, George; Daham, Afrah; Jasim, Luma

    2014-03-25

    3D modeling of cultural heritage objects like artifacts, statues and buildings is nowadays an important tool for virtual museums, preservation and restoration. In this paper, we introduce a method to automatically design a minimal imaging network for the 3D modeling of cultural heritage objects. This becomes important for reducing the image capture time and processing when documenting large and complex sites. Moreover, such a minimal camera network design is desirable for imaging non-digitally documented artifacts in museums and other archeological sites to avoid disturbing the visitors for a long time and/or moving delicate precious objects to complete the documentation task. The developed method is tested on the Iraqi famous statue "Lamassu". Lamassu is a human-headed winged bull of over 4.25 m in height from the era of Ashurnasirpal II (883-859 BC). Close-range photogrammetry is used for the 3D modeling task where a dense ordered imaging network of 45 high resolution images were captured around Lamassu with an object sample distance of 1 mm. These images constitute a dense network and the aim of our study was to apply our method to reduce the number of images for the 3D modeling and at the same time preserve pre-defined point accuracy. Temporary control points were fixed evenly on the body of Lamassu and measured by using a total station for the external validation and scaling purpose. Two network filtering methods are implemented and three different software packages are used to investigate the efficiency of the image orientation and modeling of the statue in the filtered (reduced) image networks. Internal and external validation results prove that minimal image networks can provide highly accurate records and efficiency in terms of visualization, completeness, processing time (>60% reduction) and the final accuracy of 1 mm.

  17. AM06-25-004 3-D analysis of sports ball flight trajectory by high speed camera

    OpenAIRE

    Mizota, Taketo; Ogura, Satoki (Fukuoka Institute of Technology, FIT; Ogura currently with Hitachi Construction Machinery Alva Co., Ltd.)

    2006-01-01

    Outdoor experiments of volleyball and soccer ball free fall were conducted at the Aso Choyo big bridge. A high-speed camera captured the 3-D trajectories of the ball fall process. Image processing of these pictures and Newton's equation of motion were used to determine the unsteady lift C_L and side force C_S. The Strouhal number of these sports balls, estimated from the unsteady air forces, implies that the wavy motion of the balls is a kind of flutter phenomenon caused by wake perturbation an...

  18. Automatic co-registration of 3D multi-sensor point clouds

    Science.gov (United States)

    Persad, Ravi Ancil; Armenakis, Costas

    2017-08-01

    We propose an approach for the automatic coarse alignment of 3D point clouds which have been acquired from various platforms. The method is based on 2D keypoint matching performed on height map images of the point clouds. Initially, a multi-scale wavelet keypoint detector is applied, followed by adaptive non-maxima suppression. A scale, rotation and translation-invariant descriptor is then computed for all keypoints. The descriptor is built using the log-polar mapping of Gabor filter derivatives in combination with the so-called Rapid Transform. In the final step, source and target height map keypoint correspondences are determined using a bi-directional nearest neighbour similarity check, together with a threshold-free modified-RANSAC. Experiments with urban and non-urban scenes are presented and results show scale errors ranging from 0.01 to 0.03, 3D rotation errors in the order of 0.2° to 0.3° and 3D translation errors from 0.09 m to 1.1 m.

  19. Taking it all in : special camera films in 3-D

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, L.

    2006-07-15

    Details of a 360-degree digital camera designed by Immersive Media Telemmersion were presented. The camera has been employed extensively in the United States for homeland security and intelligence-gathering purposes. In Canada, the cameras are now being used by the oil and gas industry. The camera has 11 lenses pointing in all directions and generates high resolution movies that can be analyzed frame-by-frame from every angle. Global positioning satellite data can be gathered during filming so that operators can pinpoint any location. The 11 video streams use more than 100 million pixels per second. After filming, the system displays synchronized, high-resolution video streams, capturing a full motion spherical world complete with directional sound. It can be viewed on a computer monitor, video screen, or head-mounted display. Pembina Pipeline Corporation recently used the Telemmersion system to plot a proposed pipeline route between Alberta's Athabasca region and Edmonton. It was estimated that more than $50,000 was saved by using the camera. The resulting video has been viewed by Pembina's engineering, environmental and geotechnical groups who were able to accurately note the route's river crossings. The cameras were also used to estimate timber salvage. Footage was then given to the operations group, to help staff familiarize themselves with the terrain, the proposed route's right-of-way, and the number of water crossings and access points. Oil and gas operators have also used the equipment on a recently acquired block of land to select well sites. 4 figs.

  20. Real-time registration of 3D to 2D ultrasound images for image-guided prostate biopsy.

    Science.gov (United States)

    Gillies, Derek J; Gardi, Lori; De Silva, Tharindu; Zhao, Shuang-Ren; Fenster, Aaron

    2017-09-01

    During image-guided prostate biopsy, needles are targeted at tissues that are suspicious of cancer to obtain specimen for histological examination. Unfortunately, patient motion causes targeting errors when using an MR-transrectal ultrasound (TRUS) fusion approach to augment the conventional biopsy procedure. This study aims to develop an automatic motion correction algorithm approaching the frame rate of an ultrasound system to be used in fusion-based prostate biopsy systems. Two modes of operation have been investigated for the clinical implementation of the algorithm: motion compensation using a single user-initiated correction performed prior to biopsy, and real-time continuous motion compensation performed automatically as a background process. Retrospective 2D and 3D TRUS patient images acquired prior to biopsy gun firing were registered using an intensity-based algorithm utilizing normalized cross-correlation and Powell's method for optimization. 2D and 3D images were downsampled and cropped to estimate the optimal amount of image information that would perform registrations quickly and accurately. The optimal search order during optimization was also analyzed to avoid local optima in the search space. Error in the algorithm was computed using target registration errors (TREs) from manually identified homologous fiducials in a clinical patient dataset. The algorithm was evaluated for real-time performance using the two different modes of clinical implementation by way of user-initiated and continuous motion compensation methods on a tissue-mimicking prostate phantom. After implementation in a TRUS-guided system with an image downsampling factor of 4, the proposed approach resulted in a mean ± std TRE and computation time of 1.6 ± 0.6 mm and 57 ± 20 ms respectively. The user-initiated mode performed registrations for in-plane, out-of-plane, and roll motions with computation times of 108 ± 38 ms, 60 ± 23 ms, and 89 ± 27 ms, respectively, and corresponding
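
    A sketch of the intensity-based registration loop, combining a normalized cross-correlation metric with SciPy's Powell optimizer as the abstract describes; reslice() is an assumed helper that extracts a 2D plane from the 3D volume for a given rigid-motion parameter vector:

        import numpy as np
        from scipy.optimize import minimize

        def ncc(a, b):
            """Normalized cross-correlation between two images of equal shape."""
            a = (a - a.mean()) / (a.std() + 1e-12)
            b = (b - b.mean()) / (b.std() + 1e-12)
            return (a * b).mean()

        def register(fixed_2d, volume_3d, reslice, x0):
            """Maximize NCC over the 6 rigid-motion parameters using
            Powell's method; reslice(volume_3d, p) is a hypothetical helper."""
            cost = lambda p: -ncc(fixed_2d, reslice(volume_3d, p))
            return minimize(cost, x0, method='Powell').x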

  1. In Situ 3D Segmentation of Individual Plant Leaves Using a RGB-D Camera for Agricultural Automation

    Directory of Open Access Journals (Sweden)

    Chunlei Xia

    2015-08-01

    Full Text Available In this paper, we address the challenging task of 3D segmentation of individual plant leaves under occlusion in complicated natural scenes. Depth data of plant leaves are introduced to improve the robustness of plant leaf segmentation. A low-cost RGB-D camera is utilized to capture depth and color images in the field. Mean shift clustering is applied to segment plant leaves in the depth image. Plant leaves are extracted from the natural background by examining the vegetation of the candidate segments produced by mean shift. Subsequently, individual leaves are segmented from occlusions by active contour models. Automatic initialization of the active contour models is implemented by calculating the center of divergence from the gradient vector field of the depth image. The proposed segmentation scheme is tested through experiments under greenhouse conditions. The overall segmentation rate is 87.97%, while the segmentation rates for single and occluded leaves are 92.10% and 86.67%, respectively. Approximately half of the experimental results show segmentation rates of individual leaves higher than 90%. Nevertheless, the proposed method is able to segment individual leaves from heavy occlusions.
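
    The first stage, mean shift clustering of the depth image, might look like the following; the (x, y, depth) feature layout, sub-sampling rate and bandwidth quantile are assumptions for tractability, not the paper's settings:

        import numpy as np
        from sklearn.cluster import MeanShift, estimate_bandwidth

        def segment_depth(depth):
            """Cluster valid depth pixels into candidate leaf segments."""
            ys, xs = np.nonzero(depth > 0)  # keep only pixels with valid depth
            feats = np.column_stack([xs, ys, depth[ys, xs]]).astype(float)
            feats = feats[::20]             # sub-sample for speed (assumption)
            bw = estimate_bandwidth(feats, quantile=0.1)
            labels = MeanShift(bandwidth=bw, bin_seeding=True).fit_predict(feats)
            return feats, labels            # candidate segments for later stages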

  2. ArtifactVis2: Managing real-time archaeological data in immersive 3D environments

    KAUST Repository

    Smith, Neil

    2013-10-01

    In this paper, we present a stereoscopic research and training environment for archaeologists called ArtifactVis2. This application enables the management and visualization of diverse types of cultural datasets within a collaborative virtual 3D system. The archaeologist is fully immersed in a large-scale visualization of on-going excavations. Massive 3D datasets are seamlessly rendered in real-time with field recorded GIS data, 3D artifact scans and digital photography. Dynamic content can be visualized and cultural analytics can be performed on archaeological datasets collected through a rigorous digital archaeological methodology. The virtual collaborative environment provides a menu driven query system and the ability to annotate, markup, measure, and manipulate any of the datasets. These features enable researchers to re-experience and analyze the minute details of an archaeological site's excavation. It enhances their visual capacity to recognize deep patterns and structures and perceive changes and reoccurrences. As a complement and development from previous work in the field of 3D immersive archaeological environments, ArtifactVis2 provides a GIS based immersive environment that taps directly into archaeological datasets to investigate cultural and historical issues of ancient societies and cultural heritage in ways not possible before. © 2013 IEEE.

  3. A multi-level surface rebalancing approach for efficient convergence acceleration of 3D full core multi-group fine grid nodal diffusion iterations

    International Nuclear Information System (INIS)

    Geemert, René van

    2014-01-01

    Highlights: • New type of multi-level rebalancing approach for nodal transport. • Generally improved and more mesh-independent convergence behavior. • Importance for intended regime of 3D pin-by-pin core computations. - Abstract: A new multi-level surface rebalancing (MLSR) approach has been developed, aimed at enabling an improved non-linear acceleration of nodal flux iteration convergence in 3D steady-state and transient reactor simulation. This development is meant specifically for anticipating computational needs for solving envisaged multi-group diffusion-like SPN calculations with enhanced mesh resolution (i.e. 3D multi-box up to 3D pin-by-pin grid). For the latter grid refinement regime, the previously available multi-level coarse mesh rebalancing (MLCMR) strategy has been observed to become increasingly inefficient with increasing 3D mesh resolution. Furthermore, for very fine 3D grids that feature a very fine axial mesh as well, non-convergence phenomena have been observed to emerge. In the verifications pursued up to now, these problems have been resolved by the new approach. The novelty arises from taking the interface current balance equations defined over all Cartesian box edges, instead of the nodal volume-integrated process-rate balance equation, as an appropriate restriction basis for setting up multi-level acceleration of fine grid interface current iterations. The new restriction strategy calls for the use of a newly derived set of adjoint spectral equations that are needed for computing a limited set of spectral response vectors per node. This enables a straightforward determination of group-condensed interface current spectral coupling operators that are of crucial relevance in the new rebalancing setup. Another novelty in the approach is a new variational method for computing the neutronic eigenvalue. Within this context, the latter is treated as a control parameter for driving another, newly defined and numerically more fundamental

  4. GPU acceleration towards real-time image reconstruction in 3D tomographic diffractive microscopy

    Science.gov (United States)

    Bailleul, J.; Simon, B.; Debailleul, M.; Liu, H.; Haeberlé, O.

    2012-06-01

    Phase microscopy techniques have regained interest as they allow the observation of unprepared specimens with excellent temporal resolution. Tomographic diffractive microscopy is an extension of holographic microscopy which permits 3D observations with a finer resolution than incoherent light microscopes. Specimens are imaged by a series of 2D holograms: their accumulation progressively fills the range of frequencies of the specimen in Fourier space, and a 3D inverse FFT eventually provides a spatial image of the specimen. Consequently, both acquisition and reconstruction are required to produce an image that could be a prelude to real-time control of the observed specimen. The MIPS Laboratory has built a tomographic diffractive microscope with an unsurpassed 130 nm resolution but a low imaging speed of no less than one minute, after which a high-end PC reconstructs the 3D image in 20 seconds. We now target an interactive system providing preview images during the acquisition for monitoring purposes. We first present a prototype implementing this solution on CPU: acquisition and reconstruction are tied in a producer-consumer scheme, sharing common data in CPU memory. We then present a prototype dispatching some reconstruction tasks to the GPU in order to take advantage of SIMD parallelization for the FFT and of higher bandwidth for the filtering operations. The CPU scheme takes 6 seconds for a 3D image update, while the GPU scheme can go down to 2 or even below 1 second depending on the GPU class. This opens opportunities for 4D imaging of living organisms or crystallization processes. We also consider the relevance of the GPU for 3D image interaction in our specific conditions.
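
    The reconstruction step itself reduces to an inverse 3D FFT over the accumulated Fourier-space coverage. A sketch using the NumPy API, with CuPy as an optional drop-in to illustrate why this stage maps well to a GPU (an assumption about the approach, not the authors' actual GPU code):

        import numpy as np
        try:
            import cupy as xp   # CuPy mirrors the NumPy API on the GPU
        except ImportError:
            xp = np             # fall back to the CPU

        def reconstruct(fourier_volume):
            """Turn the accumulated Fourier-space coverage (a complex 3D
            array) into a spatial intensity image of the specimen."""
            return xp.abs(xp.fft.ifftn(xp.fft.ifftshift(fourier_volume)))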

  5. Multi-material 3D Models for Temporal Bone Surgical Simulation.

    Science.gov (United States)

    Rose, Austin S; Kimbell, Julia S; Webster, Caroline E; Harrysson, Ola L A; Formeister, Eric J; Buchman, Craig A

    2015-07-01

    A simulated, multicolor, multi-material temporal bone model can be created using 3-dimensional (3D) printing that will prove both safe and beneficial in training for actual temporal bone surgical cases. As the process of additive manufacturing, or 3D printing, has become more practical and affordable, a number of applications for the technology in the field of Otolaryngology-Head and Neck Surgery have been considered. One area of promise is temporal bone surgical simulation. Three-dimensional representations of human temporal bones were created from temporal bone computed tomography (CT) scans using biomedical image processing software. Multi-material models were then printed and dissected in a temporal bone laboratory by attending and resident otolaryngologists. A 5-point Likert scale was used to grade the models for their anatomical accuracy and suitability as a simulation of cadaveric and operative temporal bone drilling. The models produced for this study demonstrate significant anatomic detail and a likeness to human cadaver specimens for drilling and dissection. Simulated temporal bones created by this process have potential benefit in surgical training, preoperative simulation for challenging otologic cases, and the standardized testing of temporal bone surgical skills. © The Author(s) 2015.

  6. Development of modular control software for construction 3D-printer

    Science.gov (United States)

    Bazhanov, A.; Yudin, D.; Porkhalo, V.

    2018-03-01

    This article discusses the approach to developing modular software for real-time control of an industrial construction 3D printer. The proposed structure of a two-level software solution is implemented for a robotic system that moves in a Cartesian coordinate system with multi-axis interpolation. An algorithm for the formation and analysis of a path is considered to enable the most effective control of printing through dynamic programming.

  7. Synthetic biology's tall order: Reconstruction of 3D, super resolution images of single molecules in real-time

    CSIR Research Space (South Africa)

    Henriques, R

    2010-08-31

    Full Text Available ...easy-to-use reconstruction software coupled with image acquisition. Here, we present QuickPALM, an ImageJ plugin, enabling real-time reconstruction of 3D super-resolution images during acquisition and drift correction. We illustrate its application by reconstructing Cy5...

  8. A practical approach for active camera coordination based on a fusion-driven multi-agent system

    Science.gov (United States)

    Bustamante, Alvaro Luis; Molina, José M.; Patricio, Miguel A.

    2014-04-01

    In this paper, we propose a multi-agent system architecture to manage spatially distributed active (or pan-tilt-zoom) cameras. Traditional video surveillance algorithms are of no use for active cameras, and we have to look at different approaches. Such multi-sensor surveillance systems have to be designed to solve two related problems: data fusion and coordinated sensor-task management. Generally, architectures proposed for the coordinated operation of multiple cameras are based on the centralisation of management decisions at the fusion centre. However, the existence of intelligent sensors capable of decision making brings with it the possibility of conceiving alternative decentralised architectures. This problem is approached by means of a MAS, integrating data fusion as an integral part of the architecture for distributed coordination purposes. This paper presents the MAS architecture and system agents.

  9. Introducing the depth transfer curve for 3D capture system characterization

    Science.gov (United States)

    Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas

    2011-03-01

    3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding scene depth. Scene depth is simulated through the strongest brain depth cue, namely retinal disparity. This can be achieved by capturing images with horizontally separated cameras: objects at different depths are projected with different horizontal displacements onto the left and right camera images. These images, when fed separately to either eye, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify the depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task, since the intended and/or unintended side effects of 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the depth capture capabilities of 3D cameras.
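
    The underlying geometry is the standard pinhole stereo relation: an object at depth Z projects with horizontal disparity d = f·B/Z for focal length f and camera baseline B, so depth can be recovered as Z = f·B/d. A worked example (values are illustrative, not from the paper):

        def depth_from_disparity(disparity_px, focal_px, baseline_m):
            """Standard pinhole stereo relation Z = f*B/d: depth in meters
            from disparity in pixels, focal length in pixels and baseline
            in meters."""
            return focal_px * baseline_m / disparity_px

        # e.g. f = 1000 px, B = 65 mm (eye-like baseline), d = 10 px -> Z = 6.5 m
        print(depth_from_disparity(10.0, 1000.0, 0.065))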

  10. Open 3D Projects

    Directory of Open Access Journals (Sweden)

    Felician ALECU

    2010-01-01

    Full Text Available Many professionals and 3D artists consider Blender as being the best open source solution for 3D computer graphics. The main features are related to modeling, rendering, shading, imaging, compositing, animation, physics and particles and realtime 3D/game creation.

  11. Creation of 3D Multi-Body Orthodontic Models by Using Independent Imaging Sensors

    Directory of Open Access Journals (Sweden)

    Armando Viviano Razionale

    2013-02-01

    Full Text Available In the field of dental health care, plaster models combined with 2D radiographs are widely used in clinical practice for orthodontic diagnoses. However, complex malocclusions can be better analyzed by exploiting 3D digital dental models, which allow virtual simulations and treatment planning processes. In this paper, dental data captured by independent imaging sensors are fused to create multi-body orthodontic models composed of teeth, oral soft tissues and alveolar bone structures. The methodology is based on integrating Cone-Beam Computed Tomography (CBCT) and surface structured light scanning. The optical scanner is used to reconstruct tooth crowns and soft tissues (visible surfaces) through the digitalization of both patients' mouth impressions and plaster casts. These data are also used to guide the segmentation of internal dental tissues by processing CBCT data sets. The 3D individual dental tissues obtained by the optical scanner and the CBCT sensor are fused within multi-body orthodontic models without human supervision to identify target anatomical structures. The final multi-body models represent valuable virtual platforms for clinical diagnostics and treatment planning.

  12. Demo: Distributed Real-Time Generative 3D Hand Tracking using Edge GPGPU Acceleration

    DEFF Research Database (Denmark)

    Qammaz, Ammar; Kosta, Sokol; Kyriazis, Nikolaos

    2018-01-01

    This work demonstrates a real-time 3D hand tracking application that runs via computation offloading. The proposed framework enables the application to run on low-end mobile devices such as laptops and tablets, despite the fact that they lack the sufficient hardware to perform the required computations locally. The network connection takes the place of a GPGPU accelerator and sharing resources with a larger workstation becomes the acceleration mechanism. The unique properties of a generative optimizer are examined and constitute a challenging use-case, since the requirement for real...

  13. Eyes on the Earth 3D

    Science.gov (United States)

    Kulikov, anton I.; Doronila, Paul R.; Nguyen, Viet T.; Jackson, Randal K.; Greene, William M.; Hussey, Kevin J.; Garcia, Christopher M.; Lopez, Christian A.

    2013-01-01

    Eyes on the Earth 3D software gives scientists, and the general public, a realtime, 3D interactive means of accurately viewing the real-time locations, speed, and values of recently collected data from several of NASA's Earth Observing Satellites using a standard Web browser (climate.nasa.gov/eyes). Anyone with Web access can use this software to see where the NASA fleet of these satellites is now, or where they will be up to a year in the future. The software also displays several Earth Science Data sets that have been collected on a daily basis. This application uses a third-party, 3D, realtime, interactive game engine called Unity 3D to visualize the satellites and is accessible from a Web browser.

  14. Modreg: A Modular Framework for RGB-D Image Acquisition and 3D Object Model Registration

    Directory of Open Access Journals (Sweden)

    Kornuta Tomasz

    2017-09-01

    Full Text Available RGB-D sensors became a standard in robotic applications requiring object recognition, such as object grasping and manipulation. A typical object recognition system relies on matching features extracted from RGB-D images retrieved from the robot sensors with the features of the object models. In this paper we present ModReg: a system for registration of 3D models of objects. The system consists of modular software associated with a multi-camera setup supplemented with an additional pattern projector, used for the registration of high-resolution RGB-D images. The objects are placed on a fiducial board with two dot patterns enabling extraction of the masks of the placed objects and estimation of their initial poses. The acquired dense point clouds constituting subsequent object views undergo pairwise registration and at the end are optimized with a graph-based technique derived from SLAM. The combination of all those elements resulted in a system able to generate consistent 3D models of objects.

  15. A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.

    Science.gov (United States)

    Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad

    2012-01-01

    The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable in spite of the considerably large deformations occurred. There was a good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be effectively used in real-time applications.

  16. Near Real-Time Ground-to-Ground Infrared Remote-Sensing Combination and Inexpensive Visible Camera Observations Applied to Tomographic Stack Emission Measurements

    Directory of Open Access Journals (Sweden)

    Philippe de Donato

    2018-04-01

    Full Text Available Evaluation of the environmental impact of gas plumes from stack emissions at the local level requires precise knowledge of the spatial development of the cloud, its evolution over time, and quantitative analysis of each gaseous component. With extensive developments, ground-based remote-sensing technologies are becoming increasingly relevant to such an application. The difficulty of determining the exact 3-D thickness of the gas plume in real time has meant that the various gas components are mainly expressed using correlation coefficients of gas occurrences and path concentration (ppm·m). This paper focuses on a synchronous and inexpensive multi-angled approach combining three high-resolution visible cameras (GoPro Hero3) and a scanning infrared (IR) gas system (SIGIS, Bruker). Measurements are performed at an NH3-emissive industrial site (NOVACARB Society, Laneuveville-devant-Nancy, France). Visible image data were processed by a first geometrical-reconstruction gOcad® protocol to build a 3-D envelope of the gas plume, which allows estimation of the plume's thickness corresponding to the 2-D infrared grid measurements. NH3 concentration data could thereby be expressed in ppm and have been interpolated using a second gOcad® interpolation algorithm, allowing a precise volume visualization of the NH3 distribution in the flue gas stream.

  17. Action Recognition Using 3D Histograms of Texture and A Multi-Class Boosting Classifier.

    Science.gov (United States)

    Zhang, Baochang; Yang, Yun; Chen, Chen; Yang, Linlin; Han, Jungong; Shao, Ling

    2017-10-01

Human action recognition is an important yet challenging task. This paper presents a low-cost descriptor called 3D histograms of texture (3DHoTs) to extract discriminant features from a sequence of depth maps. 3DHoTs are derived from projecting depth frames onto three orthogonal Cartesian planes, i.e., the frontal, side, and top planes, and thus compactly characterize the salient information of a specific action, on which texture features are calculated to represent the action. Besides this fast feature descriptor, a new multi-class boosting classifier (MBC) is also proposed to efficiently exploit different kinds of features in a unified framework for action classification. Compared with existing boosting frameworks, we add a new multi-class constraint to the objective function, which helps to maintain a better margin distribution by maximizing the mean of the margin while still minimizing its variance. Experiments on the MSRAction3D, MSRGesture3D, MSRActivity3D, and UTD-MHAD data sets demonstrate that the proposed system combining 3DHoTs and MBC is superior to the state of the art.
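
    To make the margin idea concrete, the sketch below computes multi-class margins and a mean-minus-variance style objective of the kind described above. It is a schematic reading of the objective, not the authors' MBC code; the `scores` array (per-class classifier outputs) and the trade-off weight `lam` are assumptions.

    ```python
    import numpy as np

    def margin_objective(scores, labels, lam=0.1):
        """scores: (n_samples, n_classes) ensemble outputs; labels: (n_samples,) class ids.
        Multi-class margin = score of the true class minus the best competing score.
        The objective rewards a large mean margin and penalizes margin variance."""
        n = scores.shape[0]
        true = scores[np.arange(n), labels]
        rival = scores.copy()
        rival[np.arange(n), labels] = -np.inf          # mask out the true class
        margins = true - rival.max(axis=1)
        return margins.mean() - lam * margins.var()    # to be maximized during boosting

    # toy usage with random scores
    rng = np.random.default_rng(0)
    s = rng.normal(size=(5, 4))
    print(margin_objective(s, np.array([0, 1, 2, 3, 0])))
    ```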

  18. Camera Trajectory from Wide Baseline Images

    Science.gov (United States)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

angle θ of its corresponding rays w.r.t. the optical axis as θ = ar/(1 + br²). After a successful calibration, we know the correspondence of the image points to the 3D optical rays in the coordinate system of the camera. The following steps aim at finding the transformation between the camera and the world coordinate systems, i.e. the pose of the camera in the 3D world, using 2D image matches. For computing 3D structure, we construct a set of tentative matches by detecting different affine covariant feature regions, including MSER, Harris Affine, and Hessian Affine, in the acquired images. These features are an alternative to the popular SIFT features and work comparably in our situation. Parameters of the detectors are chosen to limit the number of regions to 1-2 thousand per image. The detected regions are assigned local affine frames (LAF) and transformed into standard positions w.r.t. their LAFs. Discrete Cosine Descriptors are computed for each region in the standard position. Finally, mutual distances of all regions in one image and all regions in the other image are computed as the Euclidean distances of their descriptors, and tentative matches are constructed by selecting the mutually closest pairs. As opposed to methods using short baseline images, simpler image features which are not affine covariant cannot be used, because the viewpoint can change a lot between consecutive frames. Furthermore, feature matching has to be performed on the whole frame because no assumptions on the proximity of the consecutive projections can be made for wide baseline images. This makes feature detection, description, and matching much more time-consuming than for short baseline images and limits real-time usage to low frame rate sequences. Robust 3D structure can be computed by RANSAC, which searches for the largest subset of the set of tentative matches that is, within a predefined threshold ε, consistent with an epipolar geometry. We use ordered sampling as
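
    The robust-estimation step described above (finding the largest subset of tentative matches consistent with an epipolar geometry within a threshold) is what OpenCV's RANSAC-based fundamental matrix estimator performs. A minimal sketch follows; the tentative match arrays and the 1-pixel threshold are placeholders, and the ordered-sampling refinement mentioned in the record is not reproduced.

    ```python
    import cv2
    import numpy as np

    # pts1, pts2: (N, 2) float32 arrays of tentative matches
    # (mutually closest descriptor pairs); placeholder data here
    pts1 = np.random.rand(100, 2).astype(np.float32) * 640
    pts2 = pts1 + np.random.randn(100, 2).astype(np.float32)

    # RANSAC searches for the largest subset consistent with an epipolar
    # geometry, judged against the 1.0-pixel reprojection threshold.
    F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                            ransacReprojThreshold=1.0,
                                            confidence=0.999)
    if F is not None:
        print("inliers:", int(inlier_mask.sum()), "of", len(pts1))
    ```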

  19. Multi-view 3D human pose estimation combining single-frame recovery, temporal integration and model adaptation

    NARCIS (Netherlands)

    Hofmann, K.M.; Gavrilla, D.M.

    2009-01-01

    We present a system for the estimation of unconstrained 3D human upper body movement from multiple cameras. Its main novelty lies in the integration of three components: single frame pose recovery, temporal integration and model adaptation. Single frame pose recovery consists of a hypothesis

  20. Reconstruction of data for an experiment using multi-gap spark chambers with six-camera optics

    International Nuclear Information System (INIS)

    Maybury, R.; Daley, H.M.

    1983-06-01

    A program has been developed to reconstruct spark positions in a pair of multi-gap optical spark chambers viewed by six cameras, which were used by a Rutherford Laboratory experiment. The procedure for correlating camera views to calculate spark positions is described. Calibration of the apparatus, and the application of time- and intensity-dependent corrections are discussed. (author)

  1. 3D Printed "Earable" Smart Devices for Real-Time Detection of Core Body Temperature.

    Science.gov (United States)

    Ota, Hiroki; Chao, Minghan; Gao, Yuji; Wu, Eric; Tai, Li-Chia; Chen, Kevin; Matsuoka, Yasutomo; Iwai, Kosuke; Fahad, Hossain M; Gao, Wei; Nyein, Hnin Yin Yin; Lin, Liwei; Javey, Ali

    2017-07-28

    Real-time detection of basic physiological parameters such as blood pressure and heart rate is an important target in wearable smart devices for healthcare. Among these, the core body temperature is one of the most important basic medical indicators of fever, insomnia, fatigue, metabolic functionality, and depression. However, traditional wearable temperature sensors are based upon the measurement of skin temperature, which can vary dramatically from the true core body temperature. Here, we demonstrate a three-dimensional (3D) printed wearable "earable" smart device that is designed to be worn on the ear to track core body temperature from the tympanic membrane (i.e., ear drum) based on an infrared sensor. The device is fully integrated with data processing circuits and a wireless module for standalone functionality. Using this smart earable device, we demonstrate that the core body temperature can be accurately monitored regardless of the environment and activity of the user. In addition, a microphone and actuator are also integrated so that the device can also function as a bone conduction hearing aid. Using 3D printing as the fabrication method enables the device to be customized for the wearer for more personalized healthcare. This smart device provides an important advance in realizing personalized health care by enabling real-time monitoring of one of the most important medical parameters, core body temperature, employed in preliminary medical screening tests.

  2. Performance Evaluation of Thermographic Cameras for Photogrammetric Measurements

    Science.gov (United States)

    Yastikli, N.; Guler, E.

    2013-05-01

The aim of this research is the performance evaluation of thermographic cameras for possible use in photogrammetric documentation and in deformation analyses caused by moisture and insulation problems of historical and cultural heritage. To perform geometric calibration of the thermographic camera, a 3D test object was designed with 77 control points distributed at different depths. For the performance evaluation, a Flir A320 thermographic camera with 320 × 240 pixels and an 18 mm lens was used. A Nikon D3X SLR digital camera with 6048 × 4032 pixels and a 20 mm lens was used as reference for comparison. The pixel size was 25 μm for the Flir A320 thermographic camera and 6 μm for the Nikon D3X SLR digital camera. The digital images of the 3D test object were recorded with both cameras and the image coordinates of the control points in the images were measured. The geometric calibration parameters, including the focal length, position of the principal point, and radial and tangential distortions, were determined with additional parameters introduced in bundle block adjustments. The measurement of image coordinates and the bundle block adjustments with additional parameters were performed using the PHIDIAS digital photogrammetric system. The bundle block adjustment was repeated with the determined calibration parameters for both cameras. The obtained standard deviations of the measured image coordinates were 9.6 μm and 10.5 μm for the Flir A320 thermographic camera and 8.3 μm and 7.7 μm for the Nikon D3X SLR digital camera; the thermographic camera thus reached almost the same accuracy level as the digital camera despite a four times larger pixel size. According to the results of this research, the interior geometry of the thermographic cameras and the lens distortion were modelled efficiently.

  3. 3D Model Visualization Enhancements in Real-Time Game Engines

    Science.gov (United States)

    Merlo, A.; Sánchez Belenguer, C.; Vendrell Vidal, E.; Fantini, F.; Aliperta, A.

    2013-02-01

This paper describes two procedures used to disseminate tangible cultural heritage through real-time 3D simulations providing accurate, scientific representations. The main idea is to create simple geometries (with a low polygon count) and apply two different texture maps to them: a normal map and a displacement map. There are two ways to achieve models that fit with normal or displacement maps: with the former (normal maps), the number of polygons in the reality-based model may be dramatically reduced by decimation algorithms, and the normals then calculated by rendering them to texture (baking). With the latter, a LOD model is needed; its topology has to be quad-dominant for it to be converted to a good-quality subdivision surface (with consistent tangency and curvature all over). The subdivision surface is constructed using methodologies for asset construction borrowed from character animation; these techniques, recently implemented in many entertainment applications, are known as "retopology". The normal map is used as usual, in order to shade the surface of the model in a realistic way. The displacement map is used to finish, in real time, the flat faces of the object, adding the geometric detail missing in the low-poly models. The accuracy of the resulting geometry is progressively refined based on the distance from the viewing point, so the result is like a continuous level of detail, the only difference being that there is no need to create different 3D models for one and the same object. All geometric detail is calculated in real time according to the displacement map. This approach can be used in Unity, a real-time 3D engine originally designed for developing computer games. It provides a powerful rendering engine, fully integrated with a complete set of intuitive tools and rapid workflows that allow users to easily create interactive 3D contents. With the release of Unity 4.0, new rendering features have been added, including Direct

  4. Parallel Implementation of the Multi-Dimensional Spectral Code SPECT3D on large 3D grids.

    Science.gov (United States)

    Golovkin, Igor E.; Macfarlane, Joseph J.; Woodruff, Pamela R.; Pereyra, Nicolas A.

    2006-10-01

The multi-dimensional collisional-radiative spectral analysis code SPECT3D can be used to study radiation from complex plasmas. SPECT3D can generate instantaneous and time-gated images and spectra, space-resolved and streaked spectra, which makes it a valuable tool for post-processing hydrodynamics calculations and for direct comparison between simulations and experimental data. On large three-dimensional grids, transporting radiation along lines of sight (LOS) requires substantial memory and CPU resources. Currently, the parallel option in SPECT3D is based on parallelization over photon frequencies and allows for a nearly linear speed-up for a variety of problems. In addition, we are introducing a new parallel mechanism that will greatly reduce memory requirements. In the new implementation, spatial domain decomposition will be utilized, allowing transport along a LOS to be performed only on the mesh cells the LOS crosses. The ability to operate on a fraction of the grid is crucial for post-processing the results of large-scale three-dimensional hydrodynamics simulations. We will present a parallel implementation of the code and provide a scalability study performed on a Linux cluster.
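
    As a rough illustration of the existing frequency-based parallelism, the sketch below distributes photon-frequency indices across MPI ranks and gathers the partial spectra at the root. It is a generic pattern under assumed names (`compute_spectrum_at`, `NFREQ`), not SPECT3D code.

    ```python
    from mpi4py import MPI
    import numpy as np

    NFREQ = 1024                      # assumed number of photon-frequency points

    def compute_spectrum_at(i):
        """Placeholder for the radiative-transfer solve at frequency index i."""
        return float(i)               # stand-in value

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # Each rank handles a strided slice of frequencies; the speed-up stays
    # near linear as long as per-frequency work dominates communication.
    local = {i: compute_spectrum_at(i) for i in range(rank, NFREQ, size)}
    gathered = comm.gather(local, root=0)

    if rank == 0:
        spectrum = np.empty(NFREQ)
        for part in gathered:
            for i, v in part.items():
                spectrum[i] = v
        print("assembled", NFREQ, "frequency points")
    ```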

  5. Acquisition, compression and rendering of depth and texture for multi-view video

    NARCIS (Netherlands)

    Morvan, Y.

    2009-01-01

Three-dimensional (3D) video and imaging technologies are an emerging trend in the development of digital video systems, as we presently witness the appearance of 3D displays, coding systems, and 3D camera setups. Three-dimensional multi-view video is typically obtained from a set of synchronized

  6. 4D Unconstrained Real-time Face Recognition Using a Commodity Depth Camera

    NARCIS (Netherlands)

    Schimbinschi, Florin; Wiering, Marco; Mohan, R.E.; Sheba, J.K.

    2012-01-01

    Robust unconstrained real-time face recognition still remains a challenge today. The recent addition to the market of lightweight commodity depth sensors brings new possibilities for human-machine interaction and therefore face recognition. This article accompanies the reader through a succinct

  7. Non Invasive 3D Characterization of Materials at Multi scale Resolution in Correlative and 4D microscopy

    International Nuclear Information System (INIS)

    Lau, S.H.

    2011-01-01

We describe a suite of novel lab-based X-ray computed tomography (CT) systems for high-contrast 3D characterization of hard to soft materials with resolution across length scales. The systems offer resolution and contrast ranges similar to those obtained from X-ray micro- and nano-tomography systems at synchrotron radiation facilities, except that they make use of conventional lab sources. Samples with dimensions from several centimetres down to several microns may be imaged non-invasively at varying resolution, from tens of microns down to 20 nm voxels. The novel multi-scale CT helps bridge the resolution, scaling, and 3D visualization gap left by traditional destructive 2D imaging modalities such as optical microscopes, AFM, SEM, SEM-FIB, and TEM. It provides a direct, non-invasive volumetric imaging technique at the macro to nano scale, making it ideal for accurate prediction and modeling of whole systems and components. For example, using 3D visualization, segmentation, and computational analysis tools, pore networks, FEA, and fluid, thermal, and ionic transport in systems and materials ranging from ceramics and geomaterials to composites, metals, and coatings may be characterized and modeled. The high resolution and unique phase-contrast features of the novel CTs also lend themselves very well to characterizing inherently low-contrast soft materials such as polymers, membranes, and biological tissue, or to differentiating small differences in material and mineral phases in geomaterials and composites. Tomography of samples may be acquired at different volume-versus-resolution trade-offs using local tomography techniques, often without sample destruction. In the emerging field of 3D correlative microscopy, these larger CT volumetric data sets can be correlated at different length scales with conventional 2D imaging modalities. For example, after a CT scan, a specimen may undergo destructive sectioning at a specific region of interest, to obtain the corresponding 2D slices with SEM and TEM or with X-ray microanalysis derive its

  8. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    Science.gov (United States)

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or only narrow overlapping FOVs in many applications, which poses a huge challenge to global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras’ coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
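
    The optimization step (minimizing reprojection errors of target feature points over the camera-to-camera transform) can be sketched with SciPy's Levenberg-Marquardt-style solver. This is a generic illustration under assumed inputs (known 3D target points, their observed pixels, and a pinhole `project` model), not the authors' implementation.

    ```python
    import numpy as np
    from scipy.optimize import least_squares
    from scipy.spatial.transform import Rotation

    def project(K, R, t, X):
        """Pinhole projection of 3D points X (N,3) into pixels (N,2)."""
        x = (K @ (R @ X.T + t[:, None])).T
        return x[:, :2] / x[:, 2:3]

    def residuals(p, K2, X_in_cam1, uv_cam2):
        """p = [rotvec(3), translation(3)] mapping camera-1 coords into camera 2."""
        R = Rotation.from_rotvec(p[:3]).as_matrix()
        return (project(K2, R, p[3:], X_in_cam1) - uv_cam2).ravel()

    # Placeholder data: target points expressed in camera 1, observed by camera 2.
    rng = np.random.default_rng(1)
    K2 = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
    X = rng.uniform(-0.5, 0.5, (30, 3)) + [0, 0, 3.0]
    true = np.r_[0.05, -0.02, 0.01, 0.4, 0.0, 0.1]
    uv = project(K2, Rotation.from_rotvec(true[:3]).as_matrix(), true[3:], X)

    # Levenberg-Marquardt refinement of the inter-camera transform
    sol = least_squares(residuals, np.zeros(6), method="lm", args=(K2, X, uv))
    print("recovered pose:", np.round(sol.x, 4))
    ```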

  9. 4-D ICE: A 2-D Array Transducer With Integrated ASIC in a 10-Fr Catheter for Real-Time 3-D Intracardiac Echocardiography.

    Science.gov (United States)

    Wildes, Douglas; Lee, Warren; Haider, Bruno; Cogan, Scott; Sundaresan, Krishnakumar; Mills, David M; Yetter, Christopher; Hart, Patrick H; Haun, Christopher R; Concepcion, Mikael; Kirkhorn, Johan; Bitoun, Marc

    2016-12-01

We developed a 2.5 × 6.6 mm² 2-D array transducer with an integrated transmit/receive application-specific integrated circuit (ASIC) for real-time 3-D intracardiac echocardiography (4-D ICE) applications. The ASIC and transducer design were optimized so that the high-voltage transmit, low-voltage time-gain control and preamp, subaperture beamformer, and digital control circuits for each transducer element all fit within the 0.019-mm² area of the element. The transducer assembly was deployed in a 10-Fr (3.3-mm diameter) catheter, integrated with a GE Vivid E9 ultrasound imaging system, and evaluated in three preclinical studies. The 2-D image quality and imaging modes were comparable to commercial 2-D ICE catheters. The 4-D field of view was at least 90° × 60° × 8 cm and could be imaged at 30 vol/s, sufficient to visualize cardiac anatomy and other diagnostic and therapy catheters. 4-D ICE should significantly reduce X-ray fluoroscopy use and dose during electrophysiology ablation procedures. 4-D ICE may be able to replace transesophageal echocardiography (TEE), and the associated risks and costs of general anesthesia, for guidance of some structural heart procedures.

  10. A multi-criteria approach to camera motion design for volume data animation.

    Science.gov (United States)

    Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu

    2013-12-01

We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, camera motion planning in computer graphics and virtual reality is frequently focused on collision-free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data, the collision-free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.

  11. Monitoring the effects of doxorubicin on 3D-spheroid tumor cells in real-time

    Directory of Open Access Journals (Sweden)

    Baek N

    2016-11-01

Full Text Available Recently, increasing numbers of cell culture experiments with 3D spheroids have presented better correlating results in vivo than traditional 2D cell culture systems. 3D spheroids could offer a simple and highly reproducible model that exhibits many characteristics of natural tissue, such as the production of extracellular matrix. In this paper numerous cell lines were screened and selected depending on their ability to form and maintain a spherical shape. The effects of increasing concentrations of doxorubicin (DXR) on the integrity and viability of the selected spheroids were then measured at regular intervals and in real-time. In total 12 cell lines were screened: adenocarcinomic alveolar basal epithelial (A549), muscle (C2C12), prostate (DU145), testis (F9), pituitary epithelial-like (GH3), cervical cancer (HeLa), HeLa contaminant (HEp2), embryo (NIH3T3), embryo (PA317), neuroblastoma (SH-SY5Y), osteosarcoma (U2OS), and embryonic kidney (293T) cells. Of the 12, 8 cell lines (NIH3T3, C2C12, 293T, SH-SY5Y, A549, HeLa, PA317, and U2OS) formed regular spheroids, and the effects of DXR on these structures were measured at regular intervals. Finally, 5 cell lines (A549, HeLa, SH-SY5Y, U2OS, and 293T) were selected for real-time monitoring and the effects of DXR treatment on their behavior were continuously recorded for 5 days. A potential correlation regarding the effects of DXR on spheroid viability and ATP production was measured on days 1, 3, and 5. Cytotoxicity of DXR seemed to occur after endocytosis, since cellular activities and ATP production were still viable after 1 day of treatment in all spheroids except SH-SY5Y. Both cellular activity and ATP production were

  12. Carded Tow Real-Time Color Assessment: A Spectral Camera-Based System

    Directory of Open Access Journals (Sweden)

    Rocco Furferi

    2016-08-01

Full Text Available One of the most important parameters to be controlled during the production of textile yarns obtained by mixing pre-colored fibers is the color correspondence between the manufactured yarn and a given reference, usually provided by a designer or a customer. Obtaining yarns from raw pre-colored fibers is a complex manufacturing process entailing a number of steps such as laboratory sampling, color recipe corrections, blowing, carding and spinning. The carding process is the one devoted to transforming a “fuzzy mass” of tufted fibers into a regular mass of untwisted fibers, named “tow”. During this process, unfortunately, the correspondence between the color of the tow and the target one cannot be assured, thus leading to yarns whose color differs from the reference. To solve this issue, the main aim of this work is to provide a system able to perform a spectral camera-based real-time measurement of a carded tow, to assess its color correspondence with a reference carded fabric and, at the same time, to monitor the overall quality of the tow during the carding process. Tested against a number of differently colored carded fabrics, the proposed system proved effective in reliably assessing color correspondence in real-time.
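
    Color correspondence checks of this kind typically reduce to a color-difference metric between the measured and reference colors. As a minimal illustration (not the paper's actual pipeline), the sketch below compares camera-derived CIELAB values via the CIE76 ΔE*ab difference against an assumed acceptance tolerance.

    ```python
    import numpy as np

    def delta_e_ab(lab1, lab2):
        """CIE76 color difference between two CIELAB triplets."""
        return float(np.linalg.norm(np.asarray(lab1) - np.asarray(lab2)))

    reference = (52.0, 12.5, -8.0)   # assumed Lab of the reference carded fabric
    measured  = (51.2, 13.1, -7.4)   # assumed Lab measured from the spectral camera

    dE = delta_e_ab(reference, measured)
    TOLERANCE = 1.0                  # assumed acceptance threshold in dE units
    print(f"dE = {dE:.2f} ->", "accept" if dE <= TOLERANCE else "correct recipe")
    ```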

  13. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    Science.gov (United States)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology, and computer hardware have boosted their development and led to several commercially available, ready-to-use cameras. Beyond the popular option of a posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single-camera imagery represents a very beneficial option for certain applications. The paper will first present some fundamentals on the design and history of plenoptic cameras and will describe depth determination from plenoptic camera image data. It will then present an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we will focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors in the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for real-time robotics applications like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.

  14. Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera

    Science.gov (United States)

    Jhan, Jyun-Ping; Rau, Jiann-Yeou; Haala, Norbert

    2018-03-01

Utilizing miniature multispectral (MS) or hyperspectral (HS) cameras by mounting them on an Unmanned Aerial System (UAS) has the benefits of convenience and flexibility for collecting remote sensing imagery for precision agriculture, vegetation monitoring, and environment investigation applications. Most miniature MS cameras adopt a multi-lens structure to record discrete MS bands of visible and invisible information. The differences in lens distortion, mounting positions, and viewing angles among the lenses mean that the acquired original MS images have significant band misregistration errors. We have developed a Robust and Adaptive Band-to-Band Image Transform (RABBIT) method for dealing with the band co-registration of various types of miniature multi-lens multispectral cameras (Mini-MSCs) to obtain band co-registered MS imagery for remote sensing applications. RABBIT utilizes a modified projective transformation (MPT) to transfer the multiple image geometry of a multi-lens imaging system to one sensor geometry, and combines this with a robust and adaptive correction (RAC) procedure to correct several systematic errors and obtain sub-pixel accuracy. This study applies three state-of-the-art Mini-MSCs to evaluate the RABBIT method's performance, specifically the Tetracam Miniature Multiple Camera Array (MiniMCA), Micasense RedEdge, and Parrot Sequoia. Six MS datasets acquired at different target distances, dates, and locations are also used to prove its reliability and applicability. Results prove that RABBIT is feasible for different types of Mini-MSCs, with accurate, robust, and rapid image processing.
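
    The core of band-to-band registration is warping each band into a master band's geometry with a projective transform. The sketch below shows that plain projective baseline with OpenCV (feature matching plus a RANSAC homography); RABBIT's modified projective transformation and systematic-error correction go beyond this, and the synthetic test data below is an assumption for demonstration.

    ```python
    import cv2
    import numpy as np

    def coregister_band(band, master):
        """Warp `band` (e.g., near-infrared) onto `master` (e.g., red) via a homography."""
        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(band, None)
        k2, d2 = orb.detectAndCompute(master, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 2.0)  # 2-px RANSAC threshold
        h, w = master.shape[:2]
        return cv2.warpPerspective(band, H, (w, h))

    # synthetic check: a second "band" is just a known warp of the master image
    rng = np.random.default_rng(3)
    master = (rng.random((480, 640)) * 255).astype(np.uint8)
    H_true = np.array([[1, 0.01, 5], [0.0, 1, -3], [0, 0, 1]], dtype=np.float64)
    band = cv2.warpPerspective(master, H_true, (640, 480))
    aligned = coregister_band(band, master)
    ```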

  15. Real-time interactive 3D manipulation of particles viewed in two orthogonal observation planes

    DEFF Research Database (Denmark)

    Perch-Nielsen, I.; Rodrigo, P.J.; Glückstad, J.

    2005-01-01

    The generalized phase contrast (GPC) method has been applied to transform a single TEM00 beam into a manifold of counterpropagating-beam traps capable of real-time interactive manipulation of multiple microparticles in three dimensions (3D). This paper reports on the use of low numerical aperture...... for imaging through each of the two opposing objective lenses. As a consequence of the large working distance, simultaneous monitoring of the trapped particles in a second orthogonal observation plane is demonstrated. (C) 2005 Optical Society of America....

  16. A 3D virtual reality simulator for training of minimally invasive surgery.

    Science.gov (United States)

Mi, Shao-Hua; Hou, Zeng-Guang; Yang, Fan; Xie, Xiao-Liang; Bian, Gui-Bin

    2014-01-01

For the last decade, remarkable progress has been made in the field of cardiovascular disease treatment. However, these complex medical procedures require a combination of rich experience and technical skills. In this paper, a 3D virtual reality simulator for core skills training in minimally invasive surgery is presented. The system can generate realistic 3D vascular models segmented from patient datasets, including a beating heart, and provides real-time force computation and a force feedback module for surgical simulation. Instruments, such as a catheter or guide wire, are represented by a multi-body mass-spring model. In addition, a realistic user interface with multiple windows and real-time 3D views was developed. Moreover, the simulator is provided with a human-machine interaction module that gives doctors the sense of touch during surgery training and enables them to control the motion of a virtual catheter/guide wire inside a complex vascular model. Experimental results show that the simulator is suitable for minimally invasive surgery training.
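
    A guide wire represented as a multi-body mass-spring chain can be integrated with a simple explicit scheme. The snippet below is a deliberately minimal sketch (point masses, linear springs, gravity, semi-implicit Euler), not the simulator's actual model; the stiffness, damping, and time-step values are assumed.

    ```python
    import numpy as np

    N, REST = 20, 0.01                           # nodes, segment rest length (m)
    K, DAMP, M, DT = 500.0, 0.5, 0.001, 1e-3     # stiffness, damping, node mass, step

    pos = np.cumsum(np.tile([REST, 0.0, 0.0], (N, 1)), axis=0)  # straight chain
    vel = np.zeros_like(pos)

    def step(pos, vel):
        """One semi-implicit Euler step of the mass-spring chain (first node fixed)."""
        force = np.zeros_like(pos)
        force[:, 2] -= M * 9.81                   # gravity
        seg = pos[1:] - pos[:-1]
        length = np.linalg.norm(seg, axis=1, keepdims=True)
        f = K * (length - REST) * seg / length    # Hooke's law along each segment
        force[:-1] += f
        force[1:] -= f
        vel = (vel + DT * force / M) * (1.0 - DAMP * DT)
        vel[0] = 0.0                              # proximal end held by the "hand"
        return pos + DT * vel, vel

    for _ in range(1000):
        pos, vel = step(pos, vel)
    print("tip position after 1 s:", np.round(pos[-1], 4))
    ```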

  17. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    Energy Technology Data Exchange (ETDEWEB)

    Romps, David [Univ. of California, Berkeley, CA (United States); Oktem, Rusen [Univ. of California, Berkeley, CA (United States)

    2017-10-31

The three pairs of stereo camera setups aim to provide synchronized and stereo-calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined together to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
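
    Pairing images from the two cameras of a stereo setup and triangulating, as described, can be sketched with OpenCV once each camera's projection matrix is known from the stereo calibration. The projection matrices and matched feature coordinates below are placeholders in normalized image coordinates.

    ```python
    import cv2
    import numpy as np

    # Projection matrices in normalized image coordinates (intrinsics folded out);
    # camera B is translated 0.1 m along the x axis relative to camera A.
    P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

    # Two matched cloud features, as 2xN arrays (row 0: x, row 1: y).
    ptsA = np.array([[0.10, -0.20],
                     [0.05,  0.08]])
    ptsB = ptsA - np.array([[0.05], [0.0]])   # disparity 0.05 -> depth 2 m

    Xh = cv2.triangulatePoints(P1, P2, ptsA, ptsB)  # homogeneous 4xN result
    X = (Xh[:3] / Xh[3]).T                          # Euclidean (N, 3) points
    print(np.round(X, 3))                           # z ~= 2.0 for both points
    ```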

  18. Determining fast orientation changes of multi-spectral line cameras from the primary images

    Science.gov (United States)

    Wohlfeil, Jürgen

    2012-01-01

Fast orientation changes of airborne and spaceborne line cameras cannot always be avoided. In such cases it is essential to measure them with high accuracy to ensure good quality of the resulting imagery products. Several approaches exist to support the orientation measurement by using optical information received through the main objective/telescope. In this article an approach is proposed that allows the determination of non-systematic orientation changes between every captured line. It does not require any additional camera hardware or onboard processing capabilities, only the payload images and a rough estimate of the camera's trajectory. The approach takes advantage of the typical geometry of multi-spectral line cameras with a set of linear sensor arrays for different spectral bands on the focal plane. First, homologous points are detected within the heavily distorted images of different spectral bands. With their help, a connected network of geometrical correspondences can be built up. This network is used to calculate the orientation changes of the camera with the temporal and angular resolution of the camera. The approach was tested with an extensive set of aerial surveys covering a wide range of different conditions and achieved precise and reliable results.

  19. Camera pose refinement by matching uncertain 3D building models with thermal infrared image sequences for high quality texture extraction

    Science.gov (United States)

    Iwaszczuk, Dorota; Stilla, Uwe

    2017-10-01

Thermal infrared (TIR) images are often used to picture damaged and weak spots in the insulation of the building hull, and are widely used in thermal inspections of buildings. Such inspection in large-scale areas can be carried out by combining TIR imagery and 3D building models. This combination can be achieved via texture mapping. Automation of texture mapping avoids time-consuming manual imaging and analysis of each face independently. It also provides a spatial reference for façade structures extracted from the thermal textures. In order to capture all faces, including the roofs, façades, and façades in the inner courtyard, an oblique-looking camera mounted on a flying platform is used. Direct geo-referencing is usually not sufficient for precise texture extraction. In addition, 3D building models also have uncertain geometry. In this paper, therefore, a methodology for co-registration of uncertain 3D building models with airborne oblique-view images is presented. For this purpose, a line-based model-to-image matching is developed, in which the uncertainties of the 3D building model, as well as of the image features, are considered. Matched linear features are used for the refinement of the exterior orientation parameters of the camera in order to ensure optimal co-registration. Moreover, this study investigates whether line tracking through the image sequence supports the matching. The accuracy of the extraction and the quality of the textures are assessed. For this purpose, appropriate quality measures are developed. The tests showed good co-registration results, particularly in cases where tracking between neighboring frames had been applied.

  20. Review of Real-Time 3-Dimensional Image Guided Radiation Therapy on Standard-Equipped Cancer Radiation Therapy Systems: Are We at the Tipping Point for the Era of Real-Time Radiation Therapy?

    Science.gov (United States)

    Keall, Paul J; Nguyen, Doan Trang; O'Brien, Ricky; Zhang, Pengpeng; Happersett, Laura; Bertholet, Jenny; Poulsen, Per R

    2018-04-14

To review real-time 3-dimensional (3D) image guided radiation therapy (IGRT) on standard-equipped cancer radiation therapy systems, focusing on clinically implemented solutions. Three groups on 3 continents have clinically implemented novel real-time 3D IGRT solutions on standard-equipped linear accelerators. These technologies encompass kilovoltage, combined megavoltage-kilovoltage, and combined kilovoltage-optical imaging. The cancer sites treated span pelvic and abdominal tumors for which respiratory motion is present. For each method the 3D-measured motion during treatment is reported. After treatment, dose reconstruction was used to assess the treatment quality in the presence of motion with and without real-time 3D IGRT. The geometric accuracy was quantified through phantom experiments. A literature search was conducted to identify additional real-time 3D IGRT methods that could be clinically implemented in the near future. The real-time 3D IGRT methods were successfully clinically implemented and have been used to treat more than 200 patients. Systematic target position shifts were observed using all 3 methods. Dose reconstruction demonstrated that the delivered dose is closer to the planned dose with real-time 3D IGRT than without it. In addition, compromised target dose coverage and variable normal tissue doses were found without real-time 3D IGRT. The phantom experiments showed small mean geometric errors with real-time 3D IGRT, and the literature search identified further real-time 3D IGRT methods using standard-equipped radiation therapy systems that could also be clinically implemented. Multiple clinical implementations of real-time 3D IGRT on standard-equipped cancer radiation therapy systems have been demonstrated. These solutions provide a pathway for the broader adoption of methods to make radiation therapy more accurate, impacting tumor and normal tissue dose, margins, and ultimately patient outcomes.

  1. The Smartphone Brain Scanner: A Portable Real-Time Neuroimaging System

    DEFF Research Database (Denmark)

    Stopczynski, Arkadiusz; Stahlhut, Carsten; Larsen, Jakob Eg

    2014-01-01

    Combining low-cost wireless EEG sensors with smartphones offers novel opportunities for mobile brain imaging in an everyday context. Here we present the technical details and validation of a framework for building multi-platform, portable EEG applications with real-time 3D source reconstruction....... The system – Smartphone Brain Scanner – combines an off-the-shelf neuroheadset or EEG cap with a smartphone or tablet, and as such represents the first fully portable system for real-time 3D EEG imaging. We discuss the benefits and challenges, including technical limitations as well as details of real...

  2. Real-time tricolor phase measuring profilometry based on CCD sensitivity calibration

    Science.gov (United States)

    Zhu, Lin; Cao, Yiping; He, Dawu; Chen, Cheng

    2017-02-01

A real-time tricolor phase measuring profilometry (RTPMP) method based on charge coupled device (CCD) sensitivity calibration is proposed. Only one colour fringe pattern, whose red (R), green (G) and blue (B) components are respectively coded as three sinusoidal phase-shifting gratings with an equivalent shifting phase of 2π/3, is needed; it is sent to an appointed flash memory on a specialized digital light projector (SDLP). A specialized time-division multiplexing timing sequence actively controls the SDLP to project the fringe patterns in the R, G and B channels sequentially onto the measured object in 1/72 of a second, and meanwhile actively controls a high frame rate monochrome CCD camera to capture the corresponding deformed patterns synchronously with the SDLP. Thus, sufficient information for reconstructing the three-dimensional (3D) shape is obtained in 1/24 of a second. Due to the different spectral sensitivity of the CCD camera to the RGB lights, the captured deformed patterns from the R, G and B channels cannot share the same peak and valley, which will lead to lower accuracy or even failure to reconstruct the 3D shape. So a deformed pattern amending method based on CCD sensitivity calibration is developed to guarantee accurate 3D reconstruction. The experimental results verify the feasibility of the proposed RTPMP method, which can obtain the 3D shape at over the video frame rate of 24 frames per second, avoid colour crosstalk completely, and measure dynamically changing objects in real time.
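
    With three sinusoidal gratings shifted by 2π/3, the wrapped phase follows from the standard three-step phase-shifting formula; the sketch below applies it per pixel with NumPy. It assumes the three channel images have already been balanced by the sensitivity calibration described in the record.

    ```python
    import numpy as np

    def wrapped_phase(I1, I2, I3):
        """Three-step phase-shifting: I_k = A + B*cos(phi + (k-2)*2*pi/3).
        Returns the wrapped phase in (-pi, pi]."""
        return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

    # synthetic check: reconstruct a known phase ramp
    phi = np.linspace(-np.pi + 0.01, np.pi - 0.01, 256)
    A, B = 0.5, 0.4
    I = [A + B * np.cos(phi + k * 2 * np.pi / 3) for k in (-1, 0, 1)]
    print(np.allclose(wrapped_phase(*I), phi))   # True
    ```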

  3. 3D Cloud Tomography, Followed by Mean Optical and Microphysical Properties, with Multi-Angle/Multi-Pixel Data

    Science.gov (United States)

    Davis, A. B.; von Allmen, P. A.; Marshak, A.; Bal, G.

    2010-12-01

    The geometrical assumption in all operational cloud remote sensing algorithms is that clouds are plane-parallel slabs, which applies relatively well to the most uniform stratus layers. Its benefit is to justify using classic 1D radiative transfer (RT) theory, where angular details (solar, viewing, azimuthal) are fully accounted for and precise phase functions can be used, to generate the look-up tables used in the retrievals. Unsurprisingly, these algorithms catastrophically fail when applied to cumulus-type clouds, which are highly 3D. This is unfortunate for the cloud-process modeling community that may thrive on in situ airborne data, but would very much like to use satellite data for more than illustrations in their presentations and publications. So, how can we obtain quantitative information from space-based observations of finite aspect ratio clouds? Cloud base/top heights, vertically projected area, mean liquid water content (LWC), and volume-averaged droplet size would be a good start. Motivated by this science need, we present a new approach suitable for sparse cumulus fields where we turn the tables on the standard procedure in cloud remote sensing. We make no a priori assumption about cloud shape, save an approximately flat base, but use brutal approximations about the RT that is necessarily 3D. Indeed, the first order of business is to roughly determine the cloud's outer shape in one of two ways, which we will frame as competing initial guesses for the next phase of shape refinement and volume-averaged microphysical parameter estimation. Both steps use multi-pixel/multi-angle techniques amenable to MISR data, the latter adding a bi-spectral dimension using collocated MODIS data. One approach to rough cloud shape determination is to fit the multi-pixel/multi-angle data with a geometric primitive such as a scalene hemi-ellipsoid with 7 parameters (translation in 3D space, 3 semi-axes, 1 azimuthal orientation); for the radiometry, a simple radiosity

  4. 3D reconstruction based on light field images

    Science.gov (United States)

    Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei

    2018-04-01

This paper proposed a method of reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum. The work was carried out by first extracting the sub-aperture images from the light field images and using the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. The structure from motion (SFM) algorithm is then used on the registered sub-aperture images to reconstruct the three-dimensional scene; a 3D sparse point cloud is obtained in the end. The method shows that 3D reconstruction can be implemented with only two light field captures, rather than the dozen or more captures required by traditional cameras. This effectively addresses the time-consuming, laborious nature of 3D reconstruction based on traditional digital cameras, achieving a more rapid, convenient and accurate reconstruction.
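
    A two-view version of that pipeline (SIFT matching, relative pose recovery, sparse triangulation) can be sketched with OpenCV. The intrinsic matrix and file names in the usage comment are placeholders, and Lytro sub-aperture extraction itself is out of scope here.

    ```python
    import cv2
    import numpy as np

    def two_view_cloud(imgA, imgB, K):
        """SIFT matching -> essential matrix -> relative pose -> sparse 3D points."""
        sift = cv2.SIFT_create()
        kA, dA = sift.detectAndCompute(imgA, None)
        kB, dB = sift.detectAndCompute(imgB, None)
        # Lowe ratio test on 2-nearest-neighbour matches
        knn = cv2.BFMatcher().knnMatch(dA, dB, k=2)
        good = [m for m, n in knn if m.distance < 0.7 * n.distance]
        pA = np.float32([kA[m.queryIdx].pt for m in good])
        pB = np.float32([kB[m.trainIdx].pt for m in good])
        E, mask = cv2.findEssentialMat(pA, pB, K, cv2.RANSAC, 0.999, 1.0)
        _, R, t, mask = cv2.recoverPose(E, pA, pB, K, mask=mask)
        # triangulate the inlier matches into a sparse point cloud
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        inl = mask.ravel().astype(bool)
        Xh = cv2.triangulatePoints(P1, P2, pA[inl].T, pB[inl].T)
        return (Xh[:3] / Xh[3]).T   # sparse point cloud, (N, 3)

    # usage (placeholder intrinsics and file names):
    # K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])
    # cloud = two_view_cloud(cv2.imread("subA.png", 0), cv2.imread("subB.png", 0), K)
    ```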

  5. IMPROVEMENT OF 3D MONTE CARLO LOCALIZATION USING A DEPTH CAMERA AND TERRESTRIAL LASER SCANNER

    Directory of Open Access Journals (Sweden)

    S. Kanai

    2015-05-01

Full Text Available An effective and accurate localization method in three-dimensional indoor environments is a key requirement for indoor navigation and lifelong robotic assistance. So far, Monte Carlo Localization (MCL) has provided one of the promising solutions for indoor localization. Previous work on MCL has been mostly limited to 2D motion estimation in a planar map, and a few 3D MCL approaches have been proposed recently. However, their localization accuracy and efficiency still remain at an unsatisfactory level (errors of a few hundred millimetres at up to a few FPS), or have not been fully verified against precise ground truth. Therefore, the purpose of this study is to improve the accuracy and efficiency of 6DOF motion estimation in 3D MCL for indoor localization. Firstly, a terrestrial laser scanner is used for creating a precise 3D mesh model as an environment map, and a professional-level depth camera is installed as an outer sensor. GPU scene simulation is also introduced to upgrade the speed of the prediction phase in MCL. Moreover, for further improvement, GPGPU programming is implemented to realize further speed-up of the likelihood estimation phase, and anisotropic particle propagation is introduced into MCL based on the observations from an inertia sensor. Improvements in the localization accuracy and efficiency are verified by comparison with a previous MCL method. As a result, it was confirmed that the GPGPU-based algorithm was effective in increasing the computational efficiency to 10-50 FPS when the number of particles remained below a few hundred. On the other hand, the inertia sensor-based algorithm reduced the localization error to a median of 47 mm even with a lower number of particles. The results showed that our proposed 3D MCL method outperforms the previous one in accuracy and efficiency.
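
    The MCL loop itself (propagate particles with a motion model, weight them by an observation likelihood, resample) is compact. Below is a generic toy sketch in NumPy with a reduced 3-DOF state, not the paper's GPGPU implementation; the likelihood function is an assumed placeholder for comparing simulated and observed depth images.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 300                                    # particle count
    particles = rng.normal(0.0, 0.5, (N, 3))   # toy state: (x, y, yaw)

    def likelihood(states, depth_obs):
        """Placeholder: score each state against the observed depth image.
        Here a dummy Gaussian around the origin stands in for real scoring."""
        return np.exp(-0.5 * np.sum(states[:, :2] ** 2, axis=1))

    def mcl_step(particles, control, depth_obs, motion_noise=(0.02, 0.02, 0.01)):
        # 1) predict: apply odometry/inertia control plus anisotropic noise
        particles = particles + control + rng.normal(0, motion_noise, particles.shape)
        # 2) update: weight particles by the observation likelihood
        w = likelihood(particles, depth_obs)
        w /= w.sum()
        # 3) resample: systematic resampling keeps the particle count fixed
        positions = (rng.random() + np.arange(N)) / N
        idx = np.searchsorted(np.cumsum(w), positions)
        return particles[idx]

    for _ in range(50):
        particles = mcl_step(particles, np.array([0.01, 0.0, 0.0]), depth_obs=None)
    print("pose estimate:", np.round(particles.mean(axis=0), 3))
    ```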

  6. PERFORMANCE EVALUATION OF THERMOGRAPHIC CAMERAS FOR PHOTOGRAMMETRIC MEASUREMENTS

    Directory of Open Access Journals (Sweden)

    N. Yastikli

    2013-05-01

Full Text Available The aim of this research is the performance evaluation of thermographic cameras for possible use in photogrammetric documentation and in deformation analyses caused by moisture and insulation problems of historical and cultural heritage. To perform geometric calibration of the thermographic camera, a 3D test object was designed with 77 control points distributed at different depths. For the performance evaluation, a Flir A320 thermographic camera with 320 × 240 pixels and an 18 mm lens was used. A Nikon D3X SLR digital camera with 6048 × 4032 pixels and a 20 mm lens was used as reference for comparison. The pixel size was 25 μm for the Flir A320 thermographic camera and 6 μm for the Nikon D3X SLR digital camera. The digital images of the 3D test object were recorded with both cameras and the image coordinates of the control points in the images were measured. The geometric calibration parameters, including the focal length, position of the principal point, and radial and tangential distortions, were determined with additional parameters introduced in bundle block adjustments. The measurement of image coordinates and the bundle block adjustments with additional parameters were performed using the PHIDIAS digital photogrammetric system. The bundle block adjustment was repeated with the determined calibration parameters for both cameras. The obtained standard deviations of the measured image coordinates were 9.6 μm and 10.5 μm for the Flir A320 thermographic camera and 8.3 μm and 7.7 μm for the Nikon D3X SLR digital camera; the thermographic camera thus reached almost the same accuracy level as the digital camera despite a four times larger pixel size. According to the results of this research, the interior geometry of the thermographic cameras and lens distortion was

  7. 3D tomographic imaging with the γ-eye planar scintigraphic gamma camera

    Science.gov (United States)

    Tunnicliffe, H.; Georgiou, M.; Loudos, G. K.; Simcox, A.; Tsoumpas, C.

    2017-11-01

    γ-eye is a desktop planar scintigraphic gamma camera (100 mm × 50 mm field of view) designed by BET Solutions as an affordable tool for dynamic, whole body, small-animal imaging. This investigation tests the viability of using γ-eye for the collection of tomographic data for 3D SPECT reconstruction. Two software packages, QSPECT and STIR (software for tomographic image reconstruction), have been compared. Reconstructions have been performed using QSPECT’s implementation of the OSEM algorithm and STIR’s OSMAPOSL (Ordered Subset Maximum A Posteriori One Step Late) and OSSPS (Ordered Subsets Separable Paraboloidal Surrogate) algorithms. Reconstructed images of phantom and mouse data have been assessed in terms of spatial resolution, sensitivity to varying activity levels and uniformity. The effect of varying the number of iterations, the voxel size (1.25 mm default voxel size reduced to 0.625 mm and 0.3125 mm), the point spread function correction and the weight of prior terms were explored. While QSPECT demonstrated faster reconstructions, STIR outperformed it in terms of resolution (as low as 1 mm versus 3 mm), particularly when smaller voxel sizes were used, and in terms of uniformity, particularly when prior terms were used. Little difference in terms of sensitivity was seen throughout.

  8. UCalMiCeL – UNIFIED INTRINSIC AND EXTRINSIC CALIBRATION OF A MULTI-CAMERA-SYSTEM AND A LASERSCANNER

    Directory of Open Access Journals (Sweden)

    M. Hillemann

    2017-08-01

Full Text Available Unmanned Aerial Vehicles (UAVs) with adequate sensors enable new applications in the scope between expensive, large-scale, aircraft-carried remote sensing and time-consuming, small-scale, terrestrial surveying. To perform these applications, cameras and laserscanners are a good sensor combination, due to their complementary properties. To exploit this sensor combination, the intrinsics and relative poses of the individual cameras and the relative poses of the cameras and the laserscanners have to be known. In this manuscript, we present a calibration methodology for the Unified Intrinsic and Extrinsic Calibration of a Multi-Camera-System and a Laserscanner (UCalMiCeL). The innovation of this methodology, which is an extension of the calibration of a single camera to a line laserscanner, is a unifying bundle adjustment step to ensure an optimal calibration of the entire sensor system. We use generic camera models, including pinhole, omnidirectional and fisheye cameras. For our approach, the laserscanner and each camera have to share a joint field of view, whereas the fields of view of the individual cameras may be disjoint. The calibration approach is tested with a sensor system consisting of two fisheye cameras and a line laserscanner with a range measuring accuracy of 30 mm. We evaluate the estimated relative poses between the cameras quantitatively by using an additional calibration approach for Multi-Camera-Systems based on control points which are accurately measured by a motion capture system. In the experiments, our novel calibration method achieves a relative pose estimation with a deviation below 1.8° and 6.4 mm.

  9. Hierarchical programming language for modal multi-rate real-time stream processing applications

    NARCIS (Netherlands)

    Geuns, S.J.; Hausmans, J.P.H.M.; Bekooij, Marco Jan Gerrit

    2014-01-01

    Modal multi-rate stream processing applications with real-time constraints which are executed on multi-core embedded systems often cannot be conveniently specified using current programming languages. An important issue is that sequential programming languages do not allow for convenient programming

  10. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

    Science.gov (United States)

    Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir

    2017-01-01

    Orthopaedic surgeons are still following the decades old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surface and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659

  11. A MULTI-WAVELENGTH 3D MODEL OF BD+30°3639

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, M. J.; Kastner, Joel H. [Chester F. Carlson Center for Imaging Science, School of Physics and Astronomy, and Laboratory for Multiwavelength Astrophysics, Rochester Institute of Technology, 54 Lomb Memorial Drive, Rochester NY 14623 (United States)

    2016-10-01

We present a 3D multi-wavelength reconstruction of BD+30°3639, one of the best-studied planetary nebulae in the solar neighborhood. BD+30°3639, which hosts a [WR]-type central star, has been imaged at wavelength regimes that span the electromagnetic spectrum, from radio to X-rays. We have used the astrophysical modeling software SHAPE to construct a 3D morpho-kinematic model of BD+30°3639. This reconstruction represents the most complete 3D model of a PN to date from the standpoint of the incorporation of multi-wavelength data. Based on previously published kinematic data in optical emission lines and in lines of CO (radio) and H₂ (near-IR), we were able to reconstruct BD+30's basic velocity components assuming a set of homologous velocity expansion laws combined with collimated flows along the major axis of the nebula. We confirm that the CO “bullets” in the PN lie along an axis that is slightly misaligned with respect to the major axis of the optical nebula, and that these bullets are likely responsible for the disrupted structures of the ionized and H₂-emitting shells within BD+30. Given the relative geometries and thus dynamical ages of BD+30's main structural components, it is furthermore possible that the same jets that ejected the CO bullets are responsible for the generation of the X-ray-emitting hot bubble within the PN. Comparison of alternative viewing geometries for our 3D reconstruction of BD+30°3639 with imagery of NGC 40 and NGC 6720 suggests a common evolutionary path for these nebulae.

  12. Design and Implementation of Real-Time Vehicular Camera for Driver Assistance and Traffic Congestion Estimation.

    Science.gov (United States)

    Son, Sanghyun; Baek, Yunju

    2015-08-18

    As society has developed, the number of vehicles has increased and road conditions have become complicated, increasing the risk of crashes. Therefore, a service that provides safe vehicle control and various types of information to the driver is urgently needed. In this study, we designed and implemented a real-time traffic information system and a smart camera device for smart driver assistance systems. We selected a commercial device for the smart driver assistance systems, and applied a computer vision algorithm to perform image recognition. For application to the dynamic region of interest, dynamic frame skip methods were implemented to perform parallel processing in order to enable real-time operation. In addition, we designed and implemented a model to estimate congestion by analyzing traffic information. The performance of the proposed method was evaluated using images of a real road environment. We found that the processing time improved by 15.4 times when all the proposed methods were applied in the application. Further, we found experimentally that there was little or no change in the recognition accuracy when the proposed method was applied. Using the traffic congestion estimation model, we also found that the average error rate of the proposed model was 5.3%.

  13. Design and Implementation of Real-Time Vehicular Camera for Driver Assistance and Traffic Congestion Estimation

    Directory of Open Access Journals (Sweden)

    Sanghyun Son

    2015-08-01

    Full Text Available As society has developed, the number of vehicles has increased and road conditions have become complicated, increasing the risk of crashes. Therefore, a service that provides safe vehicle control and various types of information to the driver is urgently needed. In this study, we designed and implemented a real-time traffic information system and a smart camera device for smart driver assistance systems. We selected a commercial device for the smart driver assistance systems, and applied a computer vision algorithm to perform image recognition. For application to the dynamic region of interest, dynamic frame skip methods were implemented to perform parallel processing in order to enable real-time operation. In addition, we designed and implemented a model to estimate congestion by analyzing traffic information. The performance of the proposed method was evaluated using images of a real road environment. We found that the processing time improved by 15.4 times when all the proposed methods were applied in the application. Further, we found experimentally that there was little or no change in the recognition accuracy when the proposed method was applied. Using the traffic congestion estimation model, we also found that the average error rate of the proposed model was 5.3%.

  14. Real-time data acquisition and control system for the 349-pixel TACTIC atmospheric Cherenkov imaging telescope

    Energy Technology Data Exchange (ETDEWEB)

    Yadav, K.K.; Koul, R.; Kanda, A.; Kaul, S.R.; Tickoo, A.K. E-mail: aktickoo@apsara.barc.ernet.in; Rannot, R.C.; Chandra, P.; Bhatt, N.; Chouhan, N.; Venugopal, K.; Kothari, M.; Goyal, H.C.; Dhar, V.K.; Kaul, S.K

    2004-07-21

    An interrupt-based multinode data acquisition and control system has been developed for the imaging element of the TACTIC γ-ray telescope. The system, which has been designed around a 3-node network of PCs running the QNX real-time operating system, provides single-point control with elaborate GUI facilities for operating the multi-pixel camera of the telescope. In addition to acquiring data from the 349-pixel photomultiplier-tube-based imaging camera in real time, the system also provides continuous monitoring and control of several vital parameters of the telescope to ensure the quality of the data. The paper describes the salient features of the hardware and software of the data acquisition and control system of the telescope.

  15. Divergence-ratio axi-vision camera (Divcam): A distance mapping camera

    International Nuclear Information System (INIS)

    Iizuka, Keigo

    2006-01-01

    A novel distance mapping camera, the divergence-ratio axi-vision camera (Divcam), is proposed. The decay rate of the illuminating light with distance, due to the divergence of the light, is used as a means of mapping the distance. Resolutions of 10 mm over a range of meters and 0.5 mm over a range of decimeters were achieved. The special features of this camera are its high-resolution real-time operation, simplicity, compactness, light weight, portability, and low fabrication cost. The feasibility of various potential applications is also discussed.
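
    The distance mapping exploits the decay of diverging illumination with distance. A minimal numerical sketch under an idealized two-source, inverse-square model (the point-source assumption and all names below are ours, not details from the paper): two sources separated axially by s illuminate the scene in turn, and the per-pixel intensity ratio yields depth.

    ```python
    import numpy as np

    def depth_from_ratio(img_near, img_far, s=0.1, eps=1e-6):
        """Per-pixel depth from two frames lit by sources offset by s metres.

        Ideal point sources decay as 1/d**2, so
        I_near / I_far = ((d + s) / d)**2  =>  d = s / (sqrt(ratio) - 1).
        """
        ratio = np.clip(img_near / np.maximum(img_far, eps), 1.0 + eps, None)
        return s / (np.sqrt(ratio) - 1.0)
    ```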

  16. Exploiting Auto-Collimation for Real-Time Onboard Monitoring of Space Optical Camera Geometric Parameters

    Science.gov (United States)

    Liu, W.; Wang, H.; Liu, D.; Miu, Y.

    2018-05-01

    Precise geometric parameters are essential to ensure the positioning accuracy of space optical cameras. However, state-of-the-art on-orbit calibration methods inevitably suffer from long update cycles and poor timeliness. To this end, in this paper we exploit the optical auto-collimation principle and propose a real-time onboard calibration scheme for monitoring key geometric parameters. Specifically, in the proposed scheme, auto-collimation devices are first designed by installing collimated light sources, area-array CCDs, and prisms inside the satellite payload system. Using these devices, changes in the geometric parameters are elegantly converted into changes in the spot image positions. The variation of the geometric parameters can then be derived by extracting and processing the spot images. An experimental platform is then set up to verify the feasibility and analyze the precision of the proposed scheme. The experimental results demonstrate that it is feasible to apply the optical auto-collimation principle for real-time onboard monitoring.
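
    The core image-processing step is locating the collimated spot and converting its drift into an angle. A generic sketch of intensity-weighted centroid extraction and the small-angle conversion (the scale factor `f_pix_per_rad` and the threshold are assumed parameters, not values from the paper):

    ```python
    import numpy as np

    def spot_centroid(img, rel_threshold=0.5):
        """Intensity-weighted centroid (x, y) of the brightest spot."""
        mask = img > rel_threshold * img.max()
        ys, xs = np.nonzero(mask)
        w = img[ys, xs].astype(float)
        return (xs * w).sum() / w.sum(), (ys * w).sum() / w.sum()

    def angle_change(c_ref, c_now, f_pix_per_rad):
        """Angular drift implied by a centroid shift (small-angle approx.)."""
        dx, dy = c_now[0] - c_ref[0], c_now[1] - c_ref[1]
        return np.hypot(dx, dy) / f_pix_per_rad  # radians
    ```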

  17. SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry

    Energy Technology Data Exchange (ETDEWEB)

    Kotoku, J; Nakabayashi, S; Kumagai, S; Ishibashi, T; Kobayashi, T [Teikyo University, Itabashi-ku, Tokyo (Japan); Haga, A; Saotome, N [University of Tokyo Hospital, Bunkyo-ku, Tokyo (Japan); Arai, N [Teikyo University Hospital, Itabashi-ku, Tokyo (Japan)

    2014-06-01

    Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort required for patient registration in radiation therapy. To achieve this goal, we introduced the multiple-view stereo technique, which is known from its use in 'photo tourism' applications on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D reconstruction model was built by locating SIFT features, which are robust to rotation and shift, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including camera positions, to minimize the reprojection error by use of the bundle adjustment technique (non-linear optimization). As a final step, we performed dense reconstruction and associated a color with each point using the PMVS library. Results: Surface data were reconstructed well by visual inspection. The human skin is reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI (25861128)

  18. SU-E-J-128: 3D Surface Reconstruction of a Patient Using Epipolar Geometry

    International Nuclear Information System (INIS)

    Kotoku, J; Nakabayashi, S; Kumagai, S; Ishibashi, T; Kobayashi, T; Haga, A; Saotome, N; Arai, N

    2014-01-01

    Purpose: Obtaining 3D surface data of a patient in a non-invasive way can substantially reduce the effort required for patient registration in radiation therapy. To achieve this goal, we introduced the multiple-view stereo technique, which is known from its use in 'photo tourism' applications on the internet. Methods: 70 images were taken with a digital single-lens reflex camera from different angles and positions. The camera positions and angles were inferred later in the reconstruction step. A sparse 3D reconstruction model was built by locating SIFT features, which are robust to rotation and shift, in each image. We then found a set of correspondences between pairs of images by computing the fundamental matrix using the eight-point algorithm with RANSAC. After the pair matching, we optimized the parameters, including camera positions, to minimize the reprojection error by use of the bundle adjustment technique (non-linear optimization). As a final step, we performed dense reconstruction and associated a color with each point using the PMVS library. Results: Surface data were reconstructed well by visual inspection. The human skin is reconstructed well, although the reconstruction was too time-consuming for direct use in daily clinical practice. Conclusion: 3D reconstruction using multi-view stereo geometry is a promising tool for reducing the effort of patient setup. This work was supported by JSPS KAKENHI (25861128)
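
    The pair-matching stage described above (SIFT features, then the eight-point algorithm inside RANSAC) maps directly onto standard OpenCV calls (version ≥ 4.4 for SIFT). An illustrative sketch, not the authors' code:

    ```python
    import cv2
    import numpy as np

    def match_pair(img1, img2):
        """SIFT matching + RANSAC fundamental matrix for one image pair."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        # Lowe's ratio test on 2-nearest-neighbour matches
        raw = cv2.BFMatcher().knnMatch(d1, d2, k=2)
        good = [m for m, n in raw if m.distance < 0.75 * n.distance]
        pts1 = np.float32([k1[m.queryIdx].pt for m in good])
        pts2 = np.float32([k2[m.trainIdx].pt for m in good])
        # Eight-point algorithm inside a RANSAC loop (needs >= 8 matches)
        F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 3.0, 0.99)
        keep = inliers.ravel() == 1
        return F, pts1[keep], pts2[keep]
    ```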

  19. PulseCam: high-resolution blood perfusion imaging using a camera and a pulse oximeter.

    Science.gov (United States)

    Kumar, Mayank; Suliburk, James; Veeraraghavan, Ashok; Sabharwal, Ashutosh

    2016-08-01

    Measuring blood perfusion is important in medical care as an indicator of injury and disease. However, currently available devices for measuring blood perfusion, such as laser Doppler flowmetry, are bulky, expensive, and cumbersome to use. An alternative low-cost and portable camera-based blood perfusion measurement system has recently been proposed, but such a camera-only system produces noisy, low-resolution blood perfusion maps. In this paper, we propose a new multi-sensor modality, named PulseCam, for measuring blood perfusion by combining a traditional pulse oximeter with a video camera in a unique way to provide low-noise, high-resolution blood perfusion maps. Our proposed multi-sensor modality improves the per-pixel signal-to-noise ratio of the measured perfusion map by up to 3 dB and improves the spatial resolution by 2-3 times compared with the best known camera-only methods. Blood perfusion measured in the palm using our PulseCam setup during a post-occlusive reactive hyperemia (PORH) test replicates the standard PORH response curve measured using a laser Doppler flowmetry device, but at much lower cost and with a portable setup, making it suitable for further development as a clinical device.

  20. Computer vision camera with embedded FPGA processing

    Science.gov (United States)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module in which a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks in which low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium-size one, equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a hardware description language (such as VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
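
    As a software reference for the edge-detection demonstration above, the Laplacian-of-Gaussian stage can be sketched with SciPy (an algorithmic illustration only; the paper's version is an FPGA architecture, not software):

    ```python
    import numpy as np
    from scipy import ndimage

    def log_edges(img, sigma=2.0, thresh=0.0):
        """Laplacian-of-Gaussian edge map: filter, then find zero crossings."""
        log = ndimage.gaussian_laplace(img.astype(float), sigma=sigma)
        # an edge exists where the filter response changes sign
        zc = (np.sign(log[:-1, :-1]) != np.sign(log[1:, :-1])) | \
             (np.sign(log[:-1, :-1]) != np.sign(log[:-1, 1:]))
        return zc & (np.abs(log[:-1, :-1]) > thresh)
    ```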

  1. Robust Real-Time Tracking for Visual Surveillance

    Directory of Open Access Journals (Sweden)

    Aguilera Josep

    2007-01-01

    Full Text Available This paper describes a real-time multi-camera surveillance system that can be applied to a range of application domains. This integrated system is designed to observe crowded scenes and has mechanisms to improve tracking of objects that are in close proximity. The four component modules described in this paper are (i) motion detection using a layered background model, (ii) object tracking based on local appearance, (iii) hierarchical object recognition, and (iv) fused multisensor object tracking using multiple features and geometric constraints. This integrated approach to complex scene tracking is validated against a number of representative real-world scenarios to show that robust, real-time analysis can be performed.
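
    A layered background model in the spirit of module (i) can be approximated with OpenCV's mixture-of-Gaussians background subtractor; the following is an illustrative stand-in, not the authors' model:

    ```python
    import cv2

    def detect_motion(frames, history=500, var_threshold=16):
        """Yield per-frame foreground masks from a mixture-of-Gaussians model."""
        bg = cv2.createBackgroundSubtractorMOG2(history=history,
                                                varThreshold=var_threshold,
                                                detectShadows=True)
        for frame in frames:
            mask = bg.apply(frame)          # 0 = bg, 127 = shadow, 255 = fg
            mask = cv2.medianBlur(mask, 5)  # suppress salt noise
            yield cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)[1]
    ```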

  2. 3D Digital Surveying and Modelling of Cave Geometry: Application to Paleolithic Rock Art.

    Science.gov (United States)

    González-Aguilera, Diego; Muñoz-Nieto, Angel; Gómez-Lahoz, Javier; Herrero-Pascual, Jesus; Gutierrez-Alonso, Gabriel

    2009-01-01

    3D digital surveying and modelling of cave geometry represents a relevant approach for research, management and preservation of our cultural and geological legacy. In this paper, a multi-sensor approach based on a terrestrial laser scanner, a high-resolution digital camera and a total station is presented. Two emblematic caves of Paleolithic human occupation situated in northern Spain, "Las Caldas" and "Peña de Candamo", have been chosen to put this approach into practice. As a result, an integral and multi-scalable 3D model is generated which may allow other scientists (prehistorians, geologists, etc.) to work on two different levels, integrating different Paleolithic Art datasets: (1) a basic level based on the accurate, metric support provided by the laser scanner; and (2) an advanced level using range- and image-based modelling.

  3. Multi-stage 3D-2D registration for correction of anatomical deformation in image-guided spine surgery

    Science.gov (United States)

    Ketcha, M. D.; De Silva, T.; Uneri, A.; Jacobson, M. W.; Goerres, J.; Kleinszig, G.; Vogt, S.; Wolinsky, J.-P.; Siewerdsen, J. H.

    2017-06-01

    A multi-stage image-based 3D-2D registration method is presented that maps annotations in a 3D image (e.g. point labels annotating individual vertebrae in preoperative CT) to an intraoperative radiograph in which the patient has undergone non-rigid anatomical deformation due to changes in patient positioning or due to the intervention itself. The proposed method (termed msLevelCheck) extends a previous rigid registration solution (LevelCheck) to provide an accurate mapping of vertebral labels in the presence of spinal deformation. The method employs a multi-stage series of rigid 3D-2D registrations performed on sets of automatically determined and increasingly localized sub-images, with the final stage achieving a rigid mapping for each label to yield a locally rigid yet globally deformable solution. The method was evaluated first in a phantom study in which a CT image of the spine was acquired, followed by a series of 7 mobile radiographs with an increasing degree of deformation applied. Second, the method was validated using a clinical data set of patients exhibiting strong spinal deformation during thoracolumbar spine surgery. Registration accuracy was assessed using projection distance error (PDE) and failure rate (PDE > 20 mm, i.e. a label registered outside the vertebra). The msLevelCheck method was able to register all vertebrae accurately for all cases of deformation in the phantom study, improving the maximum PDE of the rigid method from 22.4 mm to 3.9 mm. The clinical study demonstrated the feasibility of the approach in real patient data by accurately registering all vertebral labels in each case, eliminating all instances of failure encountered in the conventional rigid method. The multi-stage approach demonstrated accurate mapping of vertebral labels in the presence of strong spinal deformation. The msLevelCheck method maintains other advantageous aspects of the original LevelCheck method (e.g. compatibility with standard clinical workflow, large
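
    The evaluation metric is simple to state in code. A sketch of PDE and the failure-rate criterion, assuming a 3x4 projection matrix `P` mapping CT coordinates to radiograph millimetres (the names are ours, for illustration):

    ```python
    import numpy as np

    def projection_distance_error(P, label_3d, label_2d):
        """PDE: distance between a projected 3D label and its 2D annotation.

        P is a 3x4 projection matrix acting on homogeneous 3D points;
        image coordinates are assumed to be expressed in millimetres.
        """
        x = P @ np.append(label_3d, 1.0)
        proj = x[:2] / x[2]
        return float(np.linalg.norm(proj - np.asarray(label_2d, float)))

    def failure_rate(pdes, threshold=20.0):
        """Fraction of labels registered outside the vertebra (PDE > 20 mm)."""
        pdes = np.asarray(pdes, float)
        return float((pdes > threshold).mean())
    ```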

  4. MULTI SENSOR DATA INTEGRATION FOR AN ACCURATE 3D MODEL GENERATION

    Directory of Open Access Journals (Sweden)

    S. Chhatkuli

    2015-05-01

    Full Text Available The aim of this paper is to introduce a novel technique for data integration between two different data sets, i.e. a laser-scanned RGB point cloud and a 3D model derived from oblique imagery, to create a 3D model with more detail and better accuracy. In general, aerial imagery is used to create a 3D city model. Aerial imagery produces overall decent 3D city models and generally suits the generation of 3D models of building roofs and some non-complex terrain. However, the 3D model generated automatically from aerial imagery typically lacks accuracy for roads under bridges, details under tree canopy, isolated trees, etc. Moreover, it often suffers from undulated road surfaces, non-conforming building shapes, and the loss of minute details like street furniture. On the other hand, laser-scanned data and images taken from a mobile vehicle platform can produce more detailed 3D models of roads, street furniture, details under bridges, etc. However, laser-scanned data and images from a mobile vehicle are not suitable for acquiring detailed 3D models of tall buildings, roof tops, and so forth. Our proposed approach to integrating multi-sensor data compensates for each source's weaknesses and helps to create a very detailed 3D model with better accuracy. Moreover, additional details like isolated trees and street furniture, which were missing in the original 3D model derived from aerial imagery, could also be integrated into the final model automatically. During the process, noise in the laser-scanned data, for example people and vehicles on the road, was also automatically removed. Hence, even though the two datasets were acquired in different time periods, the integrated dataset, i.e. the final 3D model, was generally noise-free and without unnecessary details.

  5. NEW METHOD FOR THE CALIBRATION OF MULTI-CAMERA MOBILE MAPPING SYSTEMS

    Directory of Open Access Journals (Sweden)

    A. P. Kersting

    2012-07-01

    Full Text Available Mobile Mapping Systems (MMS) allow for fast and cost-effective collection of geo-spatial information. Such systems integrate a set of imaging sensors and a position and orientation system (POS), which entails GPS and INS units. System calibration is a crucial process to ensure the attainment of the expected accuracy of such systems. It involves the calibration of the individual sensors as well as the calibration of the mounting parameters relating the system components. The mounting parameters of multi-camera MMS include two sets of relative orientation parameters (ROP): the lever arm offsets and the boresight angles relating the cameras and the IMU body frame, and the ROP among the cameras (in the absence of GPS/INS data). In this paper, a novel single-step calibration method, which has the ability of estimating these two sets of ROP, is devised. Besides the ability to estimate the ROP among the cameras, the proposed method can use such parameters as prior information in the ISO procedure. The implemented procedure consists of an integrated sensor orientation (ISO) where the GPS/INS-derived position and orientation and the system mounting parameters are directly incorporated in the collinearity equations. The concept of modified collinearity equations has been used by a few authors for single-camera systems. In this paper, a new modification to the collinearity equations for GPS/INS-assisted multi-camera systems is introduced. Experimental results using a real dataset demonstrate the feasibility of the proposed method.

  6. New Method for the Calibration of Multi-Camera Mobile Mapping Systems

    Science.gov (United States)

    Kersting, A. P.; Habib, A.; Rau, J.

    2012-07-01

    Mobile Mapping Systems (MMS) allow for fast and cost-effective collection of geo-spatial information. Such systems integrate a set of imaging sensors and a position and orientation system (POS), which entails GPS and INS units. System calibration is a crucial process to ensure the attainment of the expected accuracy of such systems. It involves the calibration of the individual sensors as well as the calibration of the mounting parameters relating the system components. The mounting parameters of multi-camera MMS include two sets of relative orientation parameters (ROP): the lever arm offsets and the boresight angles relating the cameras and the IMU body frame, and the ROP among the cameras (in the absence of GPS/INS data). In this paper, a novel single-step calibration method, which has the ability of estimating these two sets of ROP, is devised. Besides the ability to estimate the ROP among the cameras, the proposed method can use such parameters as prior information in the ISO procedure. The implemented procedure consists of an integrated sensor orientation (ISO) where the GPS/INS-derived position and orientation and the system mounting parameters are directly incorporated in the collinearity equations. The concept of modified collinearity equations has been used by a few authors for single-camera systems. In this paper, a new modification to the collinearity equations for GPS/INS-assisted multi-camera systems is introduced. Experimental results using a real dataset demonstrate the feasibility of the proposed method.
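
    For reference, the unmodified collinearity equations that both records build on take the standard photogrammetric form (the paper's GPS/INS-assisted modification is not reproduced here):

    ```latex
    x - x_0 = -f\,\frac{r_{11}(X - X_c) + r_{21}(Y - Y_c) + r_{31}(Z - Z_c)}
                       {r_{13}(X - X_c) + r_{23}(Y - Y_c) + r_{33}(Z - Z_c)},
    \qquad
    y - y_0 = -f\,\frac{r_{12}(X - X_c) + r_{22}(Y - Y_c) + r_{32}(Z - Z_c)}
                       {r_{13}(X - X_c) + r_{23}(Y - Y_c) + r_{33}(Z - Z_c)}
    ```

    where (x_0, y_0, f) are the interior orientation parameters, (X_c, Y_c, Z_c) is the camera perspective centre, and r_ij are the elements of the rotation matrix between image and object space.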

  7. Camera-Model Identification Using Markovian Transition Probability Matrix

    Science.gov (United States)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

    Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components of the JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four different directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify the statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are directly used as features for classification purposes. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
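
    The feature construction can be sketched for a single direction (the paper uses four): threshold a difference array at ±T and estimate transition probabilities between the clipped values, giving a (2T+1)² feature vector. The array name and the value of T below are assumptions for illustration:

    ```python
    import numpy as np

    def markov_features(arr, T=4):
        """Transition probability matrix of a thresholded difference array.

        `arr` is a 2-D array (e.g. a JPEG 2-D array of the Y component);
        horizontal difference values are clipped to [-T, T] and the
        transition probabilities between consecutive values are returned.
        """
        diff = np.clip(arr[:, :-1] - arr[:, 1:], -T, T).astype(int)
        a, b = diff[:, :-1] + T, diff[:, 1:] + T     # consecutive pairs
        counts = np.zeros((2 * T + 1, 2 * T + 1))
        np.add.at(counts, (a.ravel(), b.ravel()), 1)
        row_sums = counts.sum(axis=1, keepdims=True)
        return (counts / np.maximum(row_sums, 1)).ravel()
    ```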

  8. C.C.D. readout of a picosecond streak camera with an intensified C.C.D

    International Nuclear Information System (INIS)

    Lemonier, M.; Richard, J.C.; Cavailler, C.; Mens, A.; Raze, G.

    1984-08-01

    This paper deals with a digital streak camera readout device. The device consists of a low-light-level television camera, made of a solid-state C.C.D. array coupled to an image intensifier, associated with a video digitizer coupled to a micro-computer system. The streak camera images are picked up as a video signal, digitized, and stored. This system allows fast recording and automatic processing of the data provided by the streak tube.

  9. Spacecraft 3D Augmented Reality Mobile App

    Science.gov (United States)

    Hussey, Kevin J.; Doronila, Paul R.; Kumanchik, Brian E.; Chan, Evan G.; Ellison, Douglas J.; Boeck, Andrea; Moore, Justin M.

    2013-01-01

    The Spacecraft 3D application allows users to learn about and interact with iconic NASA missions in a new and immersive way using common mobile devices. Using Augmented Reality (AR) techniques to project 3D renditions of the mission spacecraft into real-world surroundings, users can interact with and learn about Curiosity, GRAIL, Cassini, and Voyager. Additional updates on future missions, animations, and information will be ongoing. Using a printed AR Target and camera on a mobile device, users can get up close with these robotic explorers, see how some move, and learn about these engineering feats, which are used to expand knowledge and understanding about space. The software receives input from the mobile device's camera to recognize the presence of an AR marker in the camera's field of view. It then displays a 3D rendition of the selected spacecraft in the user's physical surroundings, on the mobile device's screen, while it tracks the device's movement in relation to the physical position of the spacecraft's 3D image on the AR marker.

  10. Reconfiguration in FPGA-Based Multi-Core Platforms for Hard Real-Time Applications

    DEFF Research Database (Denmark)

    Pezzarossa, Luca; Schoeberl, Martin; Sparsø, Jens

    2016-01-01

    In general-purpose computing multi-core platforms, hardware accelerators and reconfiguration are means to improve performance, i.e., the average-case execution time of a software application. In hard real-time systems, such average-case speed-up is not in itself relevant - it is the worst-case execution time of the tasks of an application that determines the system's ability to respond in time. To support this focus, the platform must provide service guarantees for both communication and computation resources. In addition, many hard real-time applications have multiple modes of operation, and each mode has specific requirements. An interesting perspective on reconfigurable computing is to exploit run-time reconfiguration to support mode changes. In this paper we explore approaches to reconfiguration of communication and computation resources in the T-CREST hard real-time multi-core platform

  11. The Use of IMMUs in a Water Environment: Instrument Validation and Application of 3D Multi-Body Kinematic Analysis in Medicine and Sport.

    Science.gov (United States)

    Mangia, Anna Lisa; Cortesi, Matteo; Fantozzi, Silvia; Giovanardi, Andrea; Borra, Davide; Gatta, Giorgio

    2017-04-22

    The aims of the present study were the instrumental validation of inertial-magnetic measurement units (IMMUs) in water and the description of their use in clinical and sports aquatic applications through customized 3D multi-body models. First, several tests were performed to map the magnetic field in the swimming pool and to identify the best volume for experimental test acquisition, with a mean dynamic orientation error lower than 5°. Subsequently, gait and swimming analyses were explored in terms of spatiotemporal and joint kinematics variables. The extraction of only spatiotemporal parameters highlighted several critical issues, and the joint kinematic information was shown to be an added value for both rehabilitative and sport training purposes. Furthermore, 3D joint kinematics obtained using the IMMUs provided quantitative information similar to that of more expensive and bulky systems, but with a simpler and faster setup preparation, a less time-consuming processing phase, and the possibility to record and analyze a higher number of strides/strokes without the limitations imposed by cameras.

  12. Monoplane 3D-2D registration of cerebral angiograms based on multi-objective stratified optimization

    Science.gov (United States)

    Aksoy, T.; Špiclin, Ž.; Pernuš, F.; Unal, G.

    2017-12-01

    Registration of 3D pre-interventional to 2D intra-interventional medical images has an increasingly important role in surgical planning, navigation and treatment, because it enables the physician to co-locate depth information given by pre-interventional 3D images with the live information in intra-interventional 2D images such as x-ray. Most tasks during image-guided interventions are carried out under a monoplane x-ray, which presents a highly ill-posed problem for state-of-the-art 3D to 2D registration methods. To address the problem of rigid 3D-2D monoplane registration we propose a novel multi-objective stratified parameter optimization, wherein a small set of high-magnitude intensity gradients are matched between the 3D and 2D images. The stratified parameter optimization matches rotation templates to depth templates, the first sampled from projected 3D gradients and the second from the 2D image gradients, so as to recover the 3D rigid-body rotations and out-of-plane translation. The objective for matching was the gradient magnitude correlation coefficient, which is invariant to in-plane translation. The in-plane translations are then found by locating the maximum of the gradient phase correlation between the best matching pair of rotation and depth templates. On twenty pairs of 3D and 2D images of ten patients undergoing cerebral endovascular image-guided intervention, the 3D to monoplane 2D registration experiments were set up with a rather high range of initial mean target registration errors from 0 to 100 mm. The proposed method effectively reduced the registration error to below 2 mm, which was further refined by a fast iterative method and resulted in a high final registration accuracy (0.40 mm) and high success rate (> 96%). Taking into account a fast execution time below 10 s, the observed performance of the proposed method shows a high potential for application into clinical image-guidance systems.
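
    The in-plane translation step uses phase correlation, which is generic enough to sketch with an FFT (an illustration of the principle, not the authors' gradient-weighted implementation):

    ```python
    import numpy as np

    def phase_correlation(a, b, eps=1e-12):
        """Integer-pixel shift t such that b(n) ~ a(n - t), for 2-D arrays.

        The normalized cross-power spectrum has a delta at the translation;
        sub-pixel refinement is omitted here.
        """
        Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
        cross = Fb * np.conj(Fa)
        corr = np.fft.ifft2(cross / np.maximum(np.abs(cross), eps)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # map wrapped indices to signed shifts
        if dy > a.shape[0] // 2:
            dy -= a.shape[0]
        if dx > a.shape[1] // 2:
            dx -= a.shape[1]
        return dy, dx
    ```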

  13. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    Science.gov (United States)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as that in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by the robotic cameras arranged in a two-dimensional array. The models are converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  14. 3D vision upgrade kit for TALON robot

    Science.gov (United States)

    Edmondson, Richard; Vaden, Justin; Hyatt, Brian; Morris, James; Pezzaniti, J. Larry; Chenault, David B.; Tchon, Joe; Barnidge, Tracy; Kaufman, Seth; Pettijohn, Brad

    2010-04-01

    In this paper, we report on the development of a 3D vision field upgrade kit for the TALON robot, consisting of a replacement flat-panel stereoscopic display and multiple stereo camera systems. An assessment of the system's use for robotic driving, manipulation, and surveillance operations was conducted. The 3D vision system was integrated onto a TALON IV robot and Operator Control Unit (OCU) such that stock components could be electrically disconnected and removed, and upgrade components coupled directly to the mounting and electrical connections. A replacement display, a replacement mast camera with zoom, auto-focus, and variable convergence, and a replacement gripper camera with fixed focus and zoom comprise the upgrade kit. The stereo mast camera allows for improved driving and situational awareness as well as scene survey. The stereo gripper camera allows for improved manipulation in typical TALON missions.

  15. Accuracy Potential and Applications of MIDAS Aerial Oblique Camera System

    Science.gov (United States)

    Madani, M.

    2012-07-01

    Airborne oblique cameras such as the Fairchild T-3A were initially used for military reconnaissance in the 1930s. A modern professional digital oblique camera such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional imagery for users for visualizations, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for assessing and reviewing changes to the local government tax base, property valuation assessment, and buying and selling of residential/commercial property, supporting better decisions in a more timely manner. Oblique imagery is also used for infrastructure monitoring, helping to ensure safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired one MIDAS from TrackAir in 2011. This system consists of four tilted (45 degrees) cameras and one vertical camera connected to a dedicated data acquisition computer system. The 5 digital cameras are based on the Canon EOS 1DS Mark3 with Zeiss lenses. The CCD size is 5,616 by 3,744 (21 MPixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique (28 mm/50 mm) and (50 mm/50 mm)) were flown over downtown Colorado Springs, Colorado. Boresight flights for the 28 mm nadir camera were flown at 600 m and 1,200 m, and for the 50 mm nadir camera at 750 m and 1,500 m. Cameras were calibrated by using a 3D cage and multiple convergent images utilizing the Australis model. In this paper, the MIDAS system is described, a number of real data sets collected during the aforementioned flights are presented together with their associated flight configurations, the data processing workflow, system calibration and quality control workflows are highlighted, and the achievable accuracy is presented in some detail. This study revealed that an expected accuracy of about 1 to 1.5 GSD (Ground Sample Distance) for planimetry and about 2 to 2.5 GSD for vertical can be achieved. Remaining systematic

  16. ACCURACY POTENTIAL AND APPLICATIONS OF MIDAS AERIAL OBLIQUE CAMERA SYSTEM

    Directory of Open Access Journals (Sweden)

    M. Madani

    2012-07-01

    Full Text Available Airborne oblique cameras such as the Fairchild T-3A were initially used for military reconnaissance in the 1930s. A modern professional digital oblique camera such as MIDAS (Multi-camera Integrated Digital Acquisition System) is used to generate lifelike three-dimensional imagery for users for visualizations, GIS applications, architectural modeling, city modeling, games, simulators, etc. Oblique imagery provides the best vantage for assessing and reviewing changes to the local government tax base, property valuation assessment, and buying and selling of residential/commercial property, supporting better decisions in a more timely manner. Oblique imagery is also used for infrastructure monitoring, helping to ensure safe operation of transportation, utilities, and facilities. Sanborn Mapping Company acquired one MIDAS from TrackAir in 2011. This system consists of four tilted (45 degrees) cameras and one vertical camera connected to a dedicated data acquisition computer system. The 5 digital cameras are based on the Canon EOS 1DS Mark3 with Zeiss lenses. The CCD size is 5,616 by 3,744 (21 MPixels) with a pixel size of 6.4 microns. Multiple flights using different camera configurations (nadir/oblique (28 mm/50 mm) and (50 mm/50 mm)) were flown over downtown Colorado Springs, Colorado. Boresight flights for the 28 mm nadir camera were flown at 600 m and 1,200 m, and for the 50 mm nadir camera at 750 m and 1,500 m. Cameras were calibrated by using a 3D cage and multiple convergent images utilizing the Australis model. In this paper, the MIDAS system is described, a number of real data sets collected during the aforementioned flights are presented together with their associated flight configurations, the data processing workflow, system calibration and quality control workflows are highlighted, and the achievable accuracy is presented in some detail. This study revealed that an expected accuracy of about 1 to 1.5 GSD (Ground Sample Distance) for planimetry and about 2 to 2.5 GSD for vertical can be achieved. Remaining
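
    The accuracy figures above are quoted in ground sample distance, which follows directly from pixel size, focal length, and flying height. A quick numeric check pairing the stated values (the pairing of the 28 mm lens with the 600 m altitude is our choice of example):

    ```python
    def gsd(pixel_size_m, focal_length_m, altitude_m):
        """Ground sample distance of a nadir frame camera, in metres/pixel."""
        return pixel_size_m * altitude_m / focal_length_m

    # 6.4 micron pixels behind a 28 mm lens flown at 600 m:
    print(gsd(6.4e-6, 28e-3, 600.0))  # ~0.137 m per pixel
    ```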

  17. BrachyView: proof-of-principle of a novel in-body gamma camera for low dose-rate prostate brachytherapy.

    Science.gov (United States)

    Petasecca, M; Loo, K J; Safavi-Naeini, M; Han, Z; Metcalfe, P E; Meikle, S; Pospisil, S; Jakubek, J; Bucci, J A; Zaider, M; Lerch, M L F; Qi, Y; Rosenfeld, A B

    2013-04-01

    The conformity of the achieved dose distribution to the treatment plan strongly correlates with the accuracy of seed implantation in a prostate brachytherapy treatment procedure. Incorrect seed placement leads to both short and long term complications, including urethral and rectal toxicity. The authors present BrachyView, a novel concept for a fast intraoperative treatment planning system, to provide real-time seed placement information based on in-body gamma camera data. BrachyView combines the high spatial resolution of a pixellated silicon detector (Medipix2) with the volumetric information acquired by a transrectal ultrasound (TRUS). The two systems will be embedded in the same probe so as to provide anatomically correct seed positions for intraoperative planning and postimplant dosimetry. Dosimetric calculations are based on the TG-43 method using the real positions of the seeds. The purpose of this paper is to demonstrate the feasibility of BrachyView using the Medipix2 pixel detector and a pinhole collimator to reconstruct the real-time 3D position of low dose-rate brachytherapy seeds in a phantom. BrachyView incorporates three Medipix2 detectors coupled to a multipinhole collimator. Three-dimensionally triangulated seed positions from multiple planar images are used to determine the seed placement in a PMMA prostate phantom in real time. MATLAB code was used to test the reconstruction method and to optimize the device geometry. The results presented in this paper show a 3D position reconstruction accuracy for the seeds in the range of 0.5-3 mm over a 10-60 mm seed-to-detector distance interval (Z direction). The BrachyView system also demonstrates a spatial resolution of 0.25 mm in the XY plane for sources at a 10 mm distance from the Medipix2 detector plane, comparable to the theoretical value calculated for an equivalent gamma camera arrangement. The authors successfully demonstrated the capability of BrachyView for real-time imaging (using a 3 s

  18. BrachyView: Proof-of-principle of a novel in-body gamma camera for low dose-rate prostate brachytherapy

    International Nuclear Information System (INIS)

    Petasecca, M.; Loo, K. J.; Safavi-Naeini, M.; Han, Z.; Metcalfe, P. E.; Lerch, M. L. F.; Qi, Y.; Rosenfeld, A. B.; Meikle, S.; Pospisil, S.; Jakubek, J.; Bucci, J. A.; Zaider, M.

    2013-01-01

    Purpose: The conformity of the achieved dose distribution to the treatment plan strongly correlates with the accuracy of seed implantation in a prostate brachytherapy treatment procedure. Incorrect seed placement leads to both short and long term complications, including urethral and rectal toxicity. The authors present BrachyView, a novel concept for a fast intraoperative treatment planning system, to provide real-time seed placement information based on in-body gamma camera data. BrachyView combines the high spatial resolution of a pixellated silicon detector (Medipix2) with the volumetric information acquired by a transrectal ultrasound (TRUS). The two systems will be embedded in the same probe so as to provide anatomically correct seed positions for intraoperative planning and postimplant dosimetry. Dosimetric calculations are based on the TG-43 method using the real positions of the seeds. The purpose of this paper is to demonstrate the feasibility of BrachyView using the Medipix2 pixel detector and a pinhole collimator to reconstruct the real-time 3D position of low dose-rate brachytherapy seeds in a phantom. Methods: BrachyView incorporates three Medipix2 detectors coupled to a multipinhole collimator. Three-dimensionally triangulated seed positions from multiple planar images are used to determine the seed placement in a PMMA prostate phantom in real time. MATLAB code was used to test the reconstruction method and to optimize the device geometry. Results: The results presented in this paper show a 3D position reconstruction accuracy for the seeds in the range of 0.5–3 mm over a 10–60 mm seed-to-detector distance interval (Z direction). The BrachyView system also demonstrates a spatial resolution of 0.25 mm in the XY plane for sources at a 10 mm distance from the Medipix2 detector plane, comparable to the theoretical value calculated for an equivalent gamma camera arrangement. The authors successfully demonstrated the capability of BrachyView for
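
    Triangulating a seed from several pinhole views reduces to intersecting back-projected rays. A generic linear least-squares sketch (pinhole positions `origins` and unit ray `directions` are assumed inputs; this is standard ray triangulation, not the BrachyView MATLAB code):

    ```python
    import numpy as np

    def triangulate(origins, directions):
        """Least-squares intersection point of rays c_i + t * d_i.

        Solves sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) c_i,
        the normal equations for the point closest to all rays.
        """
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for c, d in zip(origins, directions):
            d = np.asarray(d, float) / np.linalg.norm(d)
            M = np.eye(3) - np.outer(d, d)
            A += M
            b += M @ np.asarray(c, float)
        return np.linalg.solve(A, b)
    ```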

  19. Underwater video enhancement using multi-camera super-resolution

    Science.gov (United States)

    Quevedo, E.; Delory, E.; Callicó, G. M.; Tobajas, F.; Sarmiento, R.

    2017-12-01

    Image spatial resolution is critical in several fields, such as medicine, communications, satellite imaging, and underwater applications. While a large variety of techniques for image restoration and enhancement has been proposed in the literature, this paper focuses on a novel Super-Resolution fusion algorithm based on a Multi-Camera environment that makes it possible to enhance the quality of underwater video sequences without significantly increasing computation. In order to compare the quality enhancement, two objective quality metrics have been used: PSNR (Peak Signal-to-Noise Ratio) and the SSIM (Structural SIMilarity) index. Results have shown that the proposed method enhances the objective quality of several underwater sequences, avoiding the appearance of undesirable artifacts, with respect to basic fusion Super-Resolution algorithms.
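
    Both metrics are standard. PSNR, for instance, is a one-liner over the mean squared error (assuming 8-bit images; SSIM is available in scikit-image as `structural_similarity`):

    ```python
    import numpy as np

    def psnr(ref, test, peak=255.0):
        """Peak signal-to-noise ratio in dB between two same-size images."""
        mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
    ```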

  20. Assessment of skin wound healing with a multi-aperture camera

    Science.gov (United States)

    Nabili, Marjan; Libin, Alex; Kim, Loan; Groah, Susan; Ramella-Roman, Jessica C.

    2009-02-01

    A clinical trial was conducted at the National Rehabilitation Hospital on 15 individuals to assess whether Rheparan Skin, a bio-engineered component of the extracellular matrix of the skin, is effective at promoting healing of a variety of wounds. Along with standard clinical outcome measures, a spectroscopic camera was used to assess the efficacy of Rheparan Skin. Gauzes soaked with Rheparan Skin were placed on volunteers' wounds for 5 minutes twice weekly for four weeks. Images of the wounds were taken using a multi-spectral camera and a digital camera at baseline and weekly thereafter. Spectral images collected at different wavelengths were combined with optical skin models to quantify parameters of interest such as oxygen saturation (SO2), water content, and melanin concentration. A digital wound measurement system (VERG) was also used to measure the size of the wound. 9 of the 15 measured subjects showed a definitive improvement post-treatment in the form of a decrease in wound area. 7 of these 9 individuals also showed an increase in oxygen saturation in the ulcerated area during the trial. A similar trend was seen in other metrics. Spectral imaging of skin wounds can be a valuable tool to establish wound-healing trends and to clarify healing mechanisms.

  1. Automatic Texture Optimization for 3D Urban Reconstruction

    Directory of Open Access Journals (Sweden)

    LI Ming

    2017-03-01

    Full Text Available In order to solve the problem of texture optimization in 3D city reconstruction using multi-lens oblique images, this paper presents a method for seamless texture model reconstruction. First, it corrects the radiometric information of the images by means of camera response functions and the image dark channel. Then, according to the correspondence between the terrain triangular mesh surface model and the images, it implements occlusion detection by a sparse triangulation method and establishes the list of visible texture triangles. Finally, combining the triangles' topology in the 3D triangular mesh surface model with the means and variances of the images, it constructs a graph-cuts-based texture optimization algorithm under the MRF (Markov random field) framework to solve the discrete label problem of texture selection and clustering, ensuring the consistency of adjacent triangles in texture mapping and achieving seamless texture reconstruction of the city. The experimental results verify the validity and superiority of the proposed method.
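
    The label-selection step is the usual discrete MRF energy over the mesh, with one label (source image) per triangle. In a standard form (our paraphrase of the setup, not the paper's exact terms):

    ```latex
    E(l) \;=\; \sum_{t \in \mathcal{T}} D_t(l_t)
         \;+\; \lambda \sum_{(s,t) \in \mathcal{N}} V_{st}(l_s, l_t)
    ```

    Here D_t(l_t) scores how well image l_t textures triangle t, V_st penalizes color seams between adjacent triangles assigned different images, and graph cuts (e.g. alpha-expansion) minimizes E over the labeling l.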

  2. Contextual Multi-Scale Region Convolutional 3D Network for Activity Detection

    KAUST Repository

    Bai, Yancheng

    2018-01-28

    Activity detection is a fundamental problem in computer vision. Detecting activities of different temporal scales is particularly challenging. In this paper, we propose the contextual multi-scale region convolutional 3D network (CMS-RC3D) for activity detection. To deal with the inherent temporal scale variability of activity instances, the temporal feature pyramid is used to represent activities of different temporal scales. On each level of the temporal feature pyramid, an activity proposal detector and an activity classifier are learned to detect activities of specific temporal scales. Temporal contextual information is fused into activity classifiers for better recognition. More importantly, the entire model at all levels can be trained end-to-end. Our CMS-RC3D detector can deal with activities at all temporal scale ranges with only a single pass through the backbone network. We test our detector on two public activity detection benchmarks, THUMOS14 and ActivityNet. Extensive experiments show that the proposed CMS-RC3D detector outperforms state-of-the-art methods on THUMOS14 by a substantial margin and achieves comparable results on ActivityNet despite using a shallow feature extractor.

  3. Contextual Multi-Scale Region Convolutional 3D Network for Activity Detection

    KAUST Repository

    Bai, Yancheng; Xu, Huijuan; Saenko, Kate; Ghanem, Bernard

    2018-01-01

    Activity detection is a fundamental problem in computer vision. Detecting activities of different temporal scales is particularly challenging. In this paper, we propose the contextual multi-scale region convolutional 3D network (CMS-RC3D) for activity detection. To deal with the inherent temporal scale variability of activity instances, the temporal feature pyramid is used to represent activities of different temporal scales. On each level of the temporal feature pyramid, an activity proposal detector and an activity classifier are learned to detect activities of specific temporal scales. Temporal contextual information is fused into activity classifiers for better recognition. More importantly, the entire model at all levels can be trained end-to-end. Our CMS-RC3D detector can deal with activities at all temporal scale ranges with only a single pass through the backbone network. We test our detector on two public activity detection benchmarks, THUMOS14 and ActivityNet. Extensive experiments show that the proposed CMS-RC3D detector outperforms state-of-the-art methods on THUMOS14 by a substantial margin and achieves comparable results on ActivityNet despite using a shallow feature extractor.

  4. MAL Daylight Photodynamic Therapy for Actinic Keratosis: Clinical and Imaging Evaluation by 3D Camera.

    Science.gov (United States)

    Cantisani, Carmen; Paolino, Giovanni; Pellacani, Giovanni; Didona, Dario; Scarno, Marco; Faina, Valentina; Gobello, Tommaso; Calvieri, Stefano

    2016-07-11

    Non-melanoma skin cancer is the most common skin cancer, with an incidence that varies widely worldwide. Among these lesions, actinic keratoses (AK), considered by some authors to be in situ squamous cell carcinoma (SCC), are the most common and reflect abnormal multistep skin cell development due to chronic ultraviolet (UV) light exposure. No ideal treatment exists, but the potential risk of development into a more invasive form requires prompt treatment. As patients usually present with multiple AK on fields of actinic damage, there is a need for effective, safe, simple and short treatments which allow the treatment of large areas. To achieve this, daylight photodynamic therapy (DL-PDT) is an innovative treatment for multiple mild actinic keratoses that is well tolerated by patients. Patients allocated to the PDT unit, affected by multiple mild, moderate and severe actinic keratoses on sun-exposed areas and treated with DL-PDT, were clinically evaluated at baseline and every three months with an Antera 3D (Miravex©) camera. Clinical and 3D images were acquired at each clinical check, approximately every three months. In this retrospective study, 331 patients (56.7% male, 43.3% female) were treated with DL-PDT. We observed full clearance in more than two-thirds of patients after one or two treatments. Different responses depend on the number of lesions and on their severity; for patients with 1-3 lesions and with grade I or II AK, full clearance was reached in 85% of cases with a maximum of two treatments. DL-PDT in general improved skin tone and erased sun damage. Evaluating the Antera 3D images for hemoglobin concentration and pigmentation, an improvement in skin color and tone was observed in 310 patients. DL-PDT appears to be a promising, effective, simple, tolerable and practical treatment for actinic damage associated with AK, and even treatment of large areas can be performed with little or no pain. The 3D imaging allowed for quantifying in real time the aesthetic benefits of DL

  5. MAL Daylight Photodynamic Therapy for Actinic Keratosis: Clinical and Imaging Evaluation by 3D Camera

    Directory of Open Access Journals (Sweden)

    Carmen Cantisani

    2016-07-01

    Full Text Available Non-melanoma skin cancer is the most common skin cancer, with an incidence that varies widely worldwide. Among these lesions, actinic keratoses (AK), considered by some authors to be in situ squamous cell carcinoma (SCC), are the most common and reflect abnormal multistep skin cell development due to chronic ultraviolet (UV) light exposure. No ideal treatment exists, but the potential risk of development into a more invasive form requires prompt treatment. As patients usually present with multiple AK on fields of actinic damage, there is a need for effective, safe, simple and short treatments which allow the treatment of large areas. To achieve this, daylight photodynamic therapy (DL-PDT) is an innovative treatment for multiple mild actinic keratoses that is well tolerated by patients. Patients allocated to the PDT unit, affected by multiple mild, moderate and severe actinic keratoses on sun-exposed areas and treated with DL-PDT, were clinically evaluated at baseline and every three months with an Antera 3D (Miravex©) camera. Clinical and 3D images were acquired at each clinical check, approximately every three months. In this retrospective study, 331 patients (56.7% male, 43.3% female) were treated with DL-PDT. We observed full clearance in more than two-thirds of patients after one or two treatments. Different responses depend on the number of lesions and on their severity; for patients with 1–3 lesions and with grade I or II AK, full clearance was reached in 85% of cases with a maximum of two treatments. DL-PDT in general improved skin tone and erased sun damage. Evaluating the Antera 3D images for hemoglobin concentration and pigmentation, an improvement in skin color and tone was observed in 310 patients. DL-PDT appears to be a promising, effective, simple, tolerable and practical treatment for actinic damage associated with AK, and even treatment of large areas can be performed with little or no pain. The 3D imaging allowed for quantifying in real time the aesthetic benefits

  6. Soft tissue navigation for laparoscopic prostatectomy: evaluation of camera pose estimation for enhanced visualization

    Science.gov (United States)

    Baumhauer, M.; Simpfendörfer, T.; Schwarz, R.; Seitel, M.; Müller-Stich, B. P.; Gutt, C. N.; Rassweiler, J.; Meinzer, H.-P.; Wolf, I.

    2007-03-01

    We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and subsequently tracked by the endoscope camera. Camera Pose Estimation methods directly determine position and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any external tracking device for registration of endoscope camera and ultrasonography probe. In addition to a preoperative planning step in which the navigation targets are defined, the procedure consists of two main steps which are carried out during the intervention: First, the preoperatively prepared planning data is registered with an intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical structures can be superimposed on the video image. This paper focuses on the latter step. We have implemented several promising real-time algorithms and incorporated them into the Open Source Toolkit MITK (www.mitk.org). Furthermore, we have evaluated them for minimally invasive surgery (MIS) navigation scenarios. For this purpose, a virtual evaluation environment has been developed, which allows for the simulation of navigation targets and navigation aids, including their measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an inaccurate pose and the resulting displacement of navigation targets in Augmented Reality.
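
    Recovering the camera's pose from the tracked needle-shaped navigation aids is a classic Perspective-n-Point problem. With OpenCV it can be sketched as follows (an illustration of the principle, not the MITK implementation evaluated in the paper):

    ```python
    import cv2
    import numpy as np

    def estimate_pose(obj_pts, img_pts, K, dist=None):
        """Camera pose from >= 4 known 3D aid positions and their 2D detections.

        obj_pts: (N, 3) aid coordinates in the TRUS/world frame,
        img_pts: (N, 2) detections in the endoscope image, K: 3x3 intrinsics.
        """
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(obj_pts, np.float64),
            np.asarray(img_pts, np.float64),
            K, dist, flags=cv2.SOLVEPNP_ITERATIVE)
        R, _ = cv2.Rodrigues(rvec)  # rotation mapping world to camera frame
        return ok, R, tvec
    ```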

  7. 3D Terahertz Beam Profiling

    DEFF Research Database (Denmark)

    Pedersen, Pernille Klarskov; Strikwerda, Andrew; Jepsen, Peter Uhd

    2013-01-01

    We present a characterization of THz beams generated in both a two-color air plasma and in a LiNbO3 crystal. Using a commercial THz camera, we record intensity images as a function of distance through the beam waist, from which we extract 2D beam profiles and visualize our measurements into 3D beam...

  8. Reasoning about real-time systems with temporal interval logic constraints on multi-state automata

    Science.gov (United States)

    Gabrielian, Armen

    1991-01-01

    Models of real-time systems using a single paradigm often turn out to be inadequate, whether the paradigm is based on states, rules, event sequences, or logic. A model-based approach to reasoning about real-time systems is presented in which a temporal interval logic called TIL is employed to define constraints on a new type of high-level automata. The combination, called hierarchical multi-state (HMS) machines, can be used to formally model a real-time system, a dynamic set of requirements, the environment, heuristic knowledge about planning-related problem solving, and the computational states of the reasoning mechanism. In this framework, mathematical techniques were developed for: (1) proving the correctness of a representation; (2) planning of concurrent tasks to achieve goals; and (3) scheduling of plans to satisfy complex temporal constraints. HMS machines allow reasoning about a real-time system from a model of how truth arises instead of merely depending on what is true in a system.

  9. Hardware accelerator design for tracking in smart camera

    Science.gov (United States)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

    Smart cameras are important components in video analysis. For video analysis, a smart camera needs to detect interesting moving objects, track such objects from frame to frame, and perform analysis of the object tracks in real time. Therefore, the use of real-time tracking is prominent in smart cameras. A software implementation of a tracking algorithm on a general-purpose processor (like a PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-approach-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250x200-resolution video in gray scale.

  10. Multi-Spacecraft 3D differential emission measure tomography of the solar corona: STEREO results.

    Science.gov (United States)

    Vásquez, A. M.; Frazin, R. A.

    We have recently developed a novel technique (called DEMT) for the empirical determination of the three-dimensional (3D) distribution of the solar corona differential emission measure through multi-spacecraft solar rotational tomography of extreme-ultraviolet (EUV) image time series (like those provided by EIT/SOHO and EUVI/STEREO). The technique allows, for the first time, the development of global 3D empirical maps of the coronal electron temperature and density in the height range 1.0 to 1.25 R_S. DEMT constitutes a simple and powerful 3D analysis tool that obviates the need for structure-specific modeling.

  11. Novel System for Real-Time Integration of 3-D Echocardiography and Fluoroscopy for Image-Guided Cardiac Interventions: Preclinical Validation and Clinical Feasibility Evaluation

    Science.gov (United States)

    Housden, R. James; Ma, Yingliang; Rajani, Ronak; Gao, Gang; Nijhof, Niels; Cathier, Pascal; Bullens, Roland; Gijsbers, Geert; Parish, Victoria; Kapetanakis, Stamatis; Hancock, Jane; Rinaldi, C. Aldo; Cooklin, Michael; Gill, Jaswinder; Thomas, Martyn; O'neill, Mark D.; Razavi, Reza; Rhode, Kawal S.

    2014-01-01

    Real-time imaging is required to guide minimally invasive catheter-based cardiac interventions. While transesophageal echocardiography allows for high-quality visualization of cardiac anatomy, X-ray fluoroscopy provides excellent visualization of devices. We have developed a novel image fusion system that allows real-time integration of 3-D echocardiography and X-ray fluoroscopy. The system was validated in the following two stages: 1) preclinical, to determine function and validate accuracy; and 2) in the clinical setting, to assess clinical workflow feasibility and determine overall system accuracy. In the preclinical phase, the system was assessed using both phantom and porcine experimental studies. Median 2-D projection errors of 4.5 and 3.3 mm were found for the phantom and porcine studies, respectively. The clinical phase focused on extending the use of the system to interventions in patients undergoing either atrial fibrillation catheter ablation (CA) or transcatheter aortic valve implantation (TAVI). Eleven patients were studied, with nine in the CA group and two in the TAVI group. Successful real-time view synchronization was achieved in all cases, with a calculated median distance error of 2.2 mm in the CA group and 3.4 mm in the TAVI group. A standard clinical workflow was established using the image fusion system. These pilot data confirm the technical feasibility of accurate real-time echo-fluoroscopic image overlay in clinical practice, which may be a useful adjunct for real-time guidance during interventional cardiac procedures. PMID:27170872

  12. Real-time viability and apoptosis kinetic detection method of 3D multicellular tumor spheroids using the Celigo Image Cytometer.

    Science.gov (United States)

    Kessel, Sarah; Cribbes, Scott; Bonasu, Surekha; Rice, William; Qiu, Jean; Chan, Leo Li-Ying

    2017-09-01

    The development of three-dimensional (3D) multicellular tumor spheroid models for cancer drug discovery research has increased in recent years. The use of 3D tumor spheroid models may be more representative of the complex in vivo tumor microenvironment than two-dimensional (2D) assays. Currently, the viability of 3D multicellular tumor spheroids is commonly measured on standard plate-readers using metabolic reagents such as CellTiter-Glo® for end-point analysis. Alternatively, high-content image cytometers have been used to measure drug effects on spheroid size and viability. Previously, we demonstrated a novel end-point drug screening method for 3D multicellular tumor spheroids using the Celigo Image Cytometer. To better characterize cancer drug effects, it is important to also measure the kinetic cytotoxic and apoptotic effects on 3D multicellular tumor spheroids. In this work, we demonstrate the use of propidium iodide (PI) and caspase 3/7 stains to measure viability and apoptosis of 3D multicellular tumor spheroids in real time. The method was first validated by staining different types of tumor spheroids with PI and caspase 3/7 and monitoring the fluorescent intensities for 16 and 21 days. Next, PI-stained and nonstained control tumor spheroids were digested into single-cell suspension to directly measure viability in a 2D assay and determine the potential toxicity of PI. Finally, extensive data analysis was performed to correlate the time-dependent PI and caspase 3/7 fluorescent intensities with spheroid size and necrotic core formation, in order to determine an optimal starting time point for cancer drug testing. The ability to measure real-time viability and apoptosis is highly important for developing a proper 3D model for screening tumor spheroids, allowing researchers to determine time-dependent drug effects that are usually not captured by end-point assays. This would improve the current tumor spheroid analysis method to potentially better...

  13. AUTOMATIC 3D MAPPING USING MULTIPLE UNCALIBRATED CLOSE RANGE IMAGES

    Directory of Open Access Journals (Sweden)

    M. Rafiei

    2013-09-01

    Full Text Available Automatic three-dimensional modeling of the real world has been an important research topic in the geomatics and computer vision fields for many years. With the development of commercial digital cameras and modern image processing techniques, close-range photogrammetry is widely used in fields such as structural measurement, topographic surveying, and architectural and archaeological surveying. Non-contact photogrammetry provides methods to determine the 3D locations of objects from two-dimensional (2D) images. The problem of estimating the locations of 3D points from multiple images often involves simultaneously estimating both the 3D geometry (structure) and the camera poses (motion); it is commonly known as structure from motion (SfM). In this research, a step-by-step approach to generating the 3D point cloud of a scene is considered. After taking images with a camera, corresponding points must be detected in each pair of views; here, an efficient SIFT method is used for image matching across large baselines. Next, the camera motion and the 3D positions of the matched feature points are retrieved up to a projective transformation (projective reconstruction). Lacking additional information on the camera or the scene, parallel lines are not preserved as parallel. The results of the SfM computation are much more useful if a metric reconstruction is obtained; therefore, multi-view Euclidean reconstruction is applied and discussed. To refine the 3D points and achieve precise results, a more general and useful approach is used, namely bundle adjustment. Finally, two real cases (an excavation and a tower) are reconstructed.
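
    The pipeline in this abstract (feature matching, relative pose, triangulation) can be sketched in a few lines of Python with OpenCV. This sketch assumes a known intrinsic matrix K, so the reconstruction is metric from the start rather than projective as in the paper; file names and K values are placeholders:

```python
import cv2
import numpy as np

# Hypothetical input: two overlapping views and an assumed intrinsic matrix K.
img1 = cv2.imread("view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("view2.jpg", cv2.IMREAD_GRAYSCALE)
K = np.array([[1500.0, 0, 960], [0, 1500.0, 540], [0, 0, 1]])

# 1. SIFT features and matching (suitable for wide baselines).
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]  # Lowe ratio test
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# 2. Relative camera motion (metric here because K is assumed known).
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

# 3. Triangulate matched points into a sparse 3D point cloud.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
cloud = (X[:3] / X[3]).T  # homogeneous -> Euclidean, N x 3
print(cloud.shape)
```

    A full pipeline would follow this with bundle adjustment over all views, as the abstract describes.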

  14. Weight prediction of broiler chickens using 3D computer vision

    DEFF Research Database (Denmark)

    Mortensen, Anders Krogh; Lisouski, Pavel; Ahrendt, Peter

    2016-01-01

    ...a platform weigher, which may also include ill birds. In the current study, a fully automatic 3D camera-based weighing system for broilers has been developed and evaluated in a commercial production environment. Specifically, a low-cost 3D camera (Kinect) that directly returns a depth image was employed...

  15. Pulsed cavitational ultrasound for non-invasive chordal cutting guided by real-time 3D echocardiography.

    Science.gov (United States)

    Villemain, Olivier; Kwiecinski, Wojciech; Bel, Alain; Robin, Justine; Bruneval, Patrick; Arnal, Bastien; Tanter, Mickael; Pernot, Mathieu; Messas, Emmanuel

    2016-10-01

    Basal chordae surgical section has been shown to be effective in reducing ischaemic mitral regurgitation (IMR). Achieving this section by non-invasive means can considerably decrease the morbidity of this intervention on already infarcted myocardium. We investigated in vitro and in vivo the feasibility and safety of pulsed cavitational focused ultrasound (histotripsy) for non-invasive chordal cutting guided by real-time 3D echocardiography. Experiments were performed on 12 sheep hearts: 5 in vitro on explanted sheep hearts and 7 in vivo on beating sheep hearts. In vitro, the mitral valve (MV) apparatus, including basal and marginal chordae, was removed and fixed on a holder in a water tank. High-intensity ultrasound pulses were emitted from the therapeutic device (1-MHz focused transducer, pulses of 8 µs duration, peak negative pressure of 17 MPa, repetition frequency of 100 Hz), placed at a distance of 64 mm, under 3D echocardiography guidance. In vivo, after sternotomy, the same therapeutic device was applied to the beating heart. We analysed MV coaptation and chordae by real-time 3D echocardiography before and after basal chordal cutting. After sacrifice, the MV apparatuses were harvested for anatomical and histological post-mortem explorations to confirm the section of the chordae. In vitro, all chordae were completely cut after a mean procedure duration of 5.5 ± 2.5 min. The procedure duration was found to increase linearly with chordae diameter. In vivo, the central basal chordae of the anterior leaflet were completely cut. The mean procedure duration was 20 ± 9 min (min = 14, max = 26). The sectioned chordae were visible on echocardiography, and MV coaptation remained normal with no significant mitral regurgitation. Anatomical and histological post-mortem explorations of the hearts confirmed the section of the chordae. Histotripsy guided by 3D echocardiography successfully cut MV chordae in vitro and in vivo in the beating heart. We hope that this technique will...

  16. Use of real-time three-dimensional transesophageal echocardiography in type A aortic dissections: Advantages of 3D TEE illustrated in three cases

    Directory of Open Access Journals (Sweden)

    Cindy J Wang

    2015-01-01

    Full Text Available Stanford type A aortic dissections often present to the hospital requiring emergent surgical intervention. Initial diagnosis is usually made by computed tomography; however, transesophageal echocardiography (TEE) can further characterize aortic dissections with specific advantages: it may be performed on an unstable patient, it can be used intra-operatively, and it has the ability to provide continuous real-time information. Three-dimensional (3D) TEE has become more accessible over recent years, allowing it to serve as an additional tool in the operating room. We present a case series of three patients presenting with type A aortic dissections and the advantages of intra-operative 3D TEE in diagnosing the extent of dissection in each case. Prior case reports have demonstrated the use of 3D TEE in type A aortic dissections to characterize the extent of dissection and the involvement of neighboring structures. In our three cases, 3D TEE provided additional understanding of the spatial relationships between the dissection flap and neighboring structures, such as the aortic valve and coronary orifices, that were not fully appreciated with two-dimensional TEE, which affected surgical decisions in the operating room. This case series demonstrates the utility and benefit of real-time 3D TEE during the intra-operative management of type A aortic dissection.

  17. Complex engineering objects construction using Multi-D innovative technology

    International Nuclear Information System (INIS)

    Agafonov, Alexey

    2013-01-01

    Multi-D technology is an integrated innovative project management system for the construction of complex engineering objects, based on simulation of the construction process using an intelligent 3D model. Multi-D technology includes: • The unified E+P+C schedule; • The loading schedule for human resources, machines and mechanisms; • The budget of expenses and income, integrated with the schedule; • The 3D model; • The Multi-D model; • Weekly-daily tasks (with 4th-level schedules); • A system for managing interaction between the Customer, the EPC(M) company and the Contractors; • A change and configuration management system

  18. 3D Reconstruction and Restoration Monitoring of Sculptural Artworks by a Multi-Sensor Framework

    Directory of Open Access Journals (Sweden)

    Sandro Barone

    2012-12-01

    Full Text Available Nowadays, optical sensors are used to digitize sculptural artworks by exploiting various contactless technologies. Cultural heritage applications may concern 3D reconstructions of sculptural shapes distinguished by small details distributed over large surfaces. These applications require robust multi-view procedures based on aligning several high-resolution 3D measurements. In this paper, the integration of a 3D structured light scanner and a stereo photogrammetric sensor is proposed with the aim of reliably reconstructing large free-form artworks. The structured light scanner provides high-resolution range maps captured from different views. The stereo photogrammetric sensor measures the spatial location of each view by tracking a marker frame integral to the optical scanner. This procedure allows the computation of the rotation-translation matrix to transpose the range maps from the local view coordinate systems to a unique global reference system defined by the stereo photogrammetric sensor. The artwork reconstructions can be further augmented by linking metadata related to restoration processes. In this paper, a methodology has been developed to map metadata to 3D models by capturing spatial references using a passive stereo-photogrammetric sensor. The multi-sensor framework has been validated through the 3D reconstruction of a Statue of Hope located at the English Cemetery in Florence. This sculptural artwork provided a severe test due to the non-cooperative environment and the complex shape features distributed over a large surface.
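
    The coordinate bookkeeping at the heart of this procedure is a rigid-body transform per view; a minimal NumPy sketch with hypothetical poses:

```python
import numpy as np

def to_global(points_local, R, t):
    """Map an N x 3 range map from scanner coordinates into the global
    frame defined by the stereo-photogrammetric sensor. R (3x3) and t (3,)
    come from tracking the marker frame attached to the scanner for
    this view."""
    return points_local @ R.T + t

# Merge two views into one cloud (poses R_i, t_i are illustrative).
view1 = np.random.rand(1000, 3)
view2 = np.random.rand(1000, 3)
theta = np.deg2rad(30.0)
R2 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
t2 = np.array([0.5, 0.0, 0.0])
merged = np.vstack([to_global(view1, np.eye(3), np.zeros(3)),
                    to_global(view2, R2, t2)])
```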

  19. Real-Time and High-Resolution 3D Face Measurement via a Smart Active Optical Sensor.

    Science.gov (United States)

    You, Yong; Shen, Yang; Zhang, Guocai; Xing, Xiuwen

    2017-03-31

    The 3D measuring range and accuracy of traditional active optical sensing, such as Fourier transform profilometry, are influenced by the zero frequency of the captured patterns. The phase-shifting technique is commonly applied to remove the zero component. However, the phase-shifting method must capture several fringe patterns with phase differences, which degrades real-time performance. This study introduces a smart active optical sensor in which a composite pattern is utilized. The composite pattern efficiently combines several phase-shifting fringes and carrier frequencies, so the zero frequency can be removed using only one pattern. Model face reconstruction and human face measurement were employed to study the validity and feasibility of this method. Results show no distinct decrease in precision for the novel method compared with the traditional phase-shifting method. The texture mapping technique was utilized to reconstruct a natural-appearance 3D digital face.
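
    One plausible formalization of such a composite pattern, given as an illustration rather than the authors' exact design, superimposes N phase-shifted fringes on distinct spatial carriers:

```latex
I(x, y) = a(x, y) + \sum_{n=1}^{N} b(x, y)\,
          \cos\!\big( 2\pi f_n x + \phi(x, y) + \delta_n \big),
\qquad \delta_n = \frac{2\pi (n-1)}{N}
```

    Band-pass filtering around each carrier f_n in the Fourier domain then recovers the N phase-shifted fringes from a single captured frame, so the zero-frequency term a(x, y) never enters the phase calculation.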

  20. Ground-based search for the brightest transiting planets with the Multi-site All-Sky CAmeRA: MASCARA

    Science.gov (United States)

    Snellen, Ignas A. G.; Stuik, Remko; Navarro, Ramon; Bettonvil, Felix; Kenworthy, Matthew; de Mooij, Ernst; Otten, Gilles; ter Horst, Rik; le Poole, Rudolf

    2012-09-01

    The Multi-site All-sky CAmeRA (MASCARA) is an instrument concept consisting of several stations across the globe, with each station containing a battery of low-cost cameras to monitor the near-entire sky at its location. Once all stations have been installed, MASCARA will be able to provide nearly 24-hour coverage of the complete dark sky, down to magnitude 8, at sub-minute cadence. Its purpose is to find the brightest transiting exoplanet systems, expected in the V = 4-8 magnitude range - currently not probed by space- or ground-based surveys. The bright, nearby transiting planet systems that MASCARA will discover will be the key targets for detailed planet atmosphere observations. We present studies on the initial design of a MASCARA station, including the camera housing, domes, and computer equipment, and on the photometric stability of low-cost cameras, showing that a precision of 0.3-1% per hour can readily be achieved. We plan to roll out the first MASCARA station before the end of 2013. A five-station MASCARA could, within two years, discover up to a dozen of the brightest transiting planet systems in the sky.

  1. A cross-platform solution for light field based 3D telemedicine.

    Science.gov (United States)

    Wang, Gengkun; Xiang, Wei; Pickering, Mark

    2016-03-01

    Current telehealth services are dominated by conventional 2D video conferencing systems, which are limited in their capabilities in providing a satisfactory communication experience due to the lack of realism. The "immersiveness" provided by 3D technologies has the potential to promote telehealth services to a wider range of applications. However, conventional stereoscopic 3D technologies are deficient in many aspects, including low resolution and the requirement for complicated multi-camera setup and calibration, and special glasses. The advent of light field (LF) photography enables us to record light rays in a single shot and provide glasses-free 3D display with continuous motion parallax in a wide viewing zone, which is ideally suited for 3D telehealth applications. As far as our literature review suggests, there have been no reports of 3D telemedicine systems using LF technology. In this paper, we propose a cross-platform solution for an LF-based 3D telemedicine system. Firstly, a novel system architecture based on LF technology is established, which is able to capture the LF of a patient and provide an immersive 3D display at the doctor's site. For 3D modeling, we further propose an algorithm which is able to convert the captured LF to a 3D model with a high level of detail. For the software implementation on different platforms (i.e., desktop, web-based and mobile phone platforms), a cross-platform solution is proposed. Demo applications have been developed for 2D/3D video conferencing, 3D model display and editing, blood pressure and heart rate monitoring, and patient data viewing functions. The demo software can be extended to multi-discipline telehealth applications, such as tele-dentistry, tele-wound and tele-psychiatry. The proposed 3D telemedicine solution has the potential to revolutionize next-generation telemedicine technologies by providing a high-quality immersive tele-consultation experience. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  2. Touring Mars Online, Real-time, in 3D for Math and Science Educators and Students

    Science.gov (United States)

    Jones, Greg; Kalinowski, Kevin

    2007-01-01

    This article discusses a project that, beginning in 2003, placed over 97% of Mars' topography, made available from NASA, into an interactive 3D multi-user online learning environment. In 2005, curriculum materials were developed to support middle school math and science education. Research conducted at the University of North Texas…

  3. Real-time intensity based 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy

    Science.gov (United States)

    Furtado, H.; Steiner, E.; Stock, M.; Georg, D.; Birkfellner, W.

    2014-03-01

    Intra-fractional respiratory motion during radiotherapy is one of the main sources of uncertainty in dose application, creating the need to extend the margins of the planning target volume (PTV). Real-time tumor motion tracking by 2D/3D registration using on-board kilo-voltage (kV) imaging can lead to a reduction of the PTV. One limitation of this technique when using one projection image is the inability to resolve motion along the imaging beam axis. We present a retrospective patient study investigating the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. We used data from eighteen patients suffering from non-small cell lung cancer undergoing regular treatment at our center. For each patient we acquired a planning CT and sequences of kV and MV images during treatment. Our evaluation consisted of comparing the accuracy of motion tracking in six degrees of freedom (DOF) using either the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. We use graphics processing unit rendering for real-time performance. Motion along the cranial-caudal direction could be accurately extracted when using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 3.3 mm to 1.8 mm and the motion along AP was successfully extracted. The mean registration time was 190 ± 35 ms. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in 6 DOF. Therefore, this approach is suitable for accurate, real-time tumor motion tracking with a conventional LINAC.
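
    A hedged sketch of the similarity evaluation at the heart of such intensity-based 2D/3D registration; `render_drr` is a hypothetical stand-in for the paper's GPU renderer, and the normalized cross-correlation metric is a common choice rather than necessarily the authors' exact one:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def pose_score(pose, kv_img, mv_img, render_drr):
    """Similarity of a candidate 6-DOF pose against a kV-MV image pair.

    render_drr(pose, view) stands in for a renderer producing a digitally
    reconstructed radiograph for the kV or MV geometry. Scoring both
    (roughly orthogonal) views is what resolves the motion component along
    each beam axis; an optimizer then searches the six pose parameters
    for the maximum score.
    """
    return ncc(render_drr(pose, "kv"), kv_img) + ncc(render_drr(pose, "mv"), mv_img)
```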

  4. R3D3 in the Wild: Using A Robot for Turn Management in Multi-Party Interaction with a Virtual Human

    NARCIS (Netherlands)

    Theune, Mariet; Wiltenburg, Daan; Bode, Max; Linssen, Jeroen

    R3D3 is a combination of a virtual human with a non-speaking robot capable of head gestures and emotive gaze behaviour. We use the robot to implement various turn management functions for use in multi-party interaction with R3D3, and present the results of a field study investigating their effects.

  5. Design tool for TOF and SL based 3D cameras.

    Science.gov (United States)

    Bouquet, Gregory; Thorstensen, Jostein; Bakke, Kari Anne Hestnes; Risholm, Petter

    2017-10-30

    Active illumination 3D imaging systems based on Time-of-flight (TOF) and Structured Light (SL) projection are in rapid development, and are constantly finding new areas of application. In this paper, we present a theoretical design tool that allows prediction of 3D imaging precision. Theoretical expressions are developed for both TOF and SL imaging systems. The expressions contain only physically measurable parameters and no fitting parameters. We perform 3D measurements with both TOF and SL imaging systems, showing excellent agreement between theoretical and measured distance precision. The theoretical framework can be a powerful 3D imaging design tool, as it allows for prediction of 3D measurement precision already in the design phase.

  6. Forensic 3D Scene Reconstruction

    International Nuclear Information System (INIS)

    LITTLE, CHARLES Q.; PETERS, RALPH R.; RIGDON, J. BRIAN; SMALL, DANIEL E.

    1999-01-01

    Traditionally, law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging and 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents in quickly documenting and accurately recording a crime scene.

  7. Uncooled Terahertz real-time imaging 2D arrays developed at LETI: present status and perspectives

    Science.gov (United States)

    Simoens, François; Meilhan, Jérôme; Dussopt, Laurent; Nicolas, Jean-Alain; Monnier, Nicolas; Sicard, Gilles; Siligaris, Alexandre; Hiberty, Bruno

    2017-05-01

    As with other imaging sensor markets, whatever the technology, the commercial spread of terahertz (THz) cameras has to fulfil simultaneously the criteria of high sensitivity and low cost and SWaP (size, weight and power). Monolithic silicon-based 2D sensors integrated in uncooled THz real-time cameras are good candidates to meet these requirements. Over the past decade, LETI has been studying and developing such arrays with two complementary technological approaches, i.e. antenna-coupled silicon bolometers and CMOS field effect transistors (FET), both compatible with standard silicon microelectronics processes. LETI has leveraged its know-how in thermal infrared bolometer sensors to develop a proprietary architecture for THz sensing. High technological maturity has been achieved, as illustrated by the demonstration of fast scanning of a large field of view and the recent release of a commercial camera. In the FET-based THz field, recent work has focused on innovative CMOS read-out integrated circuit designs. The studied architectures take advantage of the large pixel pitch to enhance flexibility and sensitivity: an embedded in-pixel configurable signal processing chain dramatically reduces the noise. Video sequences at 100 frames per second using our 31x31-pixel 2D focal plane arrays (FPA) have been achieved. The authors describe the present status of these developments and discuss perspectives on performance evolution. Several experimental imaging tests are also presented in order to illustrate the capability of these arrays to address industrial applications such as non-destructive testing (NDT), security, and quality control of food.

  8. High performance CCD camera system for digitalisation of 2D DIGE gels.

    Science.gov (United States)

    Strijkstra, Annemieke; Trautwein, Kathleen; Roesler, Stefan; Feenders, Christoph; Danzer, Daniel; Riemenschneider, Udo; Blasius, Bernd; Rabus, Ralf

    2016-07-01

    An essential step in 2D DIGE-based analysis of differential proteome profiles is the accurate and sensitive digitalisation of 2D DIGE gels. The performance progress of commercially available charge-coupled device (CCD) camera-based systems combined with light-emitting diodes (LED) opens up a new possibility for this type of digitalisation. Here, we assessed the performance of a CCD camera system (Intas Advanced 2D Imager) as an alternative to a traditionally employed high-end laser scanner system (Typhoon 9400) for digitalisation of differential protein profiles from three different environmental bacteria. Overall, the performance of the CCD camera system was comparable to that of the laser scanner, as evident from very similar protein abundance changes (irrespective of spot position and volume), as well as from the linear range and limit of detection. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Game of thrown bombs in 3D: using high speed cameras and photogrammetry techniques to reconstruct bomb trajectories at Stromboli (Italy)

    Science.gov (United States)

    Gaudin, D.; Taddeucci, J.; Scarlato, P.; Del Bello, E.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Kueppers, U.

    2015-12-01

    Large juvenile bombs and lithic clasts, produced and ejected during explosive volcanic eruptions, follow ballistic trajectories. Of particular interest are: 1) the determination of ejection velocity and launch angle, which give insights into shallow conduit conditions and geometry; 2) particle trajectories, with an eye on trajectory evolution caused by collisions between bombs, as well as the interaction between bombs and ash/gas plumes; and 3) the computation of the final emplacement of bomb-sized clasts, which is important for hazard assessment and risk management. Ground-based imagery from a single camera only allows the reconstruction of bomb trajectories in a plane perpendicular to the line of sight, which may lead to underestimation of bomb velocities and does not allow the directionality of the ejections to be studied. To overcome this limitation, we adapted photogrammetry techniques to reconstruct 3D bomb trajectories from two or three synchronized high-speed video cameras. In particular, we modified existing algorithms to account for the errors that may arise from the very high velocity of the particles and the impossibility of measuring tie points close to the scene. Our method was tested during two field campaigns at Stromboli. In 2014, two high-speed cameras with a 500 Hz frame rate and a ~2 cm resolution were set up ~350 m from the crater, 10° apart and synchronized. The experiment was repeated with similar parameters in 2015, but using three high-speed cameras in order to significantly reduce uncertainties and allow their estimation. Trajectory analyses for tens of bombs at various times allowed the identification of shifts in the mean directivity and dispersal angle of the jets during the explosions. These time evolutions are also visible on the permanent video-camera monitoring system, demonstrating the applicability of our method to all kinds of explosive volcanoes.
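
    Once the cameras are calibrated and synchronized, each bomb position follows from linear (DLT) triangulation of matching pixel coordinates; a minimal sketch:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one bomb position from two views.

    P1, P2: 3x4 projection matrices from the photogrammetric calibration.
    x1, x2: matching pixel coordinates (u, v) in each synchronized frame.
    Returns the 3-D point in the calibration's world frame.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Repeating this for every synchronized frame yields the 3-D trajectory;
# finite differences of successive positions give velocity vectors, from
# which ejection speed and launch angle follow.
```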

  10. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    Science.gov (United States)

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

    We present a physics-based approach to generate 3D biped character animation that can react to dynamic environments in real time. Our approach utilizes an inverted pendulum model to adjust the desired motion trajectory from the input motion capture data online. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using proportional-derivative controllers, whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and the dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
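
    A minimal sketch of the velocity-driven tracking idea (gains, time step, and the toy forward-dynamics step are illustrative, not the authors' values):

```python
import numpy as np

def velocity_driven_torques(omega_desired, omega, k_v):
    """Joint torques from desired joint angular velocities.

    Unlike a PD controller, which needs both stiffness and damping gains
    per joint, this velocity-driven scheme uses a single gain k_v per
    joint: tau = k_v * (omega_desired - omega).
    """
    return k_v * (omega_desired - omega)

# One simulation step for a toy 3-joint chain (unit inertia per joint).
dt = 1.0 / 120.0                        # simulation step, assumed
omega = np.zeros(3)                     # current joint velocities
omega_des = np.array([1.0, -0.5, 0.2])  # from the adjusted trajectory
k_v = np.array([50.0, 50.0, 30.0])      # hypothetical gains
tau = velocity_driven_torques(omega_des, omega, k_v)
omega = omega + tau * dt                # forward-dynamics stand-in
print(tau, omega)
```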

  11. 2D/3D Visual Tracker for Rover Mast

    Science.gov (United States)

    Bajracharya, Max; Madison, Richard W.; Nesnas, Issa A.; Bandari, Esfandiar; Kunz, Clayton; Deans, Matt; Bualat, Maria

    2006-01-01

    A visual-tracker computer program controls an articulated mast on a Mars rover to keep a designated feature (a target) in view while the rover drives toward the target, avoiding obstacles. Several prior visual-tracker programs have been tested on rover platforms; most require very small and well-estimated motion between consecutive image frames, a requirement that is not realistic for a rover on rough terrain. The present visual-tracker program is designed to handle large image motions that lead to significant changes in feature geometry and photometry between frames. When a point is selected in one of the images acquired from stereoscopic cameras on the mast, a stereo triangulation algorithm computes a three-dimensional (3D) location for the target. As the rover moves, its body-mounted cameras feed images to a visual-odometry algorithm, which tracks two-dimensional (2D) corner features and computes their old and new 3D locations. The algorithm rejects points whose 3D motions are inconsistent with a rigid-world constraint, and then computes the apparent change in the rover pose (i.e., translation and rotation). The mast pan and tilt angles needed to keep the target centered in the field of view of the cameras (thereby minimizing the area over which the 2D-tracking algorithm must operate) are computed from the estimated change in the rover pose, the 3D position of the target feature, and a model of the kinematics of the mast. If the motion between consecutive frames is still large (i.e., 3D tracking was unsuccessful), an adaptive view-based matching technique is applied to the new image. This technique uses correlation-based template matching, in which a feature template is scaled by the ratio between the depth in the original template and the depth of pixels in the new image. This is repeated over the entire search window, and the best correlation results indicate the appropriate match. The program could be a core for building application programs for systems...
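
    The pan/tilt computation reduces to pointing geometry once the target is expressed in the mast head frame; a simplified sketch that ignores the full mast kinematic model (frame convention assumed: x right, y down, z forward):

```python
import numpy as np

def mast_angles(target_cam):
    """Pan and tilt (radians) that center a 3-D target point.

    target_cam is the target position in the mast head frame, a
    simplified stand-in for the full kinematic chain used on the rover.
    """
    x, y, z = target_cam
    pan = np.arctan2(x, z)                   # rotate left/right toward target
    tilt = np.arctan2(-y, np.hypot(x, z))    # rotate up/down toward target
    return pan, tilt

pan, tilt = mast_angles(np.array([1.0, -0.3, 5.0]))
print(np.degrees(pan), np.degrees(tilt))
```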

  12. Real-time 3D radiation risk assessment supporting simulation of work in nuclear environments

    International Nuclear Information System (INIS)

    Szoke, I; Louka, M N; Bryntesen, T R; Bratteli, J; Edvardsen, S T; RøEitrheim, K K; Bodor, K

    2014-01-01

    This paper describes the latest developments at the Institute for Energy Technology (IFE) in Norway, in the field of real-time 3D (three-dimensional) radiation risk assessment for the support of work simulation in nuclear environments. 3D computer simulation can greatly facilitate efficient work planning, briefing, and training of workers. It can also support communication within and between work teams, and with advisors, regulators, the media and public, at all the stages of a nuclear installation’s lifecycle. Furthermore, it is also a beneficial tool for reviewing current work practices in order to identify possible gaps in procedures, as well as to support the updating of international recommendations, dissemination of experience, and education of the current and future generation of workers. IFE has been involved in research and development into the application of 3D computer simulation and virtual reality (VR) technology to support work in radiological environments in the nuclear sector since the mid 1990s. During this process, two significant software tools have been developed, the VRdose system and the Halden Planner, and a number of publications have been produced to contribute to improving the safety culture in the nuclear industry. This paper describes the radiation risk assessment techniques applied in earlier versions of the VRdose system and the Halden Planner, for visualising radiation fields and calculating dose, and presents new developments towards implementing a flexible and up-to-date dosimetric package in these 3D software tools, based on new developments in the field of radiation protection. The latest versions of these 3D tools are capable of more accurate risk estimation, permit more flexibility via a range of user choices, and are applicable to a wider range of irradiation situations than their predecessors. (paper)

  13. Real-time 3D radiation risk assessment supporting simulation of work in nuclear environments.

    Science.gov (United States)

    Szőke, I; Louka, M N; Bryntesen, T R; Bratteli, J; Edvardsen, S T; RøEitrheim, K K; Bodor, K

    2014-06-01

    This paper describes the latest developments at the Institute for Energy Technology (IFE) in Norway, in the field of real-time 3D (three-dimensional) radiation risk assessment for the support of work simulation in nuclear environments. 3D computer simulation can greatly facilitate efficient work planning, briefing, and training of workers. It can also support communication within and between work teams, and with advisors, regulators, the media and public, at all the stages of a nuclear installation's lifecycle. Furthermore, it is also a beneficial tool for reviewing current work practices in order to identify possible gaps in procedures, as well as to support the updating of international recommendations, dissemination of experience, and education of the current and future generation of workers. IFE has been involved in research and development into the application of 3D computer simulation and virtual reality (VR) technology to support work in radiological environments in the nuclear sector since the mid 1990s. During this process, two significant software tools have been developed, the VRdose system and the Halden Planner, and a number of publications have been produced to contribute to improving the safety culture in the nuclear industry. This paper describes the radiation risk assessment techniques applied in earlier versions of the VRdose system and the Halden Planner, for visualising radiation fields and calculating dose, and presents new developments towards implementing a flexible and up-to-date dosimetric package in these 3D software tools, based on new developments in the field of radiation protection. The latest versions of these 3D tools are capable of more accurate risk estimation, permit more flexibility via a range of user choices, and are applicable to a wider range of irradiation situations than their predecessors.

  14. Real-time multi-peak tractography for instantaneous connectivity display

    Directory of Open Access Journals (Sweden)

    Maxime eChamberland

    2014-05-01

    Full Text Available The computerized process of reconstructing white matter tracts from diffusion MRI (dMRI) data is often referred to as tractography. Tractography is nowadays central to structural connectivity, since it is the only non-invasive technique to obtain information about brain wiring. Most publicly available tractography techniques and most studies are based on a fixed set of tractography parameters. However, the scale and curvature of fiber bundles can vary from region to region in the brain. Therefore, depending on the area of interest or subject (e.g., healthy control vs. tumor patient), optimal tracking parameters can be dramatically different. As a result, a slight change in tracking parameters may return different connectivity profiles and complicate the interpretation of the results. Having access to tractography parameters can thus be advantageous, as it will help in better isolating those which are sensitive to certain streamline features and potentially converging on optimal settings which are area-specific. In this work, we propose a real-time fiber tracking (RTT) tool which can instantaneously compute and display streamlines. To achieve such real-time performance, we propose a novel evolution equation based on the upsampled principal directions, also called peaks, extracted at each voxel of the dMRI dataset. The technique runs on a single Central Processing Unit (CPU) without the need for Graphics Processing Unit (GPU) programming. We qualitatively illustrate and quantitatively evaluate our novel multi-peak RTT technique on phantom and human datasets in comparison with state-of-the-art offline tractography from MRtrix, which is robust to fiber crossings. Finally, we show how our RTT tool facilitates neurosurgical planning and allows one to find fibers that infiltrate tumor areas, otherwise missed when using the standard default tracking parameters.
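
    A minimal sketch of deterministic streamline propagation over a peaks field, in the spirit of (but not identical to) the evolution equation described above; all parameter defaults are illustrative:

```python
import numpy as np

def track_streamline(seed, peaks, step=0.5, max_steps=2000, angle_thresh=60.0):
    """Deterministic streamline from a peaks volume.

    peaks has shape (X, Y, Z, K, 3): up to K peak directions per voxel.
    At each step we follow the peak most aligned with the incoming
    direction and stop at sharp turns or when leaving the volume.
    """
    cos_thresh = np.cos(np.deg2rad(angle_thresh))
    pos = np.asarray(seed, dtype=float)
    direction = None
    line = [pos.copy()]
    for _ in range(max_steps):
        idx = tuple(np.round(pos).astype(int))
        if any(i < 0 or i >= s for i, s in zip(idx, peaks.shape[:3])):
            break                                    # left the volume
        vox_peaks = peaks[idx]                       # (K, 3)
        norms = np.linalg.norm(vox_peaks, axis=1)
        if norms.max() < 1e-6:
            break                                    # no fiber signal here
        if direction is None:
            direction = vox_peaks[np.argmax(norms)] / norms.max()
        else:
            cosines = vox_peaks @ direction / np.maximum(norms, 1e-12)
            k = np.argmax(np.abs(cosines))
            if np.abs(cosines[k]) < cos_thresh:
                break                                # turn too sharp
            direction = np.sign(cosines[k]) * vox_peaks[k] / norms[k]
        pos = pos + step * direction
        line.append(pos.copy())
    return np.array(line)

# Toy volume: a single-peak field pointing along +x everywhere.
vol = np.zeros((10, 10, 10, 3, 3))
vol[..., 0, :] = [1.0, 0.0, 0.0]
print(track_streamline([1, 5, 5], vol).shape)
```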

  15. View-based 3-D object retrieval

    CERN Document Server

    Gao, Yue

    2014-01-01

    Content-based 3-D object retrieval has attracted extensive attention recently and has applications in a variety of fields, such as computer-aided design, tele-medicine, mobile multimedia, virtual reality, and entertainment. The development of efficient and effective content-based 3-D object retrieval techniques has enabled the use of fast 3-D reconstruction and model design. Recent technical progress, such as the development of camera technologies, has made it possible to capture the views of 3-D objects. As a result, view-based 3-D object retrieval has become an essential but challenging research...

  16. Real-time markerless tracking for augmented reality: the virtual visual servoing framework.

    Science.gov (United States)

    Comport, Andrew I; Marchand, Eric; Pressigout, Muriel; Chaumette, François

    2006-01-01

    Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and low latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a "video see through" monocular vision system. The tracking of objects in the scene amounts to calculating the pose between the camera and the objects. Virtual objects can then be projected into the scene using the pose. Here, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curve interaction matrices is given for different 3D geometrical primitives including straight lines, circles, cylinders, and spheres. A local moving-edges tracker is used in order to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.
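
    The robustness mechanism can be illustrated with the weighting step alone; a sketch using Tukey's biweight, one common M-estimator choice, not necessarily the authors' exact one:

```python
import numpy as np

def tukey_weights(residuals, c=4.685):
    """Tukey biweight: weight ~1 for small residuals, 0 for outliers."""
    s = 1.4826 * np.median(np.abs(residuals)) + 1e-12  # robust scale (MAD)
    u = residuals / (c * s)
    w = (1 - u**2) ** 2
    w[np.abs(u) >= 1] = 0.0
    return w

# In an iteratively reweighted least squares loop, these weights multiply
# the rows of the interaction matrix and the error vector, so mistracked
# edge sites (occlusion, illumination change) stop influencing the pose
# update.
residuals = np.array([0.1, -0.2, 0.05, 3.5])  # toy point-to-contour errors
print(tukey_weights(residuals))               # last (outlier) weight -> 0
```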

  17. Model-based framework for multi-axial real-time hybrid simulation testing

    Science.gov (United States)

    Fermandois, Gaston A.; Spencer, Billie F.

    2017-10-01

    Real-time hybrid simulation is an efficient and cost-effective dynamic testing technique for performance evaluation of structural systems subjected to earthquake loading with rate-dependent behavior. A loading assembly with multiple actuators is required to impose realistic boundary conditions on physical specimens. However, such a testing system is expected to exhibit significant dynamic coupling of the actuators and to suffer from time lags associated with the dynamics of the servo-hydraulic system, as well as control-structure interaction (CSI). One approach to reducing experimental errors considers a multi-input, multi-output (MIMO) controller design, yielding accurate reference tracking and noise rejection. In this paper, a framework for multi-axial real-time hybrid simulation (maRTHS) testing is presented. The methodology employs a real-time feedback-feedforward controller for multiple actuators commanded in Cartesian coordinates. Kinematic transformations between actuator space and Cartesian space are derived for all six degrees of freedom of the moving platform. Then, a frequency-domain identification technique is used to develop an accurate MIMO transfer function of the system. Further, a Cartesian-domain model-based feedforward-feedback controller is implemented for time-lag compensation and to increase the robustness of reference tracking under model uncertainty. The framework is implemented using the 1/5th-scale Load and Boundary Condition Box (LBCB) located at the University of Illinois at Urbana-Champaign. To demonstrate the efficacy of the proposed methodology, a single-story frame subjected to earthquake loading is tested. One of the columns in the frame is represented physically in the laboratory as a cantilevered steel column. For real-time execution, the numerical substructure, kinematic transformations, and controllers are implemented on a digital signal processor. Results show excellent performance of the maRTHS framework when six...
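
    The actuator-space/Cartesian-space transformation is, in essence, the inverse kinematics of a multi-actuator platform; a generic sketch with illustrative geometry, not the LBCB's:

```python
import numpy as np

def actuator_lengths(pose, platform_pts, base_pts):
    """Inverse kinematics: actuator lengths realizing a 6-DOF platform pose.

    pose = (x, y, z, roll, pitch, yaw); platform_pts and base_pts are the
    actuator attachment points (6 x 3) on the moving platform and the
    fixed base.
    """
    x, y, z, r, p, w = pose
    cr, sr = np.cos(r), np.sin(r)
    cp, sp = np.cos(p), np.sin(p)
    cw, sw = np.cos(w), np.sin(w)
    # Rotation R = Rz(yaw) @ Ry(pitch) @ Rx(roll)
    R = np.array([[cw * cp, cw * sp * sr - sw * cr, cw * sp * cr + sw * sr],
                  [sw * cp, sw * sp * sr + cw * cr, sw * sp * cr - cw * sr],
                  [-sp,     cp * sr,                cp * cr]])
    moved = platform_pts @ R.T + np.array([x, y, z])
    return np.linalg.norm(moved - base_pts, axis=1)

# Hypothetical hexagonal attachment geometry, platform hovering at 1 m.
base = np.array([[1, 0, 0], [0.5, 0.87, 0], [-0.5, 0.87, 0],
                 [-1, 0, 0], [-0.5, -0.87, 0], [0.5, -0.87, 0]], float)
plat = 0.5 * base + [0.0, 0.0, 1.0]
print(actuator_lengths((0.01, 0.0, 1.0, 0.0, 0.0, 0.02), plat, base))
```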

  18. A digital 3D atlas of the marmoset brain based on multi-modal MRI.

    Science.gov (United States)

    Liu, Cirong; Ye, Frank Q; Yen, Cecil Chern-Chyi; Newman, John D; Glen, Daniel; Leopold, David A; Silva, Afonso C

    2018-04-01

    The common marmoset (Callithrix jacchus) is a New-World monkey of growing interest in neuroscience. Magnetic resonance imaging (MRI) is an essential tool to unveil the anatomical and functional organization of the marmoset brain. To facilitate identification of regions of interest, it is desirable to register MR images to an atlas of the brain. However, currently available atlases of the marmoset brain are mainly based on 2D histological data, which are difficult to apply to 3D imaging techniques. Here, we constructed a 3D digital atlas based on high-resolution ex-vivo MRI images, including magnetization transfer ratio (a T1-like contrast), T2w images, and multi-shell diffusion MRI. Based on the multi-modal MRI images, we manually delineated 54 cortical areas and 16 subcortical regions on one hemisphere of the brain (the core version). The 54 cortical areas were merged into 13 larger cortical regions according to their locations to yield a coarse version of the atlas, and also parcellated into 106 sub-regions using a connectivity-based parcellation method to produce a refined atlas. Finally, we compared the new atlas set with existing histology atlases and demonstrated its applications in connectome studies, and in resting state and stimulus-based fMRI. The atlas set has been integrated into the widely-distributed neuroimaging data analysis software AFNI and SUMA, providing a readily usable multi-modal template space with multi-level anatomical labels (including labels from the Paxinos atlas) that can facilitate various neuroimaging studies of marmosets. Published by Elsevier Inc.

  19. 3D Point Cloud Reconstruction from Single Plenoptic Image

    Directory of Open Access Journals (Sweden)

    F. Murgia

    2016-06-01

    Full Text Available Novel plenoptic cameras sample the light field crossing the main camera lens. The information available in a plenoptic image must be processed in order to create the depth map of the scene from a single camera shot. In this paper, a novel algorithm is proposed for the reconstruction of the 3D point cloud of a scene from a single plenoptic image taken with a consumer plenoptic camera. Experimental analysis is conducted on several test images, and the results are compared with state-of-the-art methodologies. The results are very promising, as the quality of the 3D point cloud from the plenoptic image is comparable with the quality obtained with current non-plenoptic methodologies that require more than one image.

  20. A Circleless "2D/3D Total Station": a Low Cost Instrument for Surveying, Recording Point Clouds, Documentation, Image Acquisition and Visualisation

    Science.gov (United States)

    Scherer, M.

    2013-07-01

    Hardware and software of the universally applicable instrument - referred to as a 2D/3D total station - are described here, as well as its practical use. At its core it consists of a 3D camera - often also called a ToF camera, a PMD camera or a RIM camera - combined with a common industrial 2D camera. The cameras are rigidly coupled with their optical axes in parallel. A new type of instrument was created by mounting this 2D/3D system on a tripod in a specific way. Because it shares certain characteristics with a total station and a tacheometer, respectively, the new device was called a 2D/3D total station. It may effectively replace a common total station or a laser scanner in some respects. After a brief overview of the prototype's features, this paper focuses on the methodological characteristics for practical application. Its usability as a universally applicable stand-alone instrument is demonstrated for surveying, recording RGB-coloured point clouds, and delivering images for documentation and visualisation. Because of its limited range (10 m without a reflector and 150 m to reflector prisms) and low range accuracy (ca. 2 cm to 3 cm) compared to present-day total stations and laser scanners, the practical usage of the 2D/3D total station is currently limited to the recording of accident scenes, forensic purposes, speleology or facility management, as well as architectural recordings with low accuracy requirements. However, the author is convinced that in the near future advancements in 3D camera technology will allow this type of comparatively low-cost instrument to replace the total station as well as the laser scanner in an increasing number of areas.

  1. 3D Display of Spacecraft Dynamics Using Real Telemetry

    Directory of Open Access Journals (Sweden)

    Sanguk Lee

    2002-12-01

    Full Text Available 3D display of spacecraft motion using telemetry data received from a satellite in real time is described. Telemetry data are converted to the appropriate form for 3D display by the real-time preprocessor. Stored playback telemetry data can also be processed for the display. 3D display of spacecraft motion using real telemetry data provides an intuitive comprehension of spacecraft dynamics.

  2. Rail-Guided Multi-Robot System for 3D Cellular Hydrogel Assembly with Coordinated Nanomanipulation

    Directory of Open Access Journals (Sweden)

    Huaping Wang

    2014-08-01

    Full Text Available The 3D assembly of micro-/nano-building blocks with multi-nanomanipulator coordinated manipulation is one of the central elements of nanomanipulation. A novel rail-guided nanomanipulation system was proposed for the assembly of a cellular vascular-like hydrogel microchannel. The system was equipped with three nanomanipulators constrained on a rail, so that the end-effectors could be changed arbitrarily during the assembly. It was set up with hybrid motors to achieve both a large operating space and a 30 nm positional resolution. The 2D components, such as the assembly units, were fabricated by encapsulating cells in the hydrogel. The coordinated manipulation strategies among the multiple nanomanipulators were designed with vision feedback and were demonstrated through the bottom-up assembly of a vascular-like microtube. As a result, the multi-layered microchannel was assembled through the cooperation of the nanomanipulation system.

  3. High accuracy 3-D laser radar

    DEFF Research Database (Denmark)

    Busck, Jens; Heiselberg, Henning

    2004-01-01

    We have developed a mono-static staring 3-D laser radar based on gated viewing, with range accuracy below 1 mm at 10 m and 1 cm at 100 m. We use a high-sensitivity, fast, intensified CCD camera and an Nd:YAG passively Q-switched 32.4 kHz pulsed green laser at 532 nm. The CCD has 752x582 pixels. Camera...

  4. 3D FaceCam: a fast and accurate 3D facial imaging device for biometrics applications

    Science.gov (United States)

    Geng, Jason; Zhuang, Ping; May, Patrick; Yi, Steven; Tunnell, David

    2004-08-01

    Human faces are fundamentally three-dimensional (3D) objects, and each face has its unique 3D geometric profile. The 3D geometric features of a human face can be used, together with its 2D texture, for rapid and accurate face recognition purposes. Due to the lack of low-cost and robust 3D sensors and effective 3D facial recognition (FR) algorithms, almost all existing FR systems use 2D face images. Genex has developed 3D solutions that overcome the inherent problems in 2D while also addressing limitations in other 3D alternatives. One important aspect of our solution is a unique 3D camera (the 3D FaceCam) that combines multiple imaging sensors within a single compact device to provide instantaneous, ear-to-ear coverage of a human face. This 3D camera uses three high-resolution CCD sensors and a color-encoded pattern projection system. The RGB color information from each pixel is used to compute the range data and generate an accurate 3D surface map. The imaging system uses no moving parts and combines multiple 3D views to provide detailed and complete 3D coverage of the entire face. Images are captured within a fraction of a second and full-frame 3D data is produced within a few seconds. The described method provides much better data coverage and accuracy in areas with sharp features or details (such as the nose and eyes). Using this 3D data, we have been able to demonstrate that a 3D approach can significantly improve the performance of facial recognition. We have conducted tests in which we have varied the lighting conditions and angle of image acquisition in the "field." These tests have shown that the matching results are significantly improved when enrolling a 3D image rather than a single 2D image. With its 3D solutions, Genex is working toward unlocking the promise of powerful 3D FR and transferring FR from a lab technology into a real-world biometric solution.

  5. Power estimation of martial arts movement using 3D motion capture camera

    Science.gov (United States)

    Azraai, Nur Zaidi; Awang Soh, Ahmad Afiq Sabqi; Mat Jafri, Mohd Zubir

    2017-06-01

    Motion capture (MOCAP) cameras have been widely used in many areas such as biomechanics, physiology, animation, and the arts. This project approaches the subject through physical mechanics and extends MOCAP applications to sports. Most researchers use a force plate, but a force plate can only measure the force of impact, whereas we are keen to observe the kinematics of the movement. Martial arts is a sport that uses more than one part of the human body. For this project, the martial art `Silat' was chosen because of its wide practice in Malaysia. Two performers were selected, one experienced in `Silat' practice and one with no experience at all, so that the energy and force generated by the performers could be compared. Each performer executed punches with the same posture; two types of punching moves were selected for this project. Before measurement began, a calibration was performed using a marker-fitted T-stick, so that the software knows the area covered by the cameras, reducing errors during analysis. A punching bag with a mass of 60 kg was hung on an iron bar as a target; it was used to determine the impact force of a performer's punch, and it was also fitted with optical markers so the movement after impact could be observed. Eight cameras were used, two on each wall at different angles, in a rectangular room of 270 ft², with the cameras covering approximately 50 ft². Only a small area was covered so that less noise would be detected, making the measurement more accurate. Markers were pasted along the limb and the entire hand to be observed and measured. The passive markers used in this project reflect the infrared light generated by the cameras; the reflected infrared reaches the camera sensors so that the marker positions can be detected and shown in the software. The use of many cameras is to increase the...
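
    A hedged sketch of the post-processing such a study implies: differentiating a marker trajectory twice and forming F·v per frame (the single effective-mass assignment is an assumption; a full analysis would distribute mass over body segments):

```python
import numpy as np

def punch_power(positions, mass, fps):
    """Estimate instantaneous power of a tracked fist marker.

    positions: N x 3 marker trajectory in metres, sampled at fps.
    mass: effective mass (kg) assigned to the striking limb (assumed).
    Power is F . v with F = m a, from finite differences.
    """
    dt = 1.0 / fps
    v = np.gradient(positions, dt, axis=0)      # velocity, m/s
    a = np.gradient(v, dt, axis=0)              # acceleration, m/s^2
    return np.einsum("ij,ij->i", mass * a, v)   # watts per frame

# Toy check: a 3 kg "arm" accelerating along x for half a second at 500 Hz.
t = np.linspace(0, 0.5, 250)[:, None]
traj = np.hstack([4 * t**2, np.zeros_like(t), np.zeros_like(t)])
print(punch_power(traj, mass=3.0, fps=500).max())
```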

  6. [Interest using 3D ultrasound and MRI fusion biopsy for prostate cancer detection].

    Science.gov (United States)

    Marien, A; De Castro Abreu, A; Gill, I; Villers, A; Ukimura, O

    2017-09-01

    The treatment strategy for prostate cancer depends on histo-prognostic data, which could be improved by obtaining targeted biopsies (TB) with MRI (magnetic resonance imaging) fused with 3D ultrasound. The aim was to compare the diagnostic yield of image-fusion-guided prostate biopsy using fusion of multi-parametric MRI (mpMRI) with 3D TRUS. Between January 2010 and April 2013, 179 consecutive patients underwent outpatient TRUS biopsy using the real-time 3D TRUS tracking system (Urostation™). These patients underwent MRI-TRUS fusion targeted biopsies (TB), with the 3D volume data of the MRI elastically fused with 3D TRUS at the time of biopsy. One hundred and seventy-three patients had TBs with fusion. Mean biopsy cores per patient were 11.1 (6-14) for systematic biopsies (SB) and 2.4 (1-6) for TB. SBs were positive in 11% of cases, compared to 56% for TB. This system achieves a higher level of MR/US fusion and should be used for active surveillance. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  7. Reducing the Variance of Intrinsic Camera Calibration Results in the ROS Camera_Calibration Package

    Science.gov (United States)

    Chiou, Geoffrey Nelson

    The intrinsic calibration of a camera is the process in which the internal optical and geometric characteristics of the camera are determined. If accurate intrinsic parameters of a camera are known, the ray in 3D space that every point in the image lies on can be determined. Pairing with another camera allows the positions of the points in the image to be calculated by intersection of the rays. Accurate intrinsics also allow the position and orientation of a camera relative to some world coordinate system to be calculated. These two reasons for having accurate intrinsic calibration of a camera are especially important in the field of industrial robotics, where 3D cameras are frequently mounted on the ends of manipulators. In the ROS (Robot Operating System) ecosystem, the camera_calibration package is the default standard for intrinsic camera calibration. Several researchers from the Industrial Robotics & Automation division at Southwest Research Institute have noted that this package results in large variances in the intrinsic parameters of the camera when calibrating across multiple attempts. There are also open issues on this matter in their public repository that have not been addressed by the developers. In this thesis, we confirm that the camera_calibration package does indeed return different results across multiple attempts, test several possible hypotheses as to why, identify the reason, and provide a simple solution to fix the cause of the issue.
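
    For reference, a typical intrinsic calibration run with OpenCV (the general approach underlying such packages; paths and checkerboard geometry are placeholders), whose reprojection RMS and K matrix can be compared across repeated attempts to observe the variance in question:

```python
import glob

import cv2
import numpy as np

# Checkerboard geometry (inner corners) -- values are illustrative.
pattern = (9, 6)
square = 0.025  # square size in metres
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):  # hypothetical image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)  # compare across repeated runs
print(K)                         # fx, fy, cx, cy in the 3x3 matrix
```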

  8. Development of a real time multiple target, multi camera tracker for civil security applications

    Science.gov (United States)

    Åkerlund, Hans

    2009-09-01

    A surveillance system has been developed that can use multiple TV cameras to detect and track personnel and objects in real time in public areas. The document describes the development and the system setup. The system is called NIVS (Networked Intelligent Video Surveillance). Persons in the images are tracked and displayed on a 3D map of the surveyed area.

  9. Modal Identification in an Automotive Multi-Component System Using HS 3D-DIC

    Directory of Open Access Journals (Sweden)

    Ángel Jesús Molina-Viedma

    2018-02-01

    Full Text Available The modal characterization of automotive lighting systems is difficult with attached sensors, due to the light weight of the elements that compose the component as well as the intricate access required to place them. In experimental modal analysis, high-speed 3D digital image correlation (HS 3D-DIC) is attracting attention, since its main advantage over other techniques is that it provides full-field, contactless measurements of 3D displacements. Different methodologies have been published that perform modal identification, i.e., extraction of natural frequencies, damping ratios, and mode shapes, using this full-field information. In this work, experimental modal analysis has been performed on a multi-component automotive lighting system using HS 3D-DIC. Base motion excitation was applied to simulate operating conditions. A recently validated methodology was employed for modal identification using transmissibility functions, i.e., the transfer functions from base motion tests. The results make it possible to identify the local and global behavior of the different elements, made of injected polymeric and metallic materials.
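
    A minimal sketch of estimating a transmissibility function from a base signal and one response point (an H1 estimator; the DIC specifics are abstracted away, and the resonance toy model is purely illustrative):

```python
import numpy as np
from scipy import signal

def transmissibility(base, response, fs, nperseg=1024):
    """H1 estimate of the base-to-point transmissibility T(f) = Sxy / Sxx.

    base: shaker/base-motion signal; response: a displacement signal
    extracted from the full-field HS 3D-DIC data at one point. Peaks of
    |T(f)| indicate natural frequencies, half-power bandwidths give
    damping ratios, and the spatial pattern of T at a peak frequency
    gives the mode shape.
    """
    f, Sxy = signal.csd(base, response, fs=fs, nperseg=nperseg)
    _, Sxx = signal.welch(base, fs=fs, nperseg=nperseg)
    return f, Sxy / Sxx

# Toy single-DOF example: a 25 Hz "resonance" excited by broadband noise.
fs, n = 2000, 20000
rng = np.random.default_rng(0)
base = rng.standard_normal(n)
sos = signal.butter(2, [24, 26], btype="bandpass", fs=fs, output="sos")
response = base + 10 * signal.sosfilt(sos, base)  # crude resonant gain
f, T = transmissibility(base, response, fs)
print(f[np.argmax(np.abs(T))])  # ~25 Hz
```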

  10. Modal Identification in an Automotive Multi-Component System Using HS 3D-DIC.

    Science.gov (United States)

    Molina-Viedma, Ángel Jesús; López-Alba, Elías; Felipe-Sesé, Luis; Díaz, Francisco A

    2018-02-05

    The modal characterization of automotive lighting systems is difficult with attached sensors, due to the light weight of the elements that compose the component as well as the intricate access required to place them. In experimental modal analysis, high-speed 3D digital image correlation (HS 3D-DIC) is attracting attention, since its main advantage over other techniques is that it provides full-field, contactless measurements of 3D displacements. Different methodologies have been published that perform modal identification, i.e., extraction of natural frequencies, damping ratios, and mode shapes, using this full-field information. In this work, experimental modal analysis has been performed on a multi-component automotive lighting system using HS 3D-DIC. Base motion excitation was applied to simulate operating conditions. A recently validated methodology was employed for modal identification using transmissibility functions, i.e., the transfer functions from base motion tests. The results make it possible to identify the local and global behavior of the different elements, made of injected polymeric and metallic materials.

  11. Modal Identification in an Automotive Multi-Component System Using HS 3D-DIC

    Science.gov (United States)

    López-Alba, Elías; Felipe-Sesé, Luis; Díaz, Francisco A.

    2018-01-01

    The modal characterization of automotive lighting systems becomes difficult using sensors due to the light weight of the elements which compose the component as well as the intricate access to allocate them. In experimental modal analysis, high speed 3D digital image correlation (HS 3D-DIC) is attracting the attention since it provides full-field contactless measurements of 3D displacements as main advantage over other techniques. Different methodologies have been published that perform modal identification, i.e., natural frequencies, damping ratios, and mode shapes using the full-field information. In this work, experimental modal analysis has been performed in a multi-component automotive lighting system using HS 3D-DIC. Base motion excitation was applied to simulate operating conditions. A recently validated methodology has been employed for modal identification using transmissibility functions, i.e., the transfer functions from base motion tests. Results make it possible to identify local and global behavior of the different elements of injected polymeric and metallic materials. PMID:29401725

  12. Integrated multi sensors and camera video sequence application for performance monitoring in archery

    Science.gov (United States)

    Taha, Zahari; Arif Mat-Jizat, Jessnor; Amirul Abdullah, Muhammad; Muazu Musa, Rabiu; Razali Abdullah, Mohamad; Fauzi Ibrahim, Mohamad; Hanafiah Shaharudin, Mohd Ali

    2018-03-01

    This paper explains the development of a comprehensive archery performance monitoring software system consisting of three camera views and five body sensors. The five body sensors evaluate biomechanics-related variables: flexor and extensor muscle activity, heart rate, postural sway and bow movement during archery performance. The three camera views and the five body sensors are integrated into a single computer application which enables the user to view all the data in a single user interface. The five body sensors' data are displayed in numerical and graphical form in real time. The information transmitted by the body sensors is processed by an embedded algorithm that automatically computes a summary of the athlete's biomechanical performance and displays it in the application interface. This performance is later compared with the pre-computed psycho-fitness performance from the data prefilled into the application. All the data (camera views, body sensors, performance computations) are recorded for further analysis by a sports scientist. The developed application serves as a powerful tool for assisting the coach and athletes to observe and identify any wrong technique employed during training, which gives room for correction and re-evaluation to improve overall performance in the sport of archery.

  13. Investigation of power and frequency for 3D conformal MRI-controlled transurethral ultrasound therapy with a dual frequency multi-element transducer.

    Science.gov (United States)

    N'djin, William Apoutou; Burtnyk, Mathieu; Bronskill, Michael; Chopra, Rajiv

    2012-01-01

    Transurethral ultrasound therapy uses real-time magnetic resonance (MR) temperature feedback to enable accurate 3D control of thermal therapy in a region within the prostate. Previous canine studies showed the feasibility of this method in vivo. The aim of this study was to reduce the procedure time, while maintaining targeting accuracy, by investigating new combinations of treatment parameters. Simulations and validation experiments in gel phantoms were used, with a collection of nine realistic 3D target prostate boundaries obtained from previous preclinical studies in which multi-slice MR images were acquired with the transurethral device in place. Acoustic power and rotation rate were varied based on temperature feedback at the prostate boundary. Maximum acoustic power and rotation rate were optimised interdependently, as a function of prostate radius and transducer operating frequency. The concept of dual-frequency transducers was studied, using the fundamental frequency or the third harmonic component depending on the prostate radius. Numerical modelling enabled assessment of the effects of several acoustic parameters on treatment outcomes. The range of treatable prostate radii extended with increasing power, and tended to narrow with decreasing frequency. Reducing the frequency from 8 MHz to 4 MHz or increasing the surface acoustic power from 10 to 20 W/cm² led to treatment times shorter by up to 50% under appropriate conditions. A dual-frequency configuration of 4/12 MHz with 20 W/cm² ultrasound intensity exposure can treat entire prostates up to 40 cm³ in volume within 30 min. The interdependence between power and frequency may, however, require integrating multi-parametric functions in the controller for future optimisations.
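
    The abstract describes varying the rotation rate from boundary-temperature feedback. A toy proportional rule, with entirely invented constants, illustrates the feedback structure only; the real controller also modulates power and frequency and runs on MR thermometry.

```python
# Toy temperature-feedback rule for the transducer sweep: rotate quickly
# when the boundary has reached temperature, slowly while it lags.
# All constants are invented for illustration.
import numpy as np

T_TARGET = 55.0          # target boundary temperature, deg C (assumed)
W_MIN, W_MAX = 0.5, 8.0  # rotation-rate limits, deg/s (assumed)
GAIN = 1.5               # proportional gain, (deg/s) per deg C (assumed)

def rotation_rate(t_boundary: float) -> float:
    """Map the measured boundary temperature to a clamped rotation rate."""
    under_heat = T_TARGET - t_boundary       # positive while under-heated
    return float(np.clip(W_MAX - GAIN * under_heat, W_MIN, W_MAX))

for temp in (40.0, 50.0, 54.0, 56.0):
    print(f"{temp:.0f} degC -> {rotation_rate(temp):.2f} deg/s")
```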

  14. Clinical usefulness of augmented reality using infrared camera based real-time feedback on gait function in cerebral palsy: a case study.

    Science.gov (United States)

    Lee, Byoung-Hee

    2016-04-01

    [Purpose] This study investigated the effects of real-time feedback using infrared camera recognition technology-based augmented reality in gait training for children with cerebral palsy. [Subjects] Two subjects with cerebral palsy were recruited. [Methods] Augmented reality based real-time feedback training was conducted for the subjects in two 30-minute sessions per week for four weeks. Spatiotemporal gait parameters were used to measure the effect of the training. [Results] Velocity, cadence, bilateral step and stride length, and functional ambulation improved after the intervention in both cases. [Conclusion] Although additional follow-up studies of augmented reality based real-time feedback training are required, the results of this study demonstrate that it improved the gait ability of two children with cerebral palsy. These findings suggest a variety of applications for conservative therapeutic methods, which require future clinical trials.

  15. ARTiS, an Asymmetric Real-Time Scheduler for Linux on Multi-Processor Architectures

    OpenAIRE

    Piel , Éric; Marquet , Philippe; Soula , Julien; Osuna , Christophe; Dekeyser , Jean-Luc

    2005-01-01

    The ARTiS system is a real-time extension of the GNU/Linux scheduler dedicated to SMP (Symmetric Multi-Processor) systems. It allows mixing high-performance computing and real-time workloads. ARTiS exploits the SMP architecture to guarantee the preemption of a processor when the system has to schedule a real-time task. The implementation is available as a modification of the Linux kernel, especially focusing on (but not restricted to) the IA-64 architecture. The basic idea of ARTiS is to assign a selected se...
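
    ARTiS enforces its processor partitioning inside the kernel, but the core idea, reserving some CPUs for real-time tasks and steering best-effort work elsewhere, can be approximated from user space on stock Linux. A rough sketch (Linux-only; the core numbers are arbitrary):

```python
# User-space approximation of ARTiS-style CPU partitioning on stock Linux:
# keep core 0 for a latency-sensitive loop, push other work to the rest.
# ARTiS itself enforces this inside the kernel; this is only an analogue.
import os

RT_CPUS = {0}                              # reserved core (arbitrary choice)
NRT_CPUS = set(range(1, os.cpu_count() or 2))

os.sched_setaffinity(0, RT_CPUS)           # pin this process to the RT core
print("RT process runs on CPUs:", os.sched_getaffinity(0))

pid = os.fork()                            # best-effort worker
if pid == 0:
    os.sched_setaffinity(0, NRT_CPUS)
    # ... background computation would run here ...
    os._exit(0)
os.waitpid(pid, 0)
```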

  16. 3D medical collaboration technology to enhance emergency healthcare

    DEFF Research Database (Denmark)

    Welch, Gregory F; Sonnenwald, Diane H.; Fuchs, Henry

    2009-01-01

    Two-dimensional (2D) videoconferencing has been explored widely in the past 15-20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals' viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing...

  17. Real-time vision systems

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, R.; Hernandez, J.E.; Lu, Shin-yee [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    Many industrial and defence applications require an ability to make instantaneous decisions based on sensor input of a time varying process. Such systems are referred to as 'real-time systems' because they process and act on data as it occurs in time. When a vision sensor is used in a real-time system, the processing demands can be quite substantial, with typical data rates of 10-20 million samples per second. A real-time Machine Vision Laboratory (MVL) was established in FY94 to extend our years of experience in developing computer vision algorithms to include the development and implementation of real-time vision systems. The laboratory is equipped with a variety of hardware components, including Datacube image acquisition and processing boards, a Sun workstation, and several different types of CCD cameras, including monochrome and color area cameras and analog and digital line-scan cameras. The equipment is reconfigurable for prototyping different applications. This facility has been used to support several programs at LLNL, including O Division's Peacemaker and Deadeye Projects as well as the CRADA with the U.S. Textile Industry, CAFE (Computer Aided Fabric Inspection). To date, we have successfully demonstrated several real-time applications: bullet tracking, stereo tracking and ranging, and web inspection. This work has been documented in the ongoing development of a real-time software library.

  18. IMU and Multiple RGB-D Camera Fusion for Assisting Indoor Stop-and-Go 3D Terrestrial Laser Scanning

    Directory of Open Access Journals (Sweden)

    Jacky C.K. Chow

    2014-07-01

    Full Text Available Autonomous Simultaneous Localization and Mapping (SLAM) is an important topic in many engineering fields. Since stop-and-go systems are typically slow and full-kinematic systems may lack accuracy and integrity, this paper presents a novel hybrid “continuous stop-and-go” mobile mapping system called Scannect. A 3D terrestrial LiDAR system is integrated with a MEMS IMU and two Microsoft Kinect sensors to map indoor urban environments. The Kinects' depth maps were processed using a new point-to-plane ICP that minimizes the reprojection error of the infrared camera and projector pair in an implicit iterative extended Kalman filter (IEKF). A new formulation of the 5-point visual odometry method is tightly coupled in the implicit IEKF without increasing the dimensions of the state space. The Scannect can map and navigate in areas with textureless walls and provides an effective means for mapping large areas with many occlusions. Mapping long corridors (total travel distance of 120 m) took approximately 30 minutes and achieved a Mean Radial Spherical Error of 17 cm before smoothing or global optimization.
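
    The paper folds its ICP variant into an implicit IEKF; the underlying point-to-plane step it builds on is, in textbook form, a small linearized least-squares problem. A generic sketch of that step follows (not the authors' filter formulation):

```python
# One linearized point-to-plane ICP step (textbook form, not the paper's
# IEKF variant): find a small rotation r and translation t minimising
# sum(((R @ p + t - q) . n)**2) with R ~ I + [r]_x for matched points.
import numpy as np

def point_to_plane_step(src, dst, normals):
    """src, dst: (N, 3) matched points; normals: (N, 3) unit normals at dst."""
    A = np.hstack([np.cross(src, normals), normals])      # (N, 6): [dr | dt]
    b = np.einsum("ij,ij->i", dst - src, normals)         # plane residuals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    r, t = x[:3], x[3:]
    R = np.eye(3) + np.array([[0, -r[2], r[1]],           # small-angle rotation
                              [r[2], 0, -r[0]],
                              [-r[1], r[0], 0]])
    return R, t

# Toy check: recover a pure x-translation of a random point set
src = np.random.rand(100, 3)
normals = np.tile([1.0, 0.0, 0.0], (100, 1))
R, t = point_to_plane_step(src, src + [0.1, 0.0, 0.0], normals)
print("recovered tx ~", t[0])
```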

  19. Development of 3D multi-group neutron diffusion code for hexagonal geometry

    International Nuclear Information System (INIS)

    Sun Wei; Wang Kan; Ni Dongyang; Li Qing

    2013-01-01

    Based on the theory of the new flux expansion nodal method for solving the neutron diffusion equations, the intra-nodal fluence rate distribution was expanded in a series of analytic basis functions for each group. In order to improve the accuracy of the calculation, continuity of neutron fluence rate and current was enforced across the nodal surfaces. According to the boundary conditions, an iteration method was adopted to solve the diffusion equation, where the inner iteration uses the Gauss-Seidel method and the outer iteration is accelerated by the Lyusternik-Wagner method. A new speedup method (one-outer-iteration and multi-inner-iteration) was proposed, exploiting the fact that the multiplication factor converges faster than the neutron fluence rate while the inner-iteration matrix changes slowly. Based on the proposed model, the code HANDF-D was developed and tested on the 3D two-group VVER440 benchmark, experiment 2 of HFETR, a 3D four-group thermal reactor benchmark, and a 3D seven-group fast reactor benchmark. The numerical results show that HANDF-D can accurately predict the multiplication factor and nodal powers. (authors)
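
    The inner/outer structure described above, Gauss-Seidel sweeps inside a slowly updated fission-source (power) iteration, can be shown on a toy one-group, one-dimensional diffusion problem. Everything below (mesh, cross-sections, boundaries) is invented for illustration and is far simpler than HANDF-D's nodal method:

```python
# Toy 1D one-group diffusion eigenvalue problem illustrating the
# inner (Gauss-Seidel) / outer (power/fission-source) iteration structure.
# All constants are invented for illustration.
import numpy as np

N, h = 50, 1.0                            # mesh nodes and spacing, cm (assumed)
D, sig_a, nu_sig_f = 1.0, 0.02, 0.025     # invented one-group constants

phi, k = np.ones(N), 1.0
for outer in range(200):
    src = nu_sig_f * phi / k              # fission source, frozen per outer
    fiss_old = nu_sig_f * phi.sum()
    for _ in range(20):                   # inner Gauss-Seidel sweeps
        for i in range(N):
            left = phi[i - 1] if i > 0 else 0.0    # zero-flux boundaries
            right = phi[i + 1] if i < N - 1 else 0.0
            phi[i] = (src[i] * h * h + D * (left + right)) / (2 * D + sig_a * h * h)
    fiss_new = nu_sig_f * phi.sum()
    k_old, k = k, k * fiss_new / fiss_old # update the multiplication factor
    if abs(k - k_old) < 1e-6:
        break
print("k-effective ~", round(k, 5))
```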

  20. T-L Plane Abstraction-Based Energy-Efficient Real-Time Scheduling for Multi-Core Wireless Sensors

    Directory of Open Access Journals (Sweden)

    Youngmin Kim

    2016-07-01

    Full Text Available Energy efficiency is considered a critical requirement for wireless sensor networks. As more wireless sensor nodes are equipped with multi-core processors, there are emerging needs for energy-efficient real-time scheduling algorithms. The T-L plane-based scheme is known to be an optimal global scheduling technique for periodic real-time tasks on multi-cores. Unfortunately, there has been a scarcity of studies on extending T-L plane-based scheduling algorithms to exploit energy-saving techniques. In this paper, we propose a new T-L plane-based algorithm enabling energy-efficient real-time scheduling on multi-core sensor nodes with dynamic power management (DPM). Our approach addresses the overhead of processor mode transitions and reduces fragmentation of the idle time, both inherent in T-L plane-based algorithms. Our experimental results show the effectiveness of the proposed algorithm compared to other energy-aware scheduling methods on the T-L plane abstraction.

  1. Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3D Models?

    Science.gov (United States)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.

  2. Design and Implementation of a Novel Portable 360° Stereo Camera System with Low-Cost Action Cameras

    Science.gov (United States)

    Holdener, D.; Nebiker, S.; Blaser, S.

    2017-11-01

    The demand for capturing indoor spaces is rising with the digitalization trend in the construction industry. An efficient solution for measuring challenging indoor environments is mobile mapping. Image-based systems with 360° panoramic coverage allow rapid data acquisition and can be processed into georeferenced 3D images hosted in cloud-based 3D geoinformation services. For the multiview stereo camera system presented in this paper, 360° coverage is achieved with a layout consisting of five horizontal stereo image pairs in a circular arrangement. The design is implemented as a low-cost solution based on a 3D printed camera rig and action cameras with fisheye lenses. The fisheye stereo system is successfully calibrated with accuracies sufficient for the applied measurement task. A comparison of 3D distances with reference data delivers maximum deviations of 3 cm over typical indoor distances of 2-8 m. The automatic computation of coloured point clouds from the stereo pairs is also demonstrated.
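
    Centimetre-level deviations at those ranges are what rectified stereo geometry predicts, since depth error grows quadratically with range. A short check with assumed numbers (the paper's focal length, baseline and disparity noise are not quoted here):

```python
# Depth from a rectified stereo pair, Z = f * B / d, and the first-order
# depth error for a given disparity noise. All numbers are assumptions.
F_PX = 1200.0     # focal length in pixels (assumed)
B = 0.20          # stereo baseline in metres (assumed)
SIGMA_D = 0.2     # disparity noise in pixels (assumed)

for z in (2.0, 4.0, 8.0):
    d = F_PX * B / z                      # expected disparity
    dz = (z * z / (F_PX * B)) * SIGMA_D   # error grows with Z**2
    print(f"Z = {z:.0f} m: disparity {d:5.1f} px, depth error ~{100 * dz:.1f} cm")
```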

  3. Real-time multi-task operators support system

    International Nuclear Information System (INIS)

    Wang He; Peng Minjun; Wang Hao; Cheng Shouyu

    2005-01-01

    Developments in computer software and hardware technology and information processing, together with accumulated design experience and operational feedback from Nuclear Power Plants (NPPs), created a good opportunity to develop an integrated operator support system. The Real-time Multi-task Operator Support System (RMOSS) has been built to support the operator's decision making process during normal and abnormal operations. RMOSS consists of five subtasks: the Data Collection and Validation Task (DCVT), Operation Monitoring Task (OMT), Fault Diagnostic Task (FDT), Operation Guideline Task (OGT) and Human Machine Interface Task (HMIT). RMOSS uses a rule-based expert system and an Artificial Neural Network (ANN). The rule-based expert system is used to identify predefined events in static conditions and track the operation guideline through data processing. For dynamic conditions, a Back-Propagation Neural Network trained with a Genetic Algorithm is adopted for fault diagnosis. The embedded real-time operating system VxWorks and its integrated development environment Tornado II were used for cross-development of the RMOSS software. VxGUI was used to design the HMI, and all task programs are written in C. Task tests and a functional evaluation of RMOSS were carried out on a real-time full-scope simulator. Evaluation results show that each task of RMOSS is capable of accomplishing its functions. (authors)

  4. An automated 3D reconstruction method of UAV images

    Science.gov (United States)

    Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping

    2015-10-01

    In this paper a novel fully automated 3D reconstruction approach based on low-altitude unmanned aerial vehicle (UAV) images is presented, which requires neither prior camera calibration nor any other external prior knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map significantly reduces the running time of feature matching by limiting the combinations of images to be matched. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangulated irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images, and has great potential for the acquisition of spatial information for large-scale mapping, especially for rapid response and precise modelling in disaster emergencies.
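
    The topology idea is that images whose recorded flight positions are far apart cannot overlap, so their pairings never need to be tested. A sketch of that pair pruning with made-up positions and an assumed matching radius:

```python
# Sketch of the "image topology" idea: use flight-control (GPS) positions
# to match only images taken close together, instead of all O(n^2) pairs.
# Positions and radius are illustrative.
import numpy as np
from itertools import combinations

cam_xy = np.random.rand(200, 2) * 500     # per-image GPS positions, metres
RADIUS = 60.0                             # max baseline worth matching (assumed)

pairs = [(i, j) for i, j in combinations(range(len(cam_xy)), 2)
         if np.linalg.norm(cam_xy[i] - cam_xy[j]) < RADIUS]
n = len(cam_xy)
print(f"{len(pairs)} candidate pairs instead of {n * (n - 1) // 2}")
```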

  5. Scientific Objectives of Small Carry-on Impactor (SCI) and Deployable Camera 3 Digital (DCAM3-D): Observation of an Ejecta Curtain and a Crater Formed on the Surface of Ryugu by an Artificial High-Velocity Impact

    Science.gov (United States)

    Arakawa, M.; Wada, K.; Saiki, T.; Kadono, T.; Takagi, Y.; Shirai, K.; Okamoto, C.; Yano, H.; Hayakawa, M.; Nakazawa, S.; Hirata, N.; Kobayashi, M.; Michel, P.; Jutzi, M.; Imamura, H.; Ogawa, K.; Sakatani, N.; Iijima, Y.; Honda, R.; Ishibashi, K.; Hayakawa, H.; Sawada, H.

    2017-07-01

    The Small Carry-on Impactor (SCI) carried on Hayabusa2 was developed to produce an artificial impact crater on the primitive Near-Earth Asteroid (NEA) 162173 Ryugu in order to explore asteroid subsurface material unaffected by space weathering and thermal alteration by solar radiation. A fresh surface exposed by the impactor and/or the ejecta deposit excavated from the crater will be observed by remote sensing instruments, and a fresh subsurface sample of the asteroid will be collected there. The SCI impact experiment will be observed by the Deployable CAMera 3-D (DCAM3-D) at a distance of ~1 km from the impact point, and the time evolution of the ejecta curtain will be observed by this camera to confirm the impact point on the asteroid surface. From the observation of the ejecta curtain by DCAM3-D and of the crater morphology by onboard cameras, the subsurface structure and the physical properties of the constituent materials will be derived using crater scaling laws. Moreover, the SCI experiment on Ryugu gives us a precious opportunity to clarify the effects of microgravity on the cratering process and to validate numerical simulations and models of the cratering process.

  6. Characterization of jellyfish turning using 3D-PTV

    Science.gov (United States)

    Xu, Nicole; Dabiri, John

    2017-11-01

    Aurelia aurita are oblate, radially symmetric jellyfish that consist of a gelatinous bell and subumbrellar muscle ring, which contracts to provide motive force. Swimming is typically modeled as a purely vertical motion; however, asymmetric activations of swim pacemakers (sensory organs that innervate the muscle at eight locations around the bell margin) result in turning and more complicated swim behaviors. More recent studies have examined flow fields around turning jellyfish, but the input/output relationship between locomotive controls and swim trajectories is unclear. To address this, bell kinematics for both straight swimming and turning are obtained using 3D particle tracking velocimetry (3D-PTV) by injecting biocompatible elastomer tags into the bell, illuminating the tank with ultraviolet light, and tracking the resulting fluorescent particles in a multi-camera setup. By understanding these kinematics in both natural and externally controlled free-swimming animals, we can connect neuromuscular control mechanisms to existing flow measurements of jellyfish turning for applications in designing more energy efficient biohybrid robots and underwater vehicles. NSF GRFP.

  7. 2D virtual texture on 3D real object with coded structured light

    Science.gov (United States)

    Molinier, Thierry; Fofi, David; Salvi, Joaquim; Gorria, Patrick

    2008-02-01

    Augmented reality is used to improve color segmentation on the human body or on precious artifacts that must not be touched. We propose a technique to project a synthesized texture onto a real object without contact. Our technique can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and capturing images with a camera, a large number of correspondences can be found and the 3D points can be reconstructed. We aim to determine these points of correspondence between cameras and projector from a scene without explicit points and normals. We then project an adjusted texture onto the real object surface. We propose a global and automatic method to virtually texture a 3D real object.
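
    Coded structured light commonly uses temporal Gray-code stripe patterns: each projector column emits a unique bit sequence over the pattern set, so the bits observed at a camera pixel identify the column it sees. A minimal sketch of encoding and decoding (resolution assumed; real systems add inverse patterns and robust thresholding):

```python
# Temporal Gray-code structured light: project log2(W) stripe patterns;
# the bit sequence seen at a camera pixel identifies the projector column.
import numpy as np

W, H, BITS = 1024, 768, 10          # projector resolution, 2**10 = 1024 (assumed)

cols = np.arange(W)
gray = cols ^ (cols >> 1)           # binary-reflected Gray code per column

# One H x W image per bit plane, most significant bit first
patterns = [np.tile(((gray >> b) & 1).astype(np.uint8) * 255, (H, 1))
            for b in range(BITS - 1, -1, -1)]

def decode(bits):
    """Recover the projector column from the observed bit sequence."""
    g = 0
    for b in bits:                  # rebuild the Gray-coded value, MSB first
        g = (g << 1) | int(b)
    v, shift = g, 1                 # Gray -> binary: XOR all right shifts
    while g >> shift:
        v ^= g >> shift
        shift += 1
    return v

assert decode([p[0, 337] > 127 for p in patterns]) == 337
```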

  8. Design of optical axis jitter control system for multi beam lasers based on FPGA

    Science.gov (United States)

    Ou, Long; Li, Guohui; Xie, Chuanlin; Zhou, Zhiqiang

    2018-02-01

    A design of an FPGA-based closed-loop optical-axis control system for coherent combining of multi-beam lasers is introduced. The system uses piezoelectric fast steering mirrors (FSMs) as actuators, far-field spot detection of the multi-beam lasers by a high-speed CMOS camera for optical sensing, and an FPGA-based controller for real-time optical-axis jitter suppression. The algorithms for optical-axis centroid detection and PID control with anti-integral-windup were realized in the FPGA. The logic was optimized through resource reuse and pipelining, reducing both logic usage and delay time, and the closed-loop bandwidth increases to 100 Hz. Laser jitter below 40 Hz was attenuated by 40 dB. The system is low-cost and works stably.
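
    Two of the named building blocks, intensity-weighted centroid detection and PID with anti-windup, are easy to sketch in software form. Gains, limits and image size below are placeholders; the FPGA implements fixed-point, pipelined versions of the same arithmetic.

```python
# Sketch of the two pieces named above: intensity-weighted centroid of the
# far-field spot, and a PID update with a clamped integral (anti-windup).
import numpy as np

def spot_centroid(img):
    """Intensity-weighted centroid (x, y) of a camera frame."""
    total = img.sum()
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total

class PID:
    def __init__(self, kp, ki, kd, i_limit, dt):
        self.kp, self.ki, self.kd, self.i_limit, self.dt = kp, ki, kd, i_limit, dt
        self.integral, self.prev_err = 0.0, 0.0

    def update(self, err):
        self.integral = np.clip(self.integral + err * self.dt,
                                -self.i_limit, self.i_limit)   # anti-windup
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

pid_x = PID(kp=0.6, ki=5.0, kd=0.002, i_limit=2.0, dt=1e-3)    # placeholder gains
frame = np.zeros((64, 64)); frame[30:34, 40:44] = 1.0          # fake spot
cx, _ = spot_centroid(frame)
print("FSM x-command:", pid_x.update(32.0 - cx))               # target = centre
```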

  9. Multi-slicing strategy for the three-dimensional discontinuity layout optimization (3D DLO).

    Science.gov (United States)

    Zhang, Yiming

    2017-03-01

    Discontinuity layout optimization (DLO) is a recently presented topology optimization method for determining the critical layout of discontinuities and the associated upper-bound limit load for plane two-dimensional and three-dimensional (3D) problems. The modelling process (pre-processing) for DLO includes defining the discontinuities inside a specified domain and building the target function and the global constraint matrix for the optimization solver, which has great influence on the efficiency of the computation and the reliability of the final results. This paper focuses on efficient and reliable pre-processing of the discontinuities within 3D DLO and presents a multi-slicing strategy, which naturally avoids the overlapping and crossing of different discontinuities. Furthermore, a formulation of the 3D discontinuity for an arbitrary convex polygonal shape is introduced, permitting efficient assembly of the global constraint matrix. The proposed method eliminates unnecessary discontinuities in 3D DLO, making it possible to apply 3D DLO to large-scale engineering problems such as those involving landslides. Numerical examples including a footing test, a 3D landslide and a punch indentation are considered, illustrating the effectiveness of the presented method. © 2016 The Authors. International Journal for Numerical and Analytical Methods in Geomechanics published by John Wiley & Sons Ltd.

  10. Applications of multi-pinhole gamma camera collimation to tomography and image enhancement

    Science.gov (United States)

    Simpson, D. R.

    1981-06-01

    Multi-pinhole gamma camera collimation was introduced in the field of emission tomography. This collimation process simultaneously produces several images covering a limited angular range, which may then be recombined to obtain tomographic slices of the object imaged. A possible method for improving the images obtained by this technique by combining two multi-pinhole views taken 90 deg apart was investigated. Collimators were designed and built both for tomography and imaging tablet disintegration, and computer programs were written to reconstruct the images by simple backprojection and by filtered backprojection. The use of multi-pinhole collimators to image the disintegration of tablets in vivo was clearly demonstrated. Phantom tests done in vitro were capable of imaging defects as small as 5 sq mm, while images made with real tablets both in vitro and in vivo readily showed the onset and progress of the tablet disintegration.
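
    The recombination of simultaneous pinhole views into tomographic slices is, in its simplest form, shift-and-add tomosynthesis: structures at a chosen depth line up when each view is shifted by its depth-dependent parallax and the views are averaged, while structures at other depths blur out. A toy sketch with invented geometry (real systems calibrate these shifts):

```python
# Toy shift-and-add reconstruction for multi-pinhole views: shift each
# view by a depth-dependent parallax and average. Geometry is invented.
import numpy as np

def slice_at_depth(views, offsets, depth):
    """views: list of 2D arrays; offsets: per-view (dx, dy) unit parallax."""
    acc = np.zeros_like(views[0], dtype=float)
    for img, (dx, dy) in zip(views, offsets):
        shift = (int(round(dy * depth)), int(round(dx * depth)))
        acc += np.roll(img, shift, axis=(0, 1))
    return acc / len(views)
```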

  11. Development of high-speed video cameras

    Science.gov (United States)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R&D activities on high-speed video cameras, which have been carried out at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors, using the same sensor as the previous camera, was developed in 1996; its frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is under way and, hopefully, it will be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.

  12. Full Body Pose Estimation During Occlusion using Multiple Cameras

    DEFF Research Database (Denmark)

    Fihl, Preben; Cosar, Serhan

    people is a very challenging problem for methods based on pictorial structures, as for any other monocular pose estimation method. In this report we present work on a multi-view approach based on pictorial structures that integrates low level information from multiple calibrated cameras to improve the 2D...

  13. Matching Images to Models: Camera Calibration for 3-D Surface Reconstruction

    Science.gov (United States)

    Morris, Robin D.; Smelyanskiy, Vadim N.; Cheeseman. Peter C.; Norvig, Peter (Technical Monitor)

    2001-01-01

    In a previous paper we described a system which recursively recovers a super-resolved three dimensional surface model from a set of images of the surface. In that paper we assumed that the camera calibration for each image was known. In this paper we solve two problems. Firstly, if an estimate of the surface is already known, the problem is to calibrate a new image relative to the existing surface model. Secondly, if no surface estimate is available, the relative camera calibration between the images in the set must be estimated. This will allow an initial surface model to be estimated. Results of both types of estimation are given.
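
    The first problem, calibrating a new image against an existing surface estimate, is what is now usually posed as camera resectioning (PnP). A sketch with OpenCV, assuming known intrinsics and 3D-2D matches (synthetic stand-ins below); the paper's Bayesian formulation estimates more than this:

```python
# Resectioning a new image against a known surface model, posed as a PnP
# problem with OpenCV. Intrinsics K and the 3D-2D matches are assumed;
# a synthetic ground-truth pose generates consistent correspondences here.
import numpy as np
import cv2

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])   # assumed intrinsics
pts3d = np.random.rand(20, 3) * 2 + [0.0, 0.0, 5.0]           # "surface" points
rvec_gt = np.array([0.1, -0.2, 0.05])                         # hidden true pose
tvec_gt = np.array([0.3, -0.1, 1.0])
pts2d, _ = cv2.projectPoints(pts3d, rvec_gt, tvec_gt, K, None)

ok, rvec, tvec = cv2.solvePnP(pts3d, pts2d, K, None)
R, _ = cv2.Rodrigues(rvec)                 # orientation w.r.t. the surface
print("recovered t:", tvec.ravel(), " truth:", tvec_gt)
print("camera centre:", (-R.T @ tvec).ravel())
```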

  14. COMPARISON BETWEEN RGB AND RGB-D CAMERAS FOR SUPPORTING LOW-COST GNSS URBAN NAVIGATION

    Directory of Open Access Journals (Sweden)

    L. Rossi

    2018-05-01

    Full Text Available Pure GNSS navigation is often unreliable in urban areas because obstructions prevent correct reception of the satellite signal. Bridging GNSS outages, as well as reconstructing the vehicle attitude, can be achieved by using complementary information, such as visual data acquired by RGB-D or RGB cameras. In this work, the possibility of integrating low-cost GNSS and visual data by means of an extended Kalman filter has been investigated, focusing on the comparison between RGB-D and RGB cameras. In particular, a Microsoft Kinect device (second generation) and a mirrorless Canon EOS M RGB camera have been compared. The former is an interesting RGB-D camera because of its low cost, ease of use and raw-data accessibility. The latter was selected for the high quality of the acquired images and for the possibility of mounting fixed focal length lenses with lower weight and cost than a reflex camera. The designed extended Kalman filter takes as input the GNSS-only trajectory and the relative orientation between subsequent pairs of images. The filter differs with the visual data acquisition system because RGB-D cameras acquire both RGB and depth data, solving the scale problem that is typical of image-only solutions. The two systems and filtering approaches were assessed by ad-hoc experimental tests, showing that using a Kinect device to support a u-blox low-cost receiver led to a trajectory with decimetre accuracy, about 15% better than the one obtained with the Canon EOS M camera.
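
    The fusion structure, relative visual odometry driving the prediction and intermittent absolute GNSS fixes correcting it, can be reduced to a minimal Kalman filter. A 2D position-only sketch with invented noise levels (the paper's filter also handles attitude and, for the RGB camera, scale):

```python
# Minimal 2D Kalman-filter sketch of the fusion idea: visual odometry
# drives the prediction; intermittent GNSS fixes correct it.
# Noise levels and motion are invented for illustration.
import numpy as np

x = np.zeros(2)                    # position estimate (E, N)
P = np.eye(2) * 1.0
Q = np.eye(2) * 0.05**2            # VO step noise (assumed)
R = np.eye(2) * 0.5**2             # GNSS fix noise (assumed)

def predict(vo_step):
    global x, P
    x = x + vo_step                # dead-reckon with the VO displacement
    P = P + Q

def update(gnss_fix):
    global x, P
    K = P @ np.linalg.inv(P + R)   # Kalman gain (H = I)
    x = x + K @ (gnss_fix - x)
    P = (np.eye(2) - K) @ P

for step in range(100):
    predict(np.array([0.1, 0.0]))                       # fake forward motion
    if step % 10 == 0:                                  # GNSS only sometimes
        truth = np.array([0.1 * (step + 1), 0.0])
        update(truth + np.random.randn(2) * 0.5)
print("final position:", x, "std:", np.sqrt(np.diag(P)))
```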

  15. Study on a High-frequency Multi-GNSS Real-time Precise Clock Estimation Algorithm and Application in GNSS Augment System

    Directory of Open Access Journals (Sweden)

    CHEN Liang

    2017-05-01

    Full Text Available GNSS satellite-based differential augmentation systems depend on real-time orbit and clock augmentation messages. The multi-GNSS real-time precise clock error estimation model is studied; the parameters estimated in the traditional undifferenced model are then optimized, and an efficient simplified model for real-time clock estimation is proposed and realized. Real-time orbit data processing based on PANDA is also analyzed. The results indicate that the real-time orbit radial accuracy of GPS, BeiDou MEO and Galileo is 1-5 cm, while the radial accuracy for BeiDou GEO/IGSO satellites is about 10 cm. The optimized simplified model is more efficient per epoch than the undifferenced model and can be applied to high-frequency (e.g., 1 Hz) updating of the real-time clock augmentation message. The results show that the real-time clock error obtained by this model is an absolute value with no constant bias. Based on the real-time orbit, the GPS real-time clock precision of the simplified model is about 0.24 ns; for BeiDou it is about 0.50 ns (GEO) and 0.22 ns (IGSO/MEO), and for Galileo about 0.32 ns. Using the multi-GNSS real-time data streams at GFZ, a multi-GNSS real-time augmentation prototype system was built, and the real-time augmentation message is broadcast on the Internet. Real-time PPP centimetre-level service and metre-level pseudorange-based navigation service are realized with this prototype system.

  16. Real-time free-viewpoint DIBR on GPUs for 3DTV systems

    NARCIS (Netherlands)

    Bravo, G.; Do, Q.L.; Zinger, S.; With, de P.H.N.

    2011-01-01

    Multi-view 3D may succeed stereo 3DTV in multimedia and TV applications. To this end, the MPEG committee has established a special task force to develop a standard for multi-view 3D coding. One focal point of our research work is an efficient implementation of the rendering part of such a

  17. Online 4D ultrasound guidance for real-time motion compensation by MLC tracking.

    Science.gov (United States)

    Ipsen, Svenja; Bruder, Ralf; O'Brien, Rick; Keall, Paul J; Schweikard, Achim; Poulsen, Per R

    2016-10-01

    With the trend in radiotherapy moving toward dose escalation and hypofractionation, the need for highly accurate targeting increases. While MLC tracking is already being used successfully for motion compensation of moving targets in the prostate, current real-time target localization methods rely on repeated x-ray imaging and implanted fiducial markers or electromagnetic transponders rather than direct target visualization. In contrast, ultrasound imaging can yield volumetric data in real time (3D + time = 4D) without ionizing radiation. The authors report the first results of combining these promising techniques, online 4D ultrasound guidance and MLC tracking, in a phantom. A software framework for real-time target localization was installed directly on a 4D ultrasound station and used to detect a 2 mm spherical lead marker inside a water tank. The lead marker was rigidly attached to a motion stage programmed to reproduce nine characteristic tumor trajectories chosen from large databases (five prostate, four lung). The 3D marker position detected by ultrasound was transferred to a computer program for MLC tracking at a rate of 21.3 Hz and used for real-time MLC aperture adaptation on a conventional linear accelerator. The tracking system latency was measured using sinusoidal trajectories and compensated for by applying a kernel density prediction algorithm for the lung traces. To measure geometric accuracy, static anterior and lateral conformal fields as well as a 358° arc with a 10 cm circular aperture were delivered for each trajectory. The two-dimensional (2D) geometric tracking error was measured as the difference between marker position and MLC aperture center in continuously acquired portal images. For dosimetric evaluation, VMAT treatment plans with high and low modulation were delivered to a biplanar diode array dosimeter using the same trajectories. Dose measurements with and without MLC tracking were compared to a static reference dose using 3%/3 mm and 2

  18. Helicopter Flight Test of a Compact, Real-Time 3-D Flash Lidar for Imaging Hazardous Terrain During Planetary Landing

    Science.gov (United States)

    Roback, VIncent E.; Amzajerdian, Farzin; Brewster, Paul F.; Barnes, Bruce W.; Kempton, Kevin S.; Reisse, Robert A.; Bulyshev, Alexander E.

    2013-01-01

    A second-generation, compact, real-time, air-cooled 3-D imaging Flash Lidar sensor system, developed from a number of cutting-edge components from industry and NASA, is characterized in the laboratory and flight tested on a helicopter under the Autonomous Precision Landing and Hazard Detection and Avoidance Technology (ALHAT) project. The ALHAT project is seeking to develop a guidance, navigation, and control (GN&C) and sensing system based on lidar technology capable of enabling safe, precise crewed or robotic landings in challenging terrain on planetary bodies under any ambient lighting conditions. The Flash Lidar incorporates a 3-D imaging video camera based on Indium-Gallium-Arsenide Avalanche Photo Diode and novel micro-electronic technology for a 128 x 128 pixel array operating at a video rate of 20 Hz, a high pulse-energy 1.06 µm Neodymium-doped Yttrium Aluminum Garnet (Nd:YAG) laser, a remote laser safety termination system, high-performance transmitter and receiver optics with one- and five-degree fields-of-view (FOV), enhanced onboard thermal control, and a compact, self-contained suite of support electronics housed in a single box and built around a PC-104 architecture to enable autonomous operations. The Flash Lidar was developed and then characterized at two NASA Langley Research Center (LaRC) outdoor laser test range facilities, both statically and dynamically, integrated with other ALHAT GN&C subsystems from partner organizations, and installed on a Bell UH-1H Iroquois "Huey" helicopter at LaRC. The integrated system was flight tested at the NASA Kennedy Space Center (KSC) on a simulated lunar approach to a custom hazard field consisting of rocks, craters, hazardous slopes, and safe sites near the Shuttle Landing Facility runway, starting at slant ranges of 750 m. In order to evaluate different methods of achieving hazard detection, the lidar, in conjunction with the ALHAT hazard detection and GN&C system, operates in both a narrow 1 deg FOV raster

  19. 3D Polygon Mesh Compression with Multi Layer Feed Forward Neural Networks

    Directory of Open Access Journals (Sweden)

    Emmanouil Piperakis

    2003-06-01

    Full Text Available In this paper, an experiment is conducted which shows that multi-layer feed-forward neural networks are capable of compressing 3D polygon meshes. Our compression method not only preserves the initial accuracy of the represented object but also enhances it. The neural network employed encodes the vertex coordinates, the connectivity and the normal information in one compact form, converting the discrete polygon surface representation into an analytic, solid one. Furthermore, the 3D object in its compressed neural form can be used directly for rendering, without decompression. The neural compressed representation is robust to 3D transformations without the need for anti-aliasing techniques: transformations do not disrupt the accuracy of the geometry. Our method does not suffer from any scaling problem and was tested with objects of 300 to 10^7 polygons, such as the David of Michelangelo, achieving in all cases on the order of O(b^3) fewer bits for the representation than any other commonly known compression method. The simplicity of our algorithm and the established mathematical background of neural networks, combined with their aptness for hardware implementation, establish this method as a good solution for polygon compression and, if investigated further, a novel approach for 3D collision, animation and morphing.
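
    A toy version of the idea, with scikit-learn standing in for the paper's custom network: fit a small MLP that maps a surface parameterization to vertex coordinates, then ship only the weights. A sphere replaces a real mesh, and the figures are illustrative only:

```python
# Toy neural mesh compression: learn a map from a 2D vertex
# parameterization to 3D coordinates and transmit only the weights.
# A sphere stands in for a real mesh; sizes are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

n = 2000
uv = np.random.rand(n, 2)                         # vertex parameterization
theta, phi = uv[:, 0] * np.pi, uv[:, 1] * 2 * np.pi
xyz = np.c_[np.sin(theta) * np.cos(phi),
            np.sin(theta) * np.sin(phi),
            np.cos(theta)]                        # "mesh" vertices

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000,
                   tol=1e-6, random_state=0)
net.fit(uv, xyz)

weights = sum(w.size for w in net.coefs_) + sum(b.size for b in net.intercepts_)
err = np.abs(net.predict(uv) - xyz).max()
print(f"{weights} weights vs {xyz.size} raw coordinates, max error {err:.3f}")
```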

  20. Temporal analysis and scheduling of hard real-time radios running on a multi-processor

    NARCIS (Netherlands)

    Moreira, O.

    2012-01-01

    On a multi-radio baseband system, multiple independent transceivers must share the resources of a multi-processor while each meets its own hard real-time requirements. Not all possible combinations of transceivers are known at compile time, so a solution must be found that either allows for