WorldWideScience

Sample records for real-time image-guided robotic

  1. Image-guided robotic surgery.

    Science.gov (United States)

    Marescaux, Jacques; Soler, Luc

    2004-06-01

    Medical image processing improves patient care by guiding the surgical gesture. Three-dimensional models of patients generated from computed tomographic scans or magnetic resonance imaging allow improved surgical planning, and surgical simulation offers the surgeon the opportunity to rehearse the gesture before performing it for real. These two preoperative steps can be used intra-operatively thanks to the development of augmented reality, which consists of superimposing the preoperative three-dimensional model of the patient onto the real intraoperative view. Augmented reality provides the surgeon with a transparent view of the patient and can also guide the surgeon through real-time tracking of surgical tools during the procedure. When adapted to robotic surgery, this tool tracking enables visual servoing, with the ability to automatically position and control surgical robotic arms in three dimensions. It is also now possible to filter physiologic movements such as breathing or the heartbeat. In the future, by combining augmented reality and robotics, these image-guided robotic systems will enable automation of the surgical procedure, which will be the next revolution in surgery.
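The tracking-to-robot loop described above can be illustrated with a minimal proportional servo sketch; all coordinates and the gain below are invented for illustration, not taken from the paper.

```python
import numpy as np

def servo_step(tool_pos, target_pos, gain=0.5):
    """One step of a proportional position-based servo loop: command the
    robot a fraction of the remaining tracked error toward the target."""
    return tool_pos + gain * (target_pos - tool_pos)

# Simulate closing the loop on a (hypothetical) tracked target position.
tool = np.array([0.0, 0.0, 0.0])
target = np.array([10.0, -5.0, 20.0])
for _ in range(20):
    tool = servo_step(tool, target)   # error shrinks geometrically
```

In a real system the error would come from image-based tool tracking rather than known coordinates, and the gain would be bounded by the robot's dynamics and safety limits.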

  2. New real-time MR image-guided surgical robotic system for minimally invasive precision surgery

    Energy Technology Data Exchange (ETDEWEB)

    Hashizume, M.; Yasunaga, T.; Konishi, K. [Kyushu University, Department of Advanced Medical Initiatives, Faculty of Medical Sciences, Fukuoka (Japan); Tanoue, K.; Ieiri, S. [Kyushu University Hospital, Department of Advanced Medicine and Innovative Technology, Fukuoka (Japan); Kishi, K. [Hitachi Ltd, Mechanical Engineering Research Laboratory, Hitachinaka-Shi, Ibaraki (Japan); Nakamoto, H. [Hitachi Medical Corporation, Application Development Office, Kashiwa-Shi, Chiba (Japan); Ikeda, D. [Mizuho Ikakogyo Co. Ltd, Tokyo (Japan); Sakuma, I. [The University of Tokyo, Graduate School of Engineering, Bunkyo-Ku, Tokyo (Japan); Fujie, M. [Waseda University, Graduate School of Science and Engineering, Shinjuku-Ku, Tokyo (Japan); Dohi, T. [The University of Tokyo, Graduate School of Information Science and Technology, Bunkyo-Ku, Tokyo (Japan)

    2008-04-15

    To investigate the usefulness of a newly developed magnetic resonance (MR) image-guided surgical robotic system for minimally invasive laparoscopic surgery. The system consists of MR image guidance [interactive scan control (ISC) imaging, three-dimensional (3-D) navigation, and preoperative planning], an MR-compatible operating table, and an MR-compatible master-slave surgical manipulator that can enter the MR gantry. Using this system, we performed in vivo experiments with MR image-guided laparoscopic puncture on three pigs. We used a mimic tumor made of agarose gel, approximately 2 cm in diameter. All procedures were successfully performed. The operator simply advanced the probe along the guidance device of the manipulator, which was adjusted on the basis of the preoperative plan, and punctured the target while maintaining the operative field with robotic forceps. The position of the probe was monitored continuously with 3-D navigation and 2-D ISC images, as well as with the MR-compatible laparoscope. The ISC image was updated every 4 s; no artifact was detected. The newly developed MR image-guided surgical robotic system allows an operator to perform safe and precise minimally invasive procedures. (orig.)
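The guidance step (orienting the manipulator so that advancing the probe from the chosen entry point reaches the planned target) reduces to simple vector geometry. A minimal sketch with invented coordinates:

```python
import numpy as np

def insertion_plan(entry, target):
    """Insertion direction (unit vector) and depth from a skin entry
    point to a target, both in scanner coordinates -- a minimal
    stand-in for the preoperative planning step."""
    vec = np.asarray(target, float) - np.asarray(entry, float)
    depth = np.linalg.norm(vec)
    return vec / depth, depth

# Hypothetical entry and target positions in mm.
direction, depth = insertion_plan([0, 0, 0], [30, 0, 40])
```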

  3. New real-time MR image-guided surgical robotic system for minimally invasive precision surgery

    International Nuclear Information System (INIS)

    Hashizume, M.; Yasunaga, T.; Konishi, K.; Tanoue, K.; Ieiri, S.; Kishi, K.; Nakamoto, H.; Ikeda, D.; Sakuma, I.; Fujie, M.; Dohi, T.

    2008-01-01

    To investigate the usefulness of a newly developed magnetic resonance (MR) image-guided surgical robotic system for minimally invasive laparoscopic surgery. The system consists of MR image guidance [interactive scan control (ISC) imaging, three-dimensional (3-D) navigation, and preoperative planning], an MR-compatible operating table, and an MR-compatible master-slave surgical manipulator that can enter the MR gantry. Using this system, we performed in vivo experiments with MR image-guided laparoscopic puncture on three pigs. We used a mimic tumor made of agarose gel and with a diameter of approximately 2 cm. All procedures were successfully performed. The operator only advanced the probe along the guidance device of the manipulator, which was adjusted on the basis of the preoperative plan, and punctured the target while maintaining the operative field using robotic forceps. The position of the probe was monitored continuously with 3-D navigation and 2-D ISC images, as well as the MR-compatible laparoscope. The ISC image was updated every 4 s; no artifact was detected. A newly developed MR image-guided surgical robotic system is feasible for an operator to perform safe and precise minimally invasive procedures. (orig.)

  4. Estimate of the real-time respiratory simulation system in cyberknife image-guided radiosurgery

    International Nuclear Information System (INIS)

    Min, Chul Kee; Chung, Weon Kuu; Lee, Suk

    2010-01-01

    The purpose of this study was to evaluate, in a quantitative way, the target accuracy under the respiratory movement of an actual patient by developing a real-time respiratory simulation system (RRSS) that includes a patient-customized 3D moving phantom. The RRSS consists of two robots in order to implement both the movement of body surfaces and the movement of internal organs caused by respiration. The quantitative evaluation of the 3D movement of the RRSS was performed using a real-time laser displacement sensor for each axis. The average difference in the static movement of the RRSS was about 0.01 ∼ 0.06 mm. In the evaluation of the dynamic movement, performed by producing a formalized sine wave with a period of four seconds per cycle, the difference between the measured and the calculated values for each cycle length was 0.10 ∼ 0.55 seconds for the robot in charge of body surfaces and the robot in charge of the movement of internal tumors, and the correlation coefficients between the calculated and the measured values were 0.998 ∼ 0.999. The differences between the maximum and the minimum amplitudes were 0.01 ∼ 0.06 mm, and the reproducibility was within ±0.5 mm. With and without the application of respiration, the target errors were -0.05 ∼ 1.05 mm and -0.13 ∼ 0.74 mm, respectively, and the overall target errors were 1.30 mm and 0.79 mm, respectively. Given the accuracy of the RRSS, various patient respiration patterns can be reproduced in real time, and the system can be used as an optimal tool for patient-customized accuracy management in image-guided radiosurgery.
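The dynamic test above (a commanded sine wave, a measured cycle length, and a correlation coefficient between commanded and measured trajectories) can be reproduced in a few lines; the sampling rate, amplitude and noise level below are invented:

```python
import numpy as np

period = 4.0                        # programmed respiratory period (s)
t = np.arange(0, 40, 0.05)          # 10 cycles sampled at 20 Hz
commanded = 5.0 * np.sin(2 * np.pi * t / period)          # mm
measured = commanded + np.random.default_rng(0).normal(0, 0.05, t.size)

# Estimate the cycle length from intervals between positive-going
# zero crossings of the measured trace.
crossings = t[1:][(measured[:-1] < 0) & (measured[1:] >= 0)]
est_period = np.diff(crossings).mean()

# Correlation coefficient between commanded and measured trajectories.
r = np.corrcoef(commanded, measured)[0, 1]
```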

  5. Estimate of the real-time respiratory simulation system in cyberknife image-guided radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Min, Chul Kee [Konyang Univ. Hospital, Daejeon (Korea, Republic of); Kyonggi University, Seoul (Korea, Republic of); Chung, Weon Kuu [Konyang Univ. Hospital, Daejeon (Korea, Republic of); Lee, Suk [Korea University, Seoul (Korea, Republic of); and others

    2010-01-15

    The purpose of this study was to evaluate, in a quantitative way, the target accuracy under the respiratory movement of an actual patient by developing a real-time respiratory simulation system (RRSS) that includes a patient-customized 3D moving phantom. The RRSS consists of two robots in order to implement both the movement of body surfaces and the movement of internal organs caused by respiration. The quantitative evaluation of the 3D movement of the RRSS was performed using a real-time laser displacement sensor for each axis. The average difference in the static movement of the RRSS was about 0.01 ∼ 0.06 mm. In the evaluation of the dynamic movement, performed by producing a formalized sine wave with a period of four seconds per cycle, the difference between the measured and the calculated values for each cycle length was 0.10 ∼ 0.55 seconds for the robot in charge of body surfaces and the robot in charge of the movement of internal tumors, and the correlation coefficients between the calculated and the measured values were 0.998 ∼ 0.999. The differences between the maximum and the minimum amplitudes were 0.01 ∼ 0.06 mm, and the reproducibility was within ±0.5 mm. With and without the application of respiration, the target errors were -0.05 ∼ 1.05 mm and -0.13 ∼ 0.74 mm, respectively, and the overall target errors were 1.30 mm and 0.79 mm, respectively. Given the accuracy of the RRSS, various patient respiration patterns can be reproduced in real time, and the system can be used as an optimal tool for patient-customized accuracy management in image-guided radiosurgery.

  6. Body-mounted robotic instrument guide for image-guided cryotherapy of renal cancer

    Science.gov (United States)

    Hata, Nobuhiko; Song, Sang-Eun; Olubiyi, Olutayo; Arimitsu, Yasumichi; Fujimoto, Kosuke; Kato, Takahisa; Tuncali, Kemal; Tani, Soichiro; Tokuda, Junichi

    2016-01-01

    Purpose: Image-guided cryotherapy of renal cancer is an emerging alternative to surgical nephrectomy, particularly for those who cannot sustain the physical burden of surgery. It is well known that the outcome of this therapy depends on the accurate placement of the cryotherapy probe. Therefore, a robotic instrument guide may help physicians aim the cryotherapy probe precisely to maximize the efficacy of the treatment and avoid damage to critical surrounding structures. The objective of this paper was to propose a robotic instrument guide for orienting cryotherapy probes in image-guided cryotherapy of renal cancers. The authors propose a body-mounted robotic guide that is expected to be less susceptible to guidance errors caused by the patient’s whole body motion. Methods: Keeping the device’s minimal footprint in mind, the authors developed and validated a body-mounted, robotic instrument guide that can maintain the geometrical relationship between the device and the patient’s body, even in the presence of the patient’s frequent body motions. The guide can orient the cryotherapy probe with the skin incision point as the remote-center-of-motion. The authors’ validation studies included an evaluation of the mechanical accuracy and position repeatability of the robotic instrument guide. The authors also performed a mock MRI-guided cryotherapy procedure with a phantom to compare the advantage of robotically assisted probe replacements over a free-hand approach, by introducing organ motions to investigate their effects on the accurate placement of the cryotherapy probe. Measurements collected for performance analysis included accuracy and time taken for probe placements. Multivariate analysis was performed to assess if either or both organ motion and the robotic guide impacted these measurements. Results: The mechanical accuracy and position repeatability of the probe placement using the robotic instrument guide were 0.3 and 0.1 mm, respectively, at a depth
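The remote-center-of-motion constraint mentioned above means the probe can reorient while the skin incision point stays fixed. A sketch of that geometry using Rodrigues' rotation formula (all coordinates invented):

```python
import numpy as np

def rcm_rotate(point, pivot, axis, angle):
    """Rotate `point` about the line through `pivot` along `axis`
    (Rodrigues' formula). Under a remote-centre-of-motion constraint
    the pivot -- the skin incision point -- is invariant."""
    k = np.asarray(axis, float)
    k = k / np.linalg.norm(k)
    v = np.asarray(point, float) - pivot
    v_rot = (v * np.cos(angle) + np.cross(k, v) * np.sin(angle)
             + k * (k @ v) * (1 - np.cos(angle)))
    return pivot + v_rot

pivot = np.array([0.0, 0.0, 0.0])      # skin entry point
tip = np.array([0.0, 0.0, -80.0])      # probe tip 80 mm past the entry
new_tip = rcm_rotate(tip, pivot, [0, 1, 0], np.radians(10))
```

Reorienting this way changes the target reached at depth without enlarging the incision, which is why the guide pivots about the entry point.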

  7. Evolution of Robot-assisted ultrasound-guided breast biopsy systems

    Directory of Open Access Journals (Sweden)

    Mustafa Z. Mahmoud

    2018-01-01

    Robot-assisted ultrasound-guided breast biopsy combines ultrasound (US) imaging with a robotic system for medical interventions. This study was designed to provide a literature review of robotic US-guided breast biopsy systems and to delineate their impact on current medical practice. The strengths and limitations of this approach are also addressed. Articles published in English between 2000 and 2016 were appraised in this review. A wide range of systems that combine robotics with US imaging and guided breast biopsy were examined. The fundamental safety and real-time imaging capabilities of US, together with the accuracy and maneuverability of robotic devices, form a clearly effective combination. Numerous experimental systems have obvious benefits over traditional techniques, and the future of robot-assisted US-guided breast biopsy will be characterized by increasing levels of automation; such systems hold tremendous potential to improve physician performance, patient recovery, and clinical management.

  8. Enabling image fusion for a CT guided needle placement robot

    Science.gov (United States)

    Seifabadi, Reza; Xu, Sheng; Aalamifar, Fereshteh; Velusamy, Gnanasekar; Puhazhendi, Kaliyappan; Wood, Bradford J.

    2017-03-01

    Purpose: This study presents the development and integration of hardware and software that enable ultrasound (US) and computed tomography (CT) fusion for an FDA-approved CT-guided needle placement robot. Having a real-time US image registered to a previously acquired intraoperative CT image provides more anatomic information during needle insertion, in order to target hard-to-see lesions or avoid critical structures invisible on CT, track target motion, and better monitor the ablation treatment zone in relation to the tumor location. Method: A passive encoded mechanical arm was developed for the robot in order to hold and track an abdominal US transducer. This 4-degree-of-freedom (DOF) arm is designed to attach to the robot end-effector. The arm is locked by default and is released by the press of a button. The arm is designed such that the needle is always in plane with the US image. The articulated arm was calibrated to improve its accuracy. Custom-designed software (OncoNav, NIH) was developed to fuse the real-time US image to the previously acquired CT. Results: The accuracy of the end effector before and after passive arm calibration was 7.07 ± 4.14 mm and 1.74 ± 1.60 mm, respectively. The accuracy of the US image to arm calibration was 5 mm. The feasibility of US-CT fusion using the proposed hardware and software was demonstrated in a commercial abdominal phantom. Conclusions: Calibration significantly improved the accuracy of the arm in US image tracking. Fusion of US to CT using the proposed hardware and software was feasible.
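A passive encoded arm tracks the transducer by composing one rigid transform per encoded joint. A sketch with a hypothetical planar 4-joint chain (joint layout and link lengths invented, not the paper's kinematics):

```python
import numpy as np

def rot_z(theta):
    """Homogeneous transform: rotation by theta about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def trans(x, y, z):
    """Homogeneous transform: pure translation."""
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

def transducer_pose(joint_angles, link_lengths):
    """Forward kinematics of a hypothetical encoded passive arm:
    alternate a z-rotation read from each joint encoder with a fixed
    link translation; the product maps the transducer frame to the
    robot end-effector frame."""
    T = np.eye(4)
    for theta, length in zip(joint_angles, link_lengths):
        T = T @ rot_z(theta) @ trans(length, 0, 0)
    return T

pose = transducer_pose([0.1, -0.2, 0.3, 0.0], [100, 80, 60, 40])  # mm
position = pose[:3, 3]
```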

  9. Three-dimensional ultrasound image-guided robotic system for accurate microwave coagulation of malignant liver tumours.

    Science.gov (United States)

    Xu, Jing; Jia, Zhen-zhong; Song, Zhang-jun; Yang, Xiang-dong; Chen, Ken; Liang, Ping

    2010-09-01

    The further application of conventional ultrasound (US) image-guided microwave (MW) ablation of liver cancer is often limited by two-dimensional (2D) imaging, inaccurate needle placement and the resulting skill requirement. A three-dimensional (3D) image-guided robot-assisted system provides an appealing alternative, enabling the physician to perform consistent, accurate therapy with improved treatment effectiveness. Our robotic system is constructed by integrating an imaging module, a needle-driven robot, a MW thermal field simulation module, and surgical navigation software in a practical and user-friendly manner. The robot executes precise needle placement based on the 3D model reconstructed from freehand-tracked 2D B-scans. A qualitative slice guidance method for fine registration is introduced to reduce the placement error caused by target motion. By incorporating the 3D MW specific absorption rate (SAR) model into the heat transfer equation, the MW thermal field simulation module determines the MW power level and the coagulation time for improved ablation therapy. Two types of wrist were developed for the robot: a 'remote centre of motion' (RCM) wrist and a non-RCM wrist, which is preferred in real applications. The needle placement accuracy of the robot with the RCM wrist improved to 1.6 ± 1.0 mm when real-time 2D US feedback was used in the artificial-tissue phantom experiment. Using the slice guidance method, the robot with the non-RCM wrist achieved an accuracy of 1.8 ± 0.9 mm in the ex vivo experiment, even when target motion was introduced. In the thermal field experiment, a 5.6% relative mean error was observed between the experimental coagulation necrosis volume and the simulation result. The proposed robotic system holds promise to enhance the clinical performance of percutaneous MW ablation of malignant liver tumours. Copyright 2010 John Wiley & Sons, Ltd.
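Coupling a SAR model to the heat-transfer equation, as the simulation module does, can be sketched with a 1-D explicit finite-difference scheme; the tissue parameters and Gaussian SAR profile below are generic assumptions (perfusion omitted), not the paper's model:

```python
import numpy as np

# Generic soft-tissue constants (assumed, not from the paper).
rho, c, k = 1000.0, 3600.0, 0.5        # kg/m^3, J/(kg K), W/(m K)
dx, dt = 1e-3, 0.5                     # 1 mm grid, 0.5 s time step
alpha = k / (rho * c)                  # thermal diffusivity (m^2/s)
assert dt < dx**2 / (2 * alpha)        # explicit-scheme stability limit

x = np.arange(0, 0.05, dx)             # 5 cm of tissue
T = np.full(x.size, 37.0)              # body temperature (deg C)
# Hypothetical SAR profile (W/kg) peaking near the antenna at x = 10 mm.
sar = 2e3 * np.exp(-((x - 0.01) / 0.004) ** 2)

for _ in range(int(60 / dt)):          # simulate 60 s of heating
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T = T + dt * (alpha * lap + sar / c)   # rho*c*dT/dt = k*T'' + rho*SAR
    T[0] = T[-1] = 37.0                # fixed boundary temperature
```

Thresholding the resulting temperature map (e.g. above a coagulation temperature) is one simple way such a simulation predicts the coagulated volume for a given power and time.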

  10. Magnetic resonance-compatible robotic and mechatronics systems for image-guided interventions and rehabilitation: a review study.

    Science.gov (United States)

    Tsekos, Nikolaos V; Khanicheh, Azadeh; Christoforou, Eftychios; Mavroidis, Constantinos

    2007-01-01

    The continuous technological progress of magnetic resonance imaging (MRI), as well as its widespread clinical use as a highly sensitive tool in diagnostics and advanced brain research, has brought a high demand for the development of magnetic resonance (MR)-compatible robotic/mechatronic systems. Revolutionary robots guided by real-time three-dimensional (3-D)-MRI allow reliable and precise minimally invasive interventions with relatively short recovery times. Dedicated robotic interfaces used in conjunction with fMRI allow neuroscientists to investigate the brain mechanisms of manipulation and motor learning, as well as to improve rehabilitation therapies. This paper gives an overview of the motivation, advantages, technical challenges, and existing prototypes for MR-compatible robotic/mechatronic devices.

  11. Deep architecture neural network-based real-time image processing for image-guided radiotherapy.

    Science.gov (United States)

    Mori, Shinichiro

    2017-08-01

    To develop real-time image processing for image-guided radiotherapy, we evaluated several neural network models for use with different imaging modalities, including X-ray fluoroscopic image denoising. Setup images of prostate cancer patients were acquired with two oblique X-ray fluoroscopic units. Two types of residual network were designed: a residual convolutional autoencoder (rCAE) and a residual convolutional neural network (rCNN). We changed the convolutional kernel size and number of convolutional layers for both networks, and the number of pooling and upsampling layers for rCAE. The ground-truth image was generated by applying contrast-limited adaptive histogram equalization (CLAHE) to the input image. Network models were trained to keep the quality of the output image close to that of the ground-truth image given the unprocessed input image. For the image denoising evaluation, noisy input images were used for the training. More than 6 convolutional layers with convolutional kernels >5×5 improved image quality, but did not allow real-time imaging. After applying a pair of pooling and upsampling layers to both networks, rCAEs with >3 convolutions each and rCNNs with >12 convolutions with a pair of pooling and upsampling layers achieved real-time processing at 30 frames per second (fps) with acceptable image quality. The suggested networks achieved real-time image processing for contrast enhancement and image denoising on a conventional modern personal computer. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
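The building block these networks stack is the convolution. A minimal sketch of a single 3×3 'same' convolution used as a denoiser, with a box-blur kernel standing in for learned weights (image and noise level invented):

```python
import numpy as np

def conv3x3(img, kernel):
    """One 3x3 'same' convolution with zero padding -- the basic
    operation the rCNN/rCAE models stack many times."""
    p = np.pad(img, 1)
    out = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            out += kernel[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

rng = np.random.default_rng(1)
clean = np.tile(np.linspace(0, 1, 64), (64, 1))   # smooth synthetic image
noisy = clean + rng.normal(0, 0.1, clean.shape)
smooth_kernel = np.full((3, 3), 1 / 9)            # box blur as a stand-in
denoised = conv3x3(noisy, smooth_kernel)
```

A trained network replaces the fixed kernel with learned ones and interleaves nonlinearities; a residual variant would add the input back to the network output.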

  12. The CI-ROB project: real time monitoring for a robotic prostate curietherapy

    International Nuclear Information System (INIS)

    Liem, X.; Lartigau, E.; Coelen, V.; Merzouki, R.

    2010-01-01

    The authors present a project that is still at the prototyping stage, with hardware and software under development, aiming to build a complete pipeline for robotic curietherapy (brachytherapy). The system comprises an articulated robotic arm with six degrees of freedom and is based on a self-regulating loop with real-time monitoring and control. An ultrasound probe acquires the prostate images in real time, and adaptive detection defines the prostate contour. This is performed in a virtual environment comprising the prostate phantom, the robot and the intervention table. The target is defined in the virtual environment (image coordinates) and its coordinates are transmitted to the robot controller, which computes the robot movements using the inverse geometric model. Short communication
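The "inverse geometric model" step, mapping target coordinates to joint commands, can be illustrated in closed form on a planar 2-link arm (one of the two elbow solutions; link lengths and target invented, far simpler than the 6-DOF arm described):

```python
import numpy as np

def ik_2link(x, y, l1, l2):
    """Closed-form inverse geometric model of a planar 2-link arm:
    joint angles placing the tip at (x, y), assuming it is reachable."""
    d2 = x * x + y * y
    cos_q2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    q2 = np.arccos(np.clip(cos_q2, -1.0, 1.0))
    q1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(q2), l1 + l2 * np.cos(q2))
    return q1, q2

def fk_2link(q1, q2, l1, l2):
    """Forward model, used here to check the inverse."""
    return (l1 * np.cos(q1) + l2 * np.cos(q1 + q2),
            l1 * np.sin(q1) + l2 * np.sin(q1 + q2))

q1, q2 = ik_2link(120.0, 90.0, 100.0, 80.0)   # target (mm), link lengths (mm)
```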

  13. Real Time Indoor Robot Localization Using a Stationary Fisheye Camera

    OpenAIRE

    Delibasis, Konstantinos; Plagianakos, Vasilios; Maglogiannis, Ilias

    2013-01-01

    Part 7: Intelligent Signal and Image Processing; A core problem in robotics is the localization of a mobile robot (determination of its location or pose) in its environment, since the robot’s behavior depends on its position. In this work, we propose the use of a stationary fisheye camera for real-time robot localization in indoor environments. We employ an image formation model for the fisheye camera, which is used to accelerate the segmentation of the robot’s top ...
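A common fisheye image-formation model is the equidistant projection, where the image radius is proportional to the ray angle from the optical axis (r = f·θ). A sketch, with the focal length and points invented and no claim that this is the exact model the paper uses:

```python
import numpy as np

def fisheye_project(point_cam, f=300.0):
    """Equidistant fisheye model: a 3-D point in camera coordinates
    (+z along the optical axis) maps to image radius r = f * theta,
    with theta the angle between the ray and the optical axis."""
    x, y, z = point_cam
    theta = np.arctan2(np.hypot(x, y), z)   # ray angle from optical axis
    phi = np.arctan2(y, x)                  # azimuth around the axis
    r = f * theta
    return r * np.cos(phi), r * np.sin(phi)

u, v = fisheye_project((1.0, 0.0, 2.0))
```

Inverting such a model over a segmented robot blob is one way a single ceiling-mounted fisheye camera can recover a floor position.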

  14. Integrated navigation and control software system for MRI-guided robotic prostate interventions.

    Science.gov (United States)

    Tokuda, Junichi; Fischer, Gregory S; DiMaio, Simon P; Gobbi, David G; Csoma, Csaba; Mewes, Philip W; Fichtinger, Gabor; Tempany, Clare M; Hata, Nobuhiko

    2010-01-01

    A software system to provide intuitive navigation for MRI-guided robotic transperineal prostate therapy is presented. In the system, the robot control unit, the MRI scanner, and the open-source navigation software are connected together via Ethernet to exchange commands, coordinates, and images using an open network communication protocol, OpenIGTLink. The system has six states called "workphases" that provide the necessary synchronization of all components during each stage of the clinical workflow, and the user interface guides the operator linearly through these workphases. On top of this framework, the software provides the following features for needle guidance: interactive target planning; 3D image visualization with current needle position; treatment monitoring through real-time MR images of needle trajectories in the prostate. These features are supported by calibration of robot and image coordinates by fiducial-based registration. Performance tests show that the registration error of the system was 2.6 mm within the prostate volume. Registered real-time 2D images were displayed 1.97 s after the image location was specified. Copyright 2009 Elsevier Ltd. All rights reserved.
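The six-workphase synchronization described above can be sketched as a small broadcast state machine; the phase names below are illustrative guesses at typical workflow stages, not taken verbatim from the paper:

```python
# Illustrative workphase names; the paper's actual labels may differ.
WORKPHASES = ["START_UP", "PLANNING", "CALIBRATION",
              "TARGETING", "MANUAL", "EMERGENCY"]

class WorkphaseManager:
    """Keeps every connected component (robot controller, scanner,
    navigation software) in the same clinical workflow state by
    broadcasting each transition to registered listeners."""

    def __init__(self):
        self.phase = "START_UP"
        self.listeners = []      # callbacks, e.g. one per connected peer

    def transition(self, phase):
        if phase not in WORKPHASES:
            raise ValueError(f"unknown workphase: {phase}")
        self.phase = phase
        for notify in self.listeners:
            notify(phase)

wm = WorkphaseManager()
received = []
wm.listeners.append(received.append)
wm.transition("PLANNING")        # all listeners now observe "PLANNING"
```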

  15. Integrated navigation and control software system for MRI-guided robotic prostate interventions

    Science.gov (United States)

    Tokuda, Junichi; Fischer, Gregory S.; DiMaio, Simon P.; Gobbi, David G.; Csoma, Csaba; Mewes, Philip W.; Fichtinger, Gabor; Tempany, Clare M.; Hata, Nobuhiko

    2010-01-01

    A software system to provide intuitive navigation for MRI-guided robotic transperineal prostate therapy is presented. In the system, the robot control unit, the MRI scanner, and the open-source navigation software are connected together via Ethernet to exchange commands, coordinates, and images using an open network communication protocol, OpenIGTLink. The system has six states called “workphases” that provide the necessary synchronization of all components during each stage of the clinical workflow, and the user interface guides the operator linearly through these workphases. On top of this framework, the software provides the following features for needle guidance: interactive target planning; 3D image visualization with current needle position; treatment monitoring through real-time MR images of needle trajectories in the prostate. These features are supported by calibration of robot and image coordinates by fiducial-based registration. Performance tests show that the registration error of the system was 2.6 mm within the prostate volume. Registered real-time 2D images were displayed 1.97 s after the image location was specified. PMID:19699057

  16. Real-time Fluorescence Image-Guided Oncologic Surgery

    Science.gov (United States)

    Mondal, Suman B.; Gao, Shengkui; Zhu, Nan; Liang, Rongguang; Gruev, Viktor; Achilefu, Samuel

    2014-01-01

    Medical imaging plays a critical role in cancer diagnosis and treatment planning. Many of these patients rely on surgical intervention for curative outcomes. This requires careful identification of the primary and microscopic tumors, and the complete removal of cancer. Although there have been efforts to adapt traditional imaging modalities for intraoperative image guidance, they suffer from several constraints such as large hardware footprint, high operation cost, and disruption of the surgical workflow. Because of the ease of image acquisition, relatively low device cost and intuitive operation, optical imaging methods have received tremendous interest for use in real-time image-guided surgery. To improve imaging depth under low interference by tissue autofluorescence, many of these applications utilize light in the near-infrared (NIR) wavelengths, which is invisible to human eyes. With the availability of a wide selection of tumor-avid contrast agents and advancements in imaging sensors and electronic and optical designs, surgeons are able to combine different attributes of NIR optical imaging techniques to improve treatment outcomes. The emergence of diverse commercial and experimental image guidance systems, which are in various stages of clinical translation, attests to the potential of intraoperative optical imaging methods to improve the speed of oncologic surgery with high accuracy and minimal margin positivity. PMID:25287689

  17. Robotic 4D ultrasound solution for real-time visualization and teleoperation

    Directory of Open Access Journals (Sweden)

    Al-Badri Mohammed

    2017-09-01

    Automation of the image acquisition process via robotic solutions offers a large leap towards resolving ultrasound's user dependency. This paper, part of a larger project aimed at developing a multipurpose 4D ultrasonic force-sensitive robot for medical applications, focuses on achieving real-time remote visualization of 4D ultrasound image transfer. This was made possible by implementing our software modification on a GE Vivid 7 Dimension workstation, which operates a matrix array probe controlled by a KUKA LBR iiwa 7 7-DOF robotic arm. With the help of robotic positioning and the matrix array probe, fast volumetric imaging of target regions was feasible. By testing ultrasound volumes of roughly 880 kB in size over a gigabit Ethernet connection, a latency of ∼57 ms was achievable for volume transfer between the ultrasound station and a remote client application, which allows a frame count of 17.4 fps. Our modification thus offers, for the first time, real-time remote visualization, recording and control of 4D ultrasound data, which can be implemented in teleoperation.
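The quoted frame rate follows directly from the measured latency, and the raw wire time shows the gigabit link itself is not the bottleneck; a quick arithmetic check:

```python
# Back-of-the-envelope check of the figures quoted above: an ~880 kB
# volume over gigabit Ethernet, ~57 ms end-to-end latency per volume.
volume_bits = 880e3 * 8
wire_time = volume_bits / 1e9      # ideal line-rate transfer time: ~7 ms
latency = 0.057                    # measured end-to-end latency (s)
fps = 1.0 / latency                # volumes per second if fully pipelined
```

1/0.057 s is about 17.5 volumes/s, consistent with the reported 17.4 fps, and only ~7 of the 57 ms is raw transfer, so most of the latency lies in acquisition and processing rather than the network.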

  18. Frame-less image-guided intracranial and extracranial radiosurgery using the Cyberknife robotic system

    International Nuclear Information System (INIS)

    Gibbs, I.C.

    2006-01-01

    The Cyberknife™ is an image-guided robotic radiosurgery system. The image guidance system includes a kilovoltage X-ray imaging source and amorphous silicon detectors. The radiation delivery device is a mobile X-band linear accelerator mounted on a robotic arm. Through a highly complex interplay between the image guidance system, an automated couch, and the high-speed linear accelerator, near real-time tracking of the target is achieved. The Cyberknife™ gained Food and Drug Administration clearance in the United States in 2001 for treatment of tumors 'anywhere in the body where radiation treatment is indicated'. Because the Cyberknife™ system does not rely on rigid fixation of a stereotactic frame, tumors outside of the intracranial compartment, even tumors that move with respiration, can be treated with a similar degree of ease as intracranial targets. A description of the Cyberknife™ technology and a review of some of the current intracranial and extracranial applications are detailed herein. (author)

  19. An integrated multimodality image-guided robot system for small-animal imaging research

    International Nuclear Information System (INIS)

    Hsu, Wen-Lin; Hsin Wu, Tung; Hsu, Shih-Ming; Chen, Chia-Lin; Lee, Jason J.S.; Huang, Yung-Hui

    2011-01-01

    We design and construct an image-guided robot system for use in small-animal imaging research. This device uses co-registered small-animal PET-MRI images to guide the movements of robotic controllers, which accurately place a needle probe at any predetermined location inside, for example, a mouse tumor, for biological readouts without sacrificing the animal. The system is composed of three major components: an automated robot device, a CCD monitoring mechanism, and a multimodality registration implementation. Specifically, the CCD monitoring mechanism was used for correction and validation of the robot device. To demonstrate the value of the proposed system, we performed a tumor hypoxia study that involved FMISO small-animal PET imaging and the delivery of a pO2 probe into the mouse tumor using the image-guided robot system. During our evaluation, the needle positioning error was within 0.153±0.042 mm of the desired placement; the phantom simulation errors were within 0.693±0.128 mm. In small-animal studies, the pO2 probe measurements in the corresponding hypoxia areas showed good correlation with significant, low tissue oxygen tensions (less than 6 mmHg). We have confirmed the feasibility of the system and successfully applied it to small-animal investigations. The system could easily be adapted and extended to other biomedical investigations in the future.

  20. Automated dental implantation using image-guided robotics: registration results.

    Science.gov (United States)

    Sun, Xiaoyan; McKenzie, Frederic D; Bawab, Sebastian; Li, Jiang; Yoon, Yongki; Huang, Jen-K

    2011-09-01

    One of the most important factors affecting the outcome of dental implantation is the accurate insertion of the implant into the patient's jaw bone, which requires a high degree of anatomical accuracy. Given the accuracy and stability of robots, image-guided robotics is expected to provide more reliable and successful outcomes for dental implantation. Here, we propose the use of a robot for drilling the implant site in preparation for the insertion of the implant. An image-guided robotic system for automated dental implantation is described in this paper. Patient-specific 3D models are reconstructed from preoperative cone-beam CT images, and implantation planning is performed with these virtual models. A two-step registration procedure is applied to transform the preoperative plan of the implant insertion into intra-operative operations of the robot with the help of a coordinate measuring machine (CMM). Experiments are carried out with a phantom that is generated from the patient-specific 3D model. Fiducial registration error (FRE) and target registration error (TRE) values are calculated to evaluate the accuracy of the registration procedure. FRE values are less than 0.30 mm. Final TRE values after the two-step registration are 1.42 ± 0.70 mm (N = 5). The registration results of an automated dental implantation system using image-guided robotics are reported in this paper. Phantom experiments show that use of the robot in dental implantation is feasible and that the system accuracy is comparable to other similar systems for dental implantation.
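The point-based rigid registration underlying FRE and TRE is the standard least-squares (Kabsch/Horn) solution. A sketch with synthetic, noise-free fiducials; the point values and transform are invented:

```python
import numpy as np

def rigid_register(A, B):
    """Least-squares rigid transform (R, t) mapping fiducial set A onto
    B via SVD -- the standard point-based registration step."""
    ca, cb = A.mean(0), B.mean(0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T               # enforce a proper rotation (det=+1)
    t = cb - R @ ca
    return R, t

rng = np.random.default_rng(2)
fiducials = rng.uniform(0, 100, (5, 3))          # planned positions (mm)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
measured = fiducials @ R_true.T + [10, -5, 2]    # "measured" positions
R, t = rigid_register(fiducials, measured)

# Fiducial Registration Error: RMS residual over the fiducials.
fre = np.sqrt(np.mean(np.sum((fiducials @ R.T + t - measured) ** 2, axis=1)))
```

With real, noisy measurements FRE is nonzero, and TRE is the same residual evaluated at targets that were not used to compute the transform.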

  1. Evaluation of a robotic technique for transrectal MRI-guided prostate biopsies

    Energy Technology Data Exchange (ETDEWEB)

    Schouten, Martijn G. [Radboud University Nijmegen Medical Centre, Department of Radiology, Nijmegen (Netherlands); University Medical Centre Nijmegen, Department of Radiology, Nijmegen (Netherlands); Bomers, Joyce G.R.; Yakar, Derya; Huisman, Henkjan; Bosboom, Dennis; Scheenen, Tom W.J.; Fuetterer, Jurgen J. [Radboud University Nijmegen Medical Centre, Department of Radiology, Nijmegen (Netherlands); Rothgang, Eva [Pattern Recognition Lab, Friedrich-Alexander-University of Erlangen-Nuremberg, Erlangen (Germany); Center for Applied Medical Imaging, Siemens Corporate Research (Germany); Center for Applied Medical Imaging, Siemens Corporate Research, Baltimore, MD (United States); Misra, Sarthak [University of Twente, MIRA-Institute of Biomedical Technology and Technical Medicine, Enschede (Netherlands)

    2012-02-15

    To evaluate the accuracy and speed of a novel robotic technique as an aid to performing magnetic resonance image (MRI)-guided prostate biopsies in patients with cancer-suspicious regions. A pneumatically controlled MR-compatible manipulator with 5 degrees of freedom was developed in-house to guide biopsies under real-time imaging. From 13 consecutive biopsy procedures, the targeting error, biopsy error and target displacement were calculated to evaluate the accuracy. The time was recorded to evaluate manipulation and procedure time. The robotic and manual techniques demonstrated comparable results regarding mean targeting error (5.7 vs 5.8 mm, respectively) and mean target displacement (6.6 vs 6.0 mm, respectively). The mean biopsy error was larger (6.5 vs 4.4 mm) when using the robotic technique, although the difference was not significant. Mean procedure and manipulation times were 76 min and 6 min, respectively, using the robotic technique, and 61 and 8 min with the manual technique. Although comparable results regarding accuracy and speed were found, the extended technical effort of the robotic technique makes the manual technique - currently - more suitable for performing MRI-guided biopsies. Furthermore, this study provided better insight into displacement of the target during in vivo biopsy procedures. (orig.)

  2. Evaluation of a robotic technique for transrectal MRI-guided prostate biopsies

    International Nuclear Information System (INIS)

    Schouten, Martijn G.; Bomers, Joyce G.R.; Yakar, Derya; Huisman, Henkjan; Bosboom, Dennis; Scheenen, Tom W.J.; Fuetterer, Jurgen J.; Rothgang, Eva; Misra, Sarthak

    2012-01-01

    To evaluate the accuracy and speed of a novel robotic technique as an aid to performing magnetic resonance image (MRI)-guided prostate biopsies in patients with cancer-suspicious regions. A pneumatically controlled MR-compatible manipulator with 5 degrees of freedom was developed in-house to guide biopsies under real-time imaging. From 13 consecutive biopsy procedures, the targeting error, biopsy error and target displacement were calculated to evaluate the accuracy. The time was recorded to evaluate manipulation and procedure time. The robotic and manual techniques demonstrated comparable results regarding mean targeting error (5.7 vs 5.8 mm, respectively) and mean target displacement (6.6 vs 6.0 mm, respectively). The mean biopsy error was larger (6.5 vs 4.4 mm) when using the robotic technique, although the difference was not significant. Mean procedure and manipulation times were 76 min and 6 min, respectively, using the robotic technique, and 61 and 8 min with the manual technique. Although comparable results regarding accuracy and speed were found, the extended technical effort of the robotic technique makes the manual technique - currently - more suitable for performing MRI-guided biopsies. Furthermore, this study provided better insight into displacement of the target during in vivo biopsy procedures. (orig.)

  3. A real time tracking vision system and its application to robotics

    International Nuclear Information System (INIS)

    Inoue, Hirochika

    1994-01-01

    Among the various sensing channels, vision is the most important for making a robot intelligent. If provided with a high-speed visual tracking capability, the robot-environment interaction becomes dynamic instead of static, and thus the potential repertoire of robot behavior becomes very rich. For this purpose we developed a real-time tracking vision system. The fundamental operation on which our system is based is the calculation of correlation between local images. Use of a special chip for correlation and a multi-processor configuration enables the robot to track hundreds of cues at full video rate. In addition to the fundamental visual performance, applications to robot behavior control are also introduced. (author)

  4. Technique for Targeting Arteriovenous Malformations Using Frameless Image-Guided Robotic Radiosurgery

    International Nuclear Information System (INIS)

    Hristov, Dimitre; Liu, Lina; Adler, John R.; Gibbs, Iris C.; Moore, Teri; Sarmiento, Marily; Chang, Steve D.; Dodd, Robert; Marks, Michael; Do, Huy M.

    2011-01-01

    Purpose: To integrate three-dimensional (3D) digital rotation angiography (DRA) and two-dimensional (2D) digital subtraction angiography (DSA) imaging into a targeting methodology enabling comprehensive image-guided robotic radiosurgery of arteriovenous malformations (AVMs). Methods and Materials: DRA geometric integrity was evaluated by imaging a phantom with embedded markers. Dedicated DSA acquisition modes with preset C-arm positions were configured. The geometric reproducibility of the presets was determined, and its impact on localization accuracy was evaluated. An imaging protocol composed of anterior-posterior and lateral DSA series in combination with a DRA run without couch displacement between acquisitions was introduced. Software was developed for registration of DSA and DRA (2D-3D) images to correct for: (a) small misalignments of the C-arm with respect to the estimated geometry of the set positions and (b) potential patient motion between image series. Within the software, correlated navigation of registered DRA and DSA images was incorporated to localize AVMs within a 3D image coordinate space. Subsequent treatment planning and delivery followed a standard image-guided robotic radiosurgery process. Results: DRA spatial distortions were typically smaller than 0.3 mm throughout a 145-mm x 145-mm x 145-mm volume. With 2D-3D image registration, localization uncertainties resulting from the achievable reproducibility of the C-arm set positions could be reduced to about 0.2 mm. Overall system-related localization uncertainty within the DRA coordinate space was 0.4 mm. Image-guided frameless robotic radiosurgical treatments with this technique were initiated. Conclusions: The integration of DRA and DSA into the process of nidus localization increases the confidence with which radiosurgical ablation of AVMs can be performed when using only an image-guided technique. Such an approach can increase patient comfort, decrease time pressure on clinical and

  5. Real-time image mosaicing for medical applications.

    Science.gov (United States)

    Loewke, Kevin E; Camarillo, David B; Jobst, Christopher A; Salisbury, J Kenneth

    2007-01-01

    In this paper we describe the development of a robotically assisted image mosaicing system for medical applications. The processing occurs in real time thanks to a fast initial image alignment provided by robotic position sensing. Near-field imaging, characterized by relatively large camera motion, requires translations as well as pan and tilt orientations to be measured. To capture these measurements we use 5-d.o.f. sensing along with a hand-eye calibration to account for sensor offset. This sensor-based approach speeds up the mosaicing, eliminates cumulative errors, and readily handles arbitrary camera motions. Our results have produced visually satisfactory mosaics on a dental model and can be extended to other medical images.

  6. Using real-time stereopsis for mobile robot control

    Science.gov (United States)

    Bonasso, R. P.; Nishihara, H. K.

    1991-02-01

    This paper describes on-going work in using range and motion data generated at video-frame rates as the basis for long-range perception in a mobile robot. A current approach in the artificial intelligence community to achieving time-critical perception for situated reasoning is to use low-level perception for motor reflex-like activity and higher-level but more computationally intense perception for path planning, reconnaissance, and retrieval activities. Typically, inclinometers and a compass or an infra-red beacon system provide stability and orientation maintenance, and ultrasonic or infra-red sensors serve as proximity detectors for obstacle avoidance. For distant ranging and area occupancy determination, active imaging systems such as laser scanners can be prohibitively expensive, and heretofore passive systems have typically performed more slowly than the cycle time of the control system, causing the robot to halt periodically along its way. However, a recent stereo system developed by Nishihara, known as PRISM (Practical Real-time Imaging Stereo Matcher), matches stereo pairs using a sign-correlation technique that gives range and motion at video frame rates. We are integrating this technique with constant-time control software for distant ranging and object detection at a speed that is comparable with the cycle times of the low-level sensors. Possibilities for a variety of uses in a leader-follower mobile robot situation are discussed.
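The sign-correlation idea behind PRISM can be sketched on a single scanline: reduce each pixel to the sign of a local filter response, then score candidate disparities by how many signs agree. The arrays and window sizes below are toy values, and a plain horizontal gradient stands in for PRISM's band-pass (Laplacian-of-Gaussian) filtering:

```python
def sign_features(row):
    """One binary feature per pixel: sign of the horizontal gradient
    (a stand-in for the sign of a band-pass filter response)."""
    return [1 if row[i + 1] - row[i] >= 0 else 0 for i in range(len(row) - 1)]

def disparity(left_row, right_row, window, max_disp):
    """Best disparity for the window starting at x=0 in the left row,
    scored by the number of matching feature signs (sign correlation)."""
    lf = sign_features(left_row)[:window]
    best_d, best_score = 0, -1
    for d in range(max_disp + 1):
        rf = sign_features(right_row[d:d + window + 1])
        score = sum(1 for a, b in zip(lf, rf) if a == b)
        if score > best_score:
            best_d, best_score = d, score
    return best_d

# The right scanline is the left one shifted right by 3 pixels
left  = [0, 2, 5, 9, 7, 4, 1, 0, 3, 6, 8, 5, 2, 1, 0, 2]
right = [0, 0, 0] + left[:-3]
d = disparity(left, right, window=8, max_disp=6)   # recovers d == 3
```

Comparing 1-bit signs rather than raw intensities makes the correlation cheap enough for hardware, which is what enables matching at video frame rates.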

  7. Humanoid Robotics: Real-Time Object Oriented Programming

    Science.gov (United States)

    Newton, Jason E.

    2005-01-01

    Programming of robots today is often done in a procedural fashion, without incorporating object oriented programming. In order to keep a robust architecture allowing for easy expansion of capabilities and a truly modular design, object oriented programming is required. However, concepts in object oriented programming are not typically applied to a real-time environment. The Fujitsu HOAP-2 is the test bed for the development of a humanoid robot framework that abstracts control of the robot into simple logical commands in a real-time robotic system while allowing full access to all sensory data. In addition to interfacing between the motor and sensory systems, this paper discusses the software which operates multiple independently developed control systems simultaneously and the safety measures which keep the humanoid from damaging itself and its environment while running these systems. The use of this software decreases development time and costs and allows changes to be made while keeping results safe and predictable.

  8. The first clinical implementation of real-time image-guided adaptive radiotherapy using a standard linear accelerator.

    Science.gov (United States)

    Keall, Paul J; Nguyen, Doan Trang; O'Brien, Ricky; Caillet, Vincent; Hewson, Emily; Poulsen, Per Rugaard; Bromley, Regina; Bell, Linda; Eade, Thomas; Kneebone, Andrew; Martin, Jarad; Booth, Jeremy T

    2018-04-01

    Until now, real-time image guided adaptive radiation therapy (IGART) has been the domain of dedicated cancer radiotherapy systems. The purpose of this study was to clinically implement and investigate real-time IGART using a standard linear accelerator. We developed and implemented two real-time technologies for standard linear accelerators: (1) Kilovoltage Intrafraction Monitoring (KIM) that finds the target and (2) multileaf collimator (MLC) tracking that aligns the radiation beam to the target. Eight prostate SABR patients were treated with this real-time IGART technology. The feasibility, geometric accuracy and the dosimetric fidelity were measured. Thirty-nine out of forty fractions with real-time IGART were successful (95% confidence interval 87-100%). The geometric accuracy of the KIM system was -0.1 ± 0.4, 0.2 ± 0.2 and -0.1 ± 0.6 mm in the LR, SI and AP directions, respectively. The dose reconstruction showed that real-time IGART more closely reproduced the planned dose than that without IGART. For the largest motion fraction, with real-time IGART 100% of the CTV received the prescribed dose; without real-time IGART only 95% of the CTV would have received the prescribed dose. The clinical implementation of real-time image-guided adaptive radiotherapy on a standard linear accelerator using KIM and MLC tracking is feasible. This achievement paves the way for real-time IGART to be a mainstream treatment option. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. SMR-CL, A Real-time Control Language for Mobile Robots

    DEFF Research Database (Denmark)

    Andersen, Nils Axel; Ravn, Ole

    2004-01-01

    The paper describes the requirements and implementation of a tactical control language for mobile robots. Emphasis is given to the real-time issues of the language, especially the isolation of the hard real-time and the soft real-time layers of the mobile robot control system. The language may be used...

  10. Real-time Stereoscopic 3D for E-Robotics Learning

    Directory of Open Access Journals (Sweden)

    Richard Y. Chiou

    2011-02-01

    Full Text Available Following the design and testing of a successful 3-dimensional surveillance system, this 3D scheme has been implemented into online robotics learning at Drexel University. A real-time application, utilizing robot controllers, programmable logic controllers and sensors, has been developed in the “MET 205 Robotics and Mechatronics” class to provide the students with a better robotic education. The integration of the 3D system allows the students to precisely program the robot and execute functions remotely. Upon the students’ recommendation, polarization has been chosen as the main platform behind the 3D robotic system. Stereoscopic calculations are carried out for calibration purposes to display the images with the highest possible comfort level and 3D effect. The calculations are further validated by comparing the results with students’ evaluations. Due to the Internet-based feature, multiple clients have the opportunity to perform online automation development. In the future, students in different universities will be able to cross-control robotic components of different types around the world. With the development of this 3D E-Robotics interface, automation resources and robotic learning can be shared and enriched regardless of location.

  11. FPGA-based High-Performance Collision Detection: An Enabling Technique for Image-Guided Robotic Surgery

    Directory of Open Access Journals (Sweden)

    Zhaorui Zhang

    2016-08-01

    Full Text Available Collision detection, which refers to the computational problem of finding the relative placement or configuration of two or more objects, is an essential component of many applications in computer graphics and robotics. In image-guided robotic surgery, real-time collision detection is critical for preserving healthy anatomical structures during the surgical procedure. However, the computational complexity of the problem usually results in algorithms that operate at low speed. In this paper, we present a fast and accurate algorithm for collision detection between Oriented Bounding Boxes (OBBs) that is suitable for real-time implementation. Our proposed Sweep and Prune algorithm performs preliminary filtering to reduce the number of objects that need to be tested by the classical Separating Axis Test algorithm, while the OBB pairs of interest are preserved. These OBB pairs are re-checked by the Separating Axis Test algorithm to obtain their accurate overlapping status. To accelerate the execution, our Sweep and Prune algorithm is tailor-made for the proposed method. Meanwhile, a high-performance scalable hardware architecture is proposed by analyzing the intrinsic parallelism of our algorithm, and is implemented on an FPGA platform. Results show that our hardware design on the FPGA platform achieves around 8X higher running speed than a software design on a CPU platform. As a result, the proposed algorithm can achieve a collision frame rate of 1 kHz, and fulfill the requirement for the medical surgery scenario of Robot Assisted Laparoscopy.
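The broad-phase step of the pipeline above can be sketched in a few lines. Sweep and Prune sorts the projections of the bounding volumes onto an axis and emits only the pairs whose intervals overlap; those survivors are then handed to the exact Separating Axis Test. The intervals below are hypothetical, and a full implementation runs the sweep on all three axes and intersects the results:

```python
def sweep_and_prune(intervals):
    """Broad-phase filter. 'intervals' holds (id, min, max) projections of
    bounding volumes onto one axis; returns the candidate pairs whose
    intervals overlap, i.e. the only pairs the exact Separating Axis Test
    still needs to check."""
    candidates, active = [], []
    for box in sorted(intervals, key=lambda b: b[1]):   # sweep by start
        # Prune active intervals that end before this one starts
        active = [a for a in active if a[2] >= box[1]]
        candidates.extend((a[0], box[0]) for a in active)
        active.append(box)
    return candidates

boxes = [("A", 0.0, 2.0), ("B", 1.5, 3.0), ("C", 5.0, 6.0), ("D", 5.5, 7.0)]
pairs = sweep_and_prune(boxes)   # [("A", "B"), ("C", "D")]
```

The sort-and-sweep structure is what maps well to hardware: each box is compared only against the short list of currently active intervals, not against every other box.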

  12. Implementing real-time robotic systems using CHIMERA II

    Science.gov (United States)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1990-01-01

    A description is given of the CHIMERA II programming environment and operating system, which was developed for implementing real-time robotic systems. Sensor-based robotic systems contain both general- and special-purpose hardware, and thus the development of applications tends to be a time-consuming task. The CHIMERA II environment is designed to reduce the development time by providing a convenient software interface between the hardware and the user. CHIMERA II supports flexible hardware configurations which are based on one or more VME-backplanes. All communication across multiple processors is transparent to the user through an extensive set of interprocessor communication primitives. CHIMERA II also provides a high-performance real-time kernel which supports both deadline and highest-priority-first scheduling. The flexibility of CHIMERA II allows hierarchical models for robot control, such as NASREM, to be implemented with minimal programming time and effort.

  13. A Kinect-based real-time compressive tracking prototype system for amphibious spherical robots.

    Science.gov (United States)

    Pan, Shaowu; Shi, Liwei; Guo, Shuxiang

    2015-04-08

    A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance from its visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly resolved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.
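The role of the second-order motion model can be sketched with a fixed-gain (alpha-beta-gamma) tracker, a simplified stand-in for the Kalman filter used in the paper; the gains and measurements below are hypothetical:

```python
class AlphaBetaGamma:
    """Fixed-gain tracker with a second-order (constant-acceleration)
    motion model: a simplified stand-in for the Kalman filter that
    predicts the target state between frames."""
    def __init__(self, alpha, beta, gamma, dt, x=0.0, v=0.0, a=0.0):
        self.alpha, self.beta, self.gamma, self.dt = alpha, beta, gamma, dt
        self.x, self.v, self.a = x, v, a

    def predict(self):
        """Propagate the state one frame ahead; the predicted position
        is what centres the candidate-patch search for the CT tracker."""
        dt = self.dt
        self.x += self.v * dt + 0.5 * self.a * dt * dt
        self.v += self.a * dt
        return self.x

    def update(self, z):
        """Correct the state with the measured target position z."""
        r = z - self.x                        # innovation (residual)
        self.x += self.alpha * r
        self.v += self.beta * r / self.dt
        self.a += 2.0 * self.gamma * r / (self.dt * self.dt)

trk = AlphaBetaGamma(alpha=0.5, beta=0.4, gamma=0.1, dt=1.0, v=1.0)
trk.predict()      # x: 0 + 1*1 = 1.0
trk.update(2.0)    # innovation 1.0 -> x=1.5, v=1.4, a=0.2
```

A full Kalman filter additionally propagates a covariance and computes these gains optimally from the noise statistics; the prediction/correction structure is the same.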

  14. A Kinect-Based Real-Time Compressive Tracking Prototype System for Amphibious Spherical Robots

    Directory of Open Access Journals (Sweden)

    Shaowu Pan

    2015-04-01

    Full Text Available A visual tracking system is essential as a basis for visual servoing, autonomous navigation, path planning, robot-human interaction and other robotic functions. To execute various tasks in diverse and ever-changing environments, a mobile robot requires high levels of robustness, precision, environmental adaptability and real-time performance from its visual tracking system. In keeping with the application characteristics of our amphibious spherical robot, which was proposed for flexible and economical underwater exploration in 2012, an improved RGB-D visual tracking algorithm is proposed and implemented. Given the limited power source and computational capabilities of mobile robots, compressive tracking (CT), an effective and efficient algorithm proposed in 2012, was selected as the basis of the proposed algorithm to process colour images. A Kalman filter with a second-order motion model was implemented to predict the state of the target and select candidate patches or samples for the CT tracker. In addition, a variance ratio features shift (VR-V) tracker with a Kalman estimation mechanism was used to process depth images. Using a feedback strategy, the depth tracking results were used to assist the CT tracker in updating classifier parameters at an adaptive rate. In this way, most of the deficiencies of CT, including drift and poor robustness to occlusion and high-speed target motion, were partly resolved. To evaluate the proposed algorithm, a Microsoft Kinect sensor, which combines colour and infrared depth cameras, was adopted for use in a prototype of the robotic tracking system. The experimental results with various image sequences demonstrated the effectiveness, robustness and real-time performance of the tracking system.

  15. Real-time registration of 3D to 2D ultrasound images for image-guided prostate biopsy.

    Science.gov (United States)

    Gillies, Derek J; Gardi, Lori; De Silva, Tharindu; Zhao, Shuang-Ren; Fenster, Aaron

    2017-09-01

    During image-guided prostate biopsy, needles are targeted at tissues suspicious of cancer to obtain specimens for histological examination. Unfortunately, patient motion causes targeting errors when using an MR-transrectal ultrasound (TRUS) fusion approach to augment the conventional biopsy procedure. This study aims to develop an automatic motion correction algorithm approaching the frame rate of an ultrasound system to be used in fusion-based prostate biopsy systems. Two modes of operation have been investigated for the clinical implementation of the algorithm: motion compensation using a single user-initiated correction performed prior to biopsy, and real-time continuous motion compensation performed automatically as a background process. Retrospective 2D and 3D TRUS patient images acquired prior to biopsy gun firing were registered using an intensity-based algorithm utilizing normalized cross-correlation and Powell's method for optimization. 2D and 3D images were downsampled and cropped to estimate the optimal amount of image information that would perform registrations quickly and accurately. The optimal search order during optimization was also analyzed to avoid local optima in the search space. Error in the algorithm was computed using target registration errors (TREs) from manually identified homologous fiducials in a clinical patient dataset. The algorithm was evaluated for real-time performance in the two modes of clinical implementation, user-initiated and continuous motion compensation, on a tissue-mimicking prostate phantom. After implementation in a TRUS-guided system with an image downsampling factor of 4, the proposed approach resulted in a mean ± std TRE and computation time of 1.6 ± 0.6 mm and 57 ± 20 ms, respectively. The user-initiated mode performed registrations for in-plane, out-of-plane, and roll motions with computation times of 108 ± 38 ms, 60 ± 23 ms, and 89 ± 27 ms, respectively, and corresponding
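The core of the registration step, normalized cross-correlation maximized over a rigid in-plane shift, can be sketched as follows. The image pattern is synthetic, and an exhaustive grid search stands in for Powell's method so the sketch stays dependency-free:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches (lists of
    rows); 1.0 means identical up to brightness and contrast."""
    fa = [p for row in a for p in row]
    fb = [p for row in b for p in row]
    ma, mb = sum(fa) / len(fa), sum(fb) / len(fb)
    num = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    da = math.sqrt(sum((x - ma) ** 2 for x in fa))
    db = math.sqrt(sum((y - mb) ** 2 for y in fb))
    return num / (da * db)

def crop(img, x, y, w, h):
    return [row[x:x + w] for row in img[y:y + h]]

def best_shift(fixed, moving, w, h, max_shift):
    """Exhaustive in-plane translation search maximizing NCC (a grid
    search stand-in for the Powell optimization in the paper)."""
    ref = crop(fixed, max_shift, max_shift, w, h)
    best = (0, 0, -2.0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            s = ncc(ref, crop(moving, max_shift + dx, max_shift + dy, w, h))
            if s > best[2]:
                best = (dx, dy, s)
    return best

# Synthetic 12x12 image; 'moving' is 'fixed' shifted by (1, 2) pixels
fixed  = [[(3 * x + 5 * y) % 17 for x in range(12)] for y in range(12)]
moving = [[fixed[y - 2][x - 1] if y >= 2 and x >= 1 else 0
           for x in range(12)] for y in range(12)]
dx, dy, score = best_shift(fixed, moving, w=5, h=5, max_shift=3)
```

Powell's method reaches the same optimum by line searches along conjugate directions instead of evaluating every grid point, which is what makes the sub-100 ms timings above attainable.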

  16. Faster-than-real-time robot simulation for plan development and robot safety

    International Nuclear Information System (INIS)

    Crane, C.D. III; Dalton, R.; Ogles, J.; Tulenko, J.S.; Zhou, X.

    1990-01-01

    The University of Florida, in cooperation with the Universities of Texas, Tennessee, and Michigan and Oak Ridge National Laboratory (ORNL), is developing an advanced robotic system for the US Department of Energy under the University Program for Robotics for Advanced Reactors. As part of this program, the University of Florida has been pursuing the development of a faster-than-real-time robotic simulation program for planning and control of mobile robotic operations to ensure the efficient and safe operation of mobile robots in nuclear power plants and other hazardous environments

  17. A multimodal interface for real-time soldier-robot teaming

    Science.gov (United States)

    Barber, Daniel J.; Howard, Thomas M.; Walter, Matthew R.

    2016-05-01

    Recent research and advances in robotics have led to the development of novel platforms leveraging new sensing capabilities for semantic navigation. As these systems become increasingly robust, they support highly complex commands beyond direct teleoperation and waypoint finding, facilitating a transition away from robots as tools to robots as teammates. Supporting future Soldier-Robot teaming requires communication capabilities on par with those of human-human teams for successful integration of robots. Therefore, as robots increase in functionality, it is equally important that the interface between the Soldier and robot advances as well. Multimodal communication (MMC) enables human-robot teaming through redundancy and levels of communication more robust than single-mode interaction. Commercial-off-the-shelf (COTS) technologies released in recent years for smart-phones and gaming provide tools for the creation of portable interfaces incorporating MMC through the use of speech, gestures, and visual displays. However, for multimodal interfaces to be successfully used in the military domain, they must be able to classify speech and gestures and process natural language in real time with high accuracy. For the present study, a prototype multimodal interface supporting real-time interactions with an autonomous robot was developed. This device integrated COTS Automated Speech Recognition (ASR), a custom gesture recognition glove, and natural language understanding on a tablet. This paper presents performance results (e.g. response times, accuracy) of the integrated device when commanding an autonomous robot to perform reconnaissance and surveillance activities in an unknown outdoor environment.

  18. Fully automated MRI-guided robotics for prostate brachytherapy

    International Nuclear Information System (INIS)

    Stoianovici, D.; Vigaru, B.; Petrisor, D.; Muntener, M.; Patriciu, A.; Song, D.

    2008-01-01

    The uncertainties encountered in the deployment of brachytherapy seeds are related to the commonly used ultrasound imager and the basic instrumentation used for the implant. An alternative solution is under development in which a fully automated robot is used to place the seeds according to the dosimetry plan under direct MRI guidance. Incorporation of MRI guidance creates potential for physiological and molecular image-guided therapies. Moreover, MRI-guided brachytherapy also enables re-estimating dosimetry during the procedure, because with MRI the seeds already implanted can be localised. An MRI-compatible robot (MrBot) was developed. The robot is designed for transperineal percutaneous prostate interventions and customised for fully automated MRI-guided brachytherapy. With different end-effectors, the robot applies to other image-guided interventions of the prostate. The robot is constructed of non-magnetic and dielectric materials and is electricity-free, using pneumatic actuation and optic sensing. A new motor (PneuStep) was purposely developed to set this robot in motion. The robot fits alongside the patient in closed-bore MRI scanners. It is able to stay fully operational during MR imaging without deteriorating the quality of the scan. In vitro, cadaver, and animal tests showed millimetre needle-targeting accuracy and very precise seed placement. The robot was tested without any interference up to 7 T. The robot is the first fully automated robot to function in MRI scanners. Its first application is MRI-guided seed brachytherapy. It is capable of automated, highly accurate needle placement. Extensive testing is in progress prior to clinical trials. Preliminary results show that the robot may become a useful image-guided intervention instrument. (author)

  19. Image-guided radiotherapy in near real time with intensity-modulated radiotherapy megavoltage treatment beam imaging.

    Science.gov (United States)

    Mao, Weihua; Hsu, Annie; Riaz, Nadeem; Lee, Louis; Wiersma, Rodney; Luxton, Gary; King, Christopher; Xing, Lei; Solberg, Timothy

    2009-10-01

    To utilize image-guided radiotherapy (IGRT) in near real time by obtaining and evaluating the online positions of implanted fiducials from continuous electronic portal imaging device (EPID) imaging of prostate intensity-modulated radiotherapy (IMRT) delivery. Upon initial setup using two orthogonal images, the three-dimensional (3D) positions of all implanted fiducial markers are obtained, and their expected two-dimensional (2D) locations in the beam's-eye-view (BEV) projection are calculated for each treatment field. During IMRT beam delivery, EPID images of the megavoltage treatment beam are acquired in cine mode and subsequently analyzed to locate 2D locations of fiducials in the BEV. Simultaneously, 3D positions are estimated according to the current EPID image, information from the setup portal images, and images acquired at other gantry angles (the completed treatment fields). The measured 2D and 3D positions of each fiducial are compared with their expected 2D and 3D setup positions, respectively. Any displacements larger than a predefined tolerance may cause the treatment system to suspend the beam delivery and direct the therapists to reposition the patient. Phantom studies indicate that the accuracy of 2D BEV and 3D tracking are better than 1 mm and 1.4 mm, respectively. A total of 7330 images from prostate treatments were acquired and analyzed, showing a maximum 2D displacement of 6.7 mm and a maximum 3D displacement of 6.9 mm over 34 fractions. This EPID-based, real-time IGRT method can be implemented on any external beam machine with portal imaging capabilities without purchasing any additional equipment, and there is no extra dose delivered to the patient.
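Computing the expected 2D BEV location of a fiducial from its 3D position, the first step described above, is a rotation into the gantry frame followed by a perspective projection. The sketch below assumes a generic linac geometry (SAD 1000 mm, SID 1500 mm) rather than the specific machine in the study:

```python
import math

def bev_projection(p, gantry_deg, sad=1000.0, sid=1500.0):
    """Project a 3D fiducial position (mm, isocentre at the origin) into
    the 2D beam's-eye-view at a given gantry angle. Generic geometry:
    source-axis distance 'sad', source-imager distance 'sid'."""
    x, y, z = p                    # IEC-style: x lateral, y longitudinal, z up
    t = math.radians(gantry_deg)
    # Rotate about the longitudinal (y) axis into the gantry frame;
    # at 0 degrees the source sits above the isocentre on +z.
    u = x * math.cos(t) - z * math.sin(t)    # BEV lateral coordinate
    d = x * math.sin(t) + z * math.cos(t)    # depth towards the source
    mag = sid / (sad - d)                    # perspective magnification
    return (u * mag, y * mag)

# A fiducial 10 mm lateral of the isocentre, seen from gantry 0:
# zero depth offset, so it magnifies by SID/SAD = 1.5
u, v = bev_projection((10.0, 0.0, 0.0), gantry_deg=0.0)   # -> (15.0, 0.0)
```

Because the same 3D point lands at different BEV positions at different gantry angles, the setup images and the cine EPID frames from completed fields can be combined into the 3D position estimate the paper describes.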

  20. Image-Guided Surgical Robotic System for Percutaneous Reduction of Joint Fractures.

    Science.gov (United States)

    Dagnino, Giulio; Georgilas, Ioannis; Morad, Samir; Gibbons, Peter; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja

    2017-11-01

    Complex joint fractures often require an open surgical procedure, which is associated with extensive soft tissue damage and longer hospitalization and rehabilitation times. Percutaneous techniques can potentially mitigate these risks, but their application to joint fractures is limited by the current sub-optimal 2D intra-operative imaging (fluoroscopy) and by the high forces involved in fragment manipulation (due to the presence of soft tissue, e.g., muscles), which might result in fracture malreduction. Integration of robotic assistance and 3D image guidance can potentially overcome these issues. The authors propose an image-guided surgical robotic system for the percutaneous treatment of knee joint fractures: the robot-assisted fracture surgery (RAFS) system. It allows simultaneous manipulation of two bone fragments, a safer robot-bone fixation system, and a traction-performing robotic manipulator. This system has led to a novel clinical workflow and has been tested both in the laboratory and in clinically relevant cadaveric trials. The RAFS system was tested on 9 cadaver specimens and was able to reduce 7 out of 9 distal femur fractures (T- and Y-shape 33-C1) with acceptable accuracy (≈1 mm, ≈5°), demonstrating its applicability to fixing knee joint fractures. This study paves the way for novel technologies for the percutaneous treatment of complex fractures, including hip, ankle, and shoulder, and thus represents a step toward minimally invasive fracture surgery.

  1. Real-time stereo generation for surgical vision during minimal invasive robotic surgery

    Science.gov (United States)

    Laddi, Amit; Bhardwaj, Vijay; Mahapatra, Prasant; Pankaj, Dinesh; Kumar, Amod

    2016-03-01

    This paper proposes a framework for 3D surgical vision in minimally invasive robotic surgery. It presents an approach for generating a three-dimensional view of live in-vivo surgical procedures from two images captured by a very small, full-resolution camera sensor rig. A pre-processing scheme is employed to enhance image quality and equalize the color profiles of the two images. Polarized projection of the two interlaced images gives a smooth and strain-free three-dimensional view. The algorithm runs in real time at full HD resolution.
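
The row-interleaving step for a passive polarized 3D display can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation, and the simple per-channel mean matching stands in for their color-profile equalization:

```python
import numpy as np

def equalize_color(left, right):
    """Shift the right image's per-channel mean to match the left image,
    a crude stand-in for the color-profile equalization step."""
    right = right.astype(np.float64)
    for c in range(right.shape[2]):
        right[..., c] += left[..., c].mean() - right[..., c].mean()
    return np.clip(right, 0, 255).astype(np.uint8)

def interlace(left, right):
    """Row-interleave the rectified stereo pair: even rows from the left
    image, odd rows from the right, as consumed by a polarized display."""
    out = left.copy()
    out[1::2, :] = right[1::2, :]
    return out
```

At full HD this is a pure memory-copy operation, which is why such interlacing can keep up with real-time video.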

  2. Image-Guided Robotic Stereotactic Body Radiation Therapy for Liver Metastases: Is There a Dose Response Relationship?

    International Nuclear Information System (INIS)

    Vautravers-Dewas, Claire; Dewas, Sylvain; Bonodeau, Francois; Adenis, Antoine; Lacornerie, Thomas; Penel, Nicolas; Lartigau, Eric; Mirabel, Xavier

    2011-01-01

    Purpose: To evaluate the outcome, tolerance, and toxicity of stereotactic body radiotherapy, using image-guided robotic radiation delivery, for the treatment of patients with unresectable liver metastases. Methods and Materials: Patients were treated with real-time respiratory tracking between July 2007 and April 2009. Their records were retrospectively reviewed. Metastases from colorectal carcinoma and other primaries were not necessarily confined to the liver. Toxicity was evaluated using National Cancer Institute Common Terminology Criteria for Adverse Events version 3.0. Results: Forty-two patients with 62 metastases were treated at two dose levels: 40 Gy in four fractions (23) and 45 Gy in three fractions (13). Median follow-up was 14.3 months (range, 3-23 months). Actuarial local control at 1 and 2 years was 90% and 86%, respectively. At last follow-up, 41 (66%) complete responses and eight (13%) partial responses were observed. Five lesions were stable. Nine lesions (13%) progressed locally. Overall survival was 94% at 1 year and 48% at 2 years. The most common toxicity was Grade 1 or 2 nausea. One patient experienced Grade 3 epidermitis. The dose level did not significantly affect outcome, toxicity, or survival. Conclusion: Image-guided robotic stereotactic body radiation therapy is feasible, safe, and effective, with encouraging local control. It provides a strong alternative for patients who cannot undergo surgery.

  3. Real Time Mapping and Dynamic Navigation for Mobile Robots

    Directory of Open Access Journals (Sweden)

    Maki K. Habib

    2008-11-01

    Full Text Available This paper discusses the importance, complexity, and challenges of mapping a mobile robot's unknown and dynamic environment, as well as the role of sensors and the problems inherent in map building. These issues remain largely open research problems in developing dynamic navigation systems for mobile robots. The paper presents the state of the art in map building and localization for mobile robots navigating within unknown environments, and then introduces a solution to the complex problem of autonomous map building and maintenance, with a focus on developing an incremental grid-based mapping technique suitable for real-time obstacle detection and avoidance. In this case, the navigation of mobile robots can be treated as a problem of tracking geometric features that occur naturally in the robot's environment. The robot maps its environment incrementally using the concept of occupancy grids and the fusion of multiple ultrasonic sensor readings while wandering in it and staying away from all obstacles. To ensure real-time operation with limited resources, as well as to promote extensibility, the mapping and obstacle avoidance modules are deployed in a parallel and distributed framework. Simulation-based experiments have been conducted to show the validity of the developed mapping and obstacle avoidance approach.
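
The incremental occupancy-grid update from an ultrasonic range reading can be sketched as a log-odds update along the sonar beam. The constants and the sparse-dictionary grid below are illustrative assumptions, not the paper's values:

```python
import math

def update_grid(grid, pose, bearing, rng, max_range=4.0, cell=0.1,
                l_free=-0.4, l_occ=0.85):
    """Log-odds occupancy update for one ultrasonic reading: cells along
    the beam become more likely free, and the cell at the measured range
    more likely occupied. The grid is a sparse dict keyed by cell index."""
    x, y, theta = pose
    heading = theta + bearing  # beam direction in the world frame
    for k in range(int(rng / cell)):
        d = k * cell
        key = (int((x + d * math.cos(heading)) / cell),
               int((y + d * math.sin(heading)) / cell))
        grid[key] = grid.get(key, 0.0) + l_free
    if rng < max_range:  # a return short of max range indicates an obstacle
        key = (int((x + rng * math.cos(heading)) / cell),
               int((y + rng * math.sin(heading)) / cell))
        grid[key] = grid.get(key, 0.0) + l_occ
    return grid
```

Repeated readings accumulate evidence incrementally, which is what makes the representation suitable for real-time obstacle detection.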

  4. Image-Guided Radiotherapy in Near Real Time With Intensity-Modulated Radiotherapy Megavoltage Treatment Beam Imaging

    International Nuclear Information System (INIS)

    Mao Weihua; Hsu, Annie; Riaz, Nadeem; Lee, Louis; Wiersma, Rodney; Luxton, Gary; King, Christopher; Xing Lei; Solberg, Timothy

    2009-01-01

    Purpose: To utilize image-guided radiotherapy (IGRT) in near real time by obtaining and evaluating the online positions of implanted fiducials from continuous electronic portal imaging device (EPID) imaging of prostate intensity-modulated radiotherapy (IMRT) delivery. Methods and Materials: Upon initial setup using two orthogonal images, the three-dimensional (3D) positions of all implanted fiducial markers are obtained, and their expected two-dimensional (2D) locations in the beam's-eye-view (BEV) projection are calculated for each treatment field. During IMRT beam delivery, EPID images of the megavoltage treatment beam are acquired in cine mode and subsequently analyzed to locate 2D locations of fiducials in the BEV. Simultaneously, 3D positions are estimated according to the current EPID image, information from the setup portal images, and images acquired at other gantry angles (the completed treatment fields). The measured 2D and 3D positions of each fiducial are compared with their expected 2D and 3D setup positions, respectively. Any displacements larger than a predefined tolerance may cause the treatment system to suspend the beam delivery and direct the therapists to reposition the patient. Results: Phantom studies indicate that the accuracy of 2D BEV and 3D tracking are better than 1 mm and 1.4 mm, respectively. A total of 7330 images from prostate treatments were acquired and analyzed, showing a maximum 2D displacement of 6.7 mm and a maximum 3D displacement of 6.9 mm over 34 fractions. Conclusions: This EPID-based, real-time IGRT method can be implemented on any external beam machine with portal imaging capabilities without purchasing any additional equipment, and there is no extra dose delivered to the patient.
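
The gating logic described (suspend the beam when any fiducial displacement exceeds a predefined tolerance) reduces to a simple check. The 3 mm tolerance below is an assumed placeholder, not the study's clinical value:

```python
import math

def check_fiducials(expected, measured, tol=3.0):
    """Compare measured BEV fiducial positions (mm) against their expected
    setup positions; return the largest displacement and whether beam
    delivery should be suspended for patient repositioning."""
    worst = max(math.dist(e, m) for e, m in zip(expected, measured))
    return worst, worst > tol
```

The same comparison applies to the estimated 3D positions, with a tolerance chosen to match the 1.4 mm tracking accuracy reported for 3D.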

  5. Magnetic particle imaging: advancements and perspectives for real-time in vivo monitoring and image-guided therapy

    Science.gov (United States)

    Pablico-Lansigan, Michele H.; Situ, Shu F.; Samia, Anna Cristina S.

    2013-05-01

    Magnetic particle imaging (MPI) is an emerging biomedical imaging technology that allows the direct quantitative mapping of the spatial distribution of superparamagnetic iron oxide nanoparticles. MPI's increased sensitivity and short image acquisition times foster the creation of tomographic images with high temporal and spatial resolution. The contrast and sensitivity of MPI is envisioned to transcend those of other medical imaging modalities presently used, such as magnetic resonance imaging (MRI), X-ray scans, ultrasound, computed tomography (CT), positron emission tomography (PET) and single photon emission computed tomography (SPECT). In this review, we present an overview of the recent advances in the rapidly developing field of MPI. We begin with a basic introduction of the fundamentals of MPI, followed by some highlights over the past decade of the evolution of strategies and approaches used to improve this new imaging technique. We also examine the optimization of iron oxide nanoparticle tracers used for imaging, underscoring the importance of size homogeneity and surface engineering. Finally, we present some future research directions for MPI, emphasizing the novel and exciting opportunities that it offers as an important tool for real-time in vivo monitoring. All these opportunities and capabilities that MPI presents are now seen as potential breakthrough innovations in timely disease diagnosis, implant monitoring, and image-guided therapeutics.

  6. Real-time continuous image-guided surgery: Preclinical investigation in glossectomy.

    Science.gov (United States)

    Tabanfar, Reza; Qiu, Jimmy; Chan, Harley; Aflatouni, Niousha; Weersink, Robert; Hasan, Wael; Irish, Jonathan C

    2017-10-01

    To develop, validate, and study the efficacy of an intraoperative real-time continuous image-guided surgery (RTC-IGS) system for glossectomy. Prospective study. We created a RTC-IGS system and surgical simulator for glossectomy, enabling definition of a surgical target preoperatively, real-time cautery tracking, and display of a surgical plan intraoperatively. System performance was evaluated by a group of otolaryngology residents, fellows, medical students, and staff under a reproducible setting by using realistic tongue phantoms. Evaluators were grouped into a senior and a junior group based on surgical experience, and guided and unguided tumor resections were performed. National Aeronautics and Space Administration Task Load Index (NASA-TLX) scores and a Likert scale were used to measure workloads and impressions of the system, respectively. Efficacy was studied by comparing surgical accuracy, time, collateral damage, and workload between RTC-IGS and non-navigated resections. The senior group performed more accurately (80.9% ± 3.7% vs. 75.2% ± 5.5%, P = .28), required less time (5.0 ± 1.3 minutes vs. 7.3 ± 1.2 minutes, P = .17), and experienced lower workload (43 ± 2.0 vs. 64.4 ± 1.3 NASA-TLX score, P = .08), suggesting a trend of construct validity. Impressions were favorable, with participants reporting the system is a valuable practice tool (4.0/5 ± 0.3) and increases confidence (3.9/5 ± 0.4). Use of RTC-IGS improved both groups' accuracy, with the junior group improving from 64.4% ± 5.4% to 75.2% ± 5.5% (P = .01) and the senior group improving from 76.1% ± 4.5% to 80.9% ± 3.7% (P = .16). We created an RTC-IGS system and surgical simulator and demonstrated a trend of construct validity. Our navigated simulator allows junior trainees to practice glossectomies outside the operating room. In all evaluators, navigation assistance resulted in increased surgical accuracy. NA Laryngoscope, 127:E347-E353, 2017. © 2017 The American Laryngological

  7. Real-Time Augmented Reality for Robotic-Assisted Surgery

    DEFF Research Database (Denmark)

    Jørgensen, Martin Kibsgaard; Kraus, Martin

    2015-01-01

    Training in robotic-assisted minimally invasive surgery is crucial, but training with actual surgery robots is relatively expensive. Therefore, improving the efficiency of this training is of great interest in robotic surgical education. One of the current limitations of this training is the limited visual communication between the instructor and the trainee: as the trainee's view is limited to that of the surgery robot's camera, even a simple task such as pointing is difficult. We present a compact system to overlay the video streams of the da Vinci surgery systems with interactive three-dimensional computer graphics in real time. Our system makes it possible to easily deploy new user interfaces for robotic-assisted surgery training. The system has been positively evaluated by two experienced instructors in robot-assisted surgery.

  8. Advanced real-time multi-display educational system (ARMES): An innovative real-time audiovisual mentoring tool for complex robotic surgery.

    Science.gov (United States)

    Lee, Joong Ho; Tanaka, Eiji; Woo, Yanghee; Ali, Güner; Son, Taeil; Kim, Hyoung-Il; Hyung, Woo Jin

    2017-12-01

    Recent scientific and technologic advances have profoundly affected the training of surgeons worldwide. We describe a novel intraoperative real-time training module, the Advanced Robotic Multi-display Educational System (ARMES). We created a real-time training module that provides standardized, step-by-step guidance for robotic distal subtotal gastrectomy with D2 lymphadenectomy procedures. Short video clips of 20 key steps in the standardized procedure for robotic gastrectomy were created and integrated with TilePro™ software for delivery on da Vinci Surgical Systems (Intuitive Surgical, Sunnyvale, CA). We successfully performed a robotic distal subtotal gastrectomy with D2 lymphadenectomy for a patient with gastric cancer employing this new teaching method, without any transfer errors or system failures. Using this technique, the total operative time was 197 min, blood loss was 50 mL, and there were no intra- or post-operative complications. Our innovative real-time mentoring module, ARMES, enables standardized, systematic guidance during surgical procedures. © 2017 Wiley Periodicals, Inc.

  9. Real-Time Analysis of Beats in Music for Entertainment Robots

    OpenAIRE

    Yue-Der Lin; Ting-Tsao Wu; Yu-Ren Chen; Yen-Ting Lin; Wen-Hsiu Chen; Shih-Fan Wang; Jinghom Chakhap

    2012-01-01

    The dancing actions for entertainment robots are usually designed in advance and saved in a database according to the beats and rhythm of the given music. This research is devoted to developing a real-time algorithm that can detect the primary information of the music needed for the actions of entertainment robots. The computation of the proposed algorithm is very efficient and can satisfy the requirement of real-time processing by a digital signal controller. The digitized music signal is fi...
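
A minimal energy-based onset detector illustrates the kind of real-time beat extraction described above; the frame size, hop, and threshold factor are illustrative choices, not the authors' algorithm:

```python
def detect_beats(samples, rate, frame=1024, hop=512, k=1.5):
    """Return times (s) of frames whose short-time energy exceeds k times
    the running mean energy of all preceding frames, a minimal onset
    detector standing in for the beat analysis described above."""
    beats, energies = [], []
    for start in range(0, len(samples) - frame, hop):
        e = sum(s * s for s in samples[start:start + frame])
        if energies and e > k * (sum(energies) / len(energies)):
            beats.append(start / rate)
        energies.append(e)
    return beats
```

Because each frame needs only a multiply-accumulate pass, this style of detector fits comfortably within the budget of a digital signal controller.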

  10. Telerobotic system concept for real-time soft-tissue imaging during radiotherapy beam delivery.

    Science.gov (United States)

    Schlosser, Jeffrey; Salisbury, Kenneth; Hristov, Dimitre

    2010-12-01

    The curative potential of external beam radiation therapy is critically dependent on having the ability to accurately aim radiation beams at intended targets while avoiding surrounding healthy tissues. However, existing technologies are incapable of real-time, volumetric, soft-tissue imaging during radiation beam delivery, when accurate target tracking is most critical. The authors address this challenge in the development and evaluation of a novel, minimally interfering, telerobotic ultrasound (U.S.) imaging system that can be integrated with existing medical linear accelerators (LINACs) for therapy guidance. A customized human-safe robotic manipulator was designed and built to control the pressure and pitch of an abdominal U.S. transducer while avoiding LINAC gantry collisions. A haptic device was integrated to remotely control the robotic manipulator motion and U.S. image acquisition outside the LINAC room. The ability of the system to continuously maintain high quality prostate images was evaluated in volunteers over extended time periods. Treatment feasibility was assessed by comparing a clinically deployed prostate treatment plan to an alternative plan in which beam directions were restricted to sectors that did not interfere with the transabdominal U.S. transducer. To demonstrate imaging capability concurrent with delivery, robot performance and U.S. target tracking in a phantom were tested with a 15 MV radiation beam active. Remote image acquisition and maintenance of image quality with the haptic interface was successfully demonstrated over 10 min periods in representative treatment setups of volunteers. Furthermore, the robot's ability to maintain a constant probe force and desired pitch angle was unaffected by the LINAC beam. For a representative prostate patient, the dose-volume histogram (DVH) for a plan with restricted sectors remained virtually identical to the DVH of a clinically deployed plan. With reduced margins, as would be enabled by real-time

  11. Telerobotic system concept for real-time soft-tissue imaging during radiotherapy beam delivery

    International Nuclear Information System (INIS)

    Schlosser, Jeffrey; Salisbury, Kenneth; Hristov, Dimitre

    2010-01-01

    Purpose: The curative potential of external beam radiation therapy is critically dependent on having the ability to accurately aim radiation beams at intended targets while avoiding surrounding healthy tissues. However, existing technologies are incapable of real-time, volumetric, soft-tissue imaging during radiation beam delivery, when accurate target tracking is most critical. The authors address this challenge in the development and evaluation of a novel, minimally interfering, telerobotic ultrasound (U.S.) imaging system that can be integrated with existing medical linear accelerators (LINACs) for therapy guidance. Methods: A customized human-safe robotic manipulator was designed and built to control the pressure and pitch of an abdominal U.S. transducer while avoiding LINAC gantry collisions. A haptic device was integrated to remotely control the robotic manipulator motion and U.S. image acquisition outside the LINAC room. The ability of the system to continuously maintain high quality prostate images was evaluated in volunteers over extended time periods. Treatment feasibility was assessed by comparing a clinically deployed prostate treatment plan to an alternative plan in which beam directions were restricted to sectors that did not interfere with the transabdominal U.S. transducer. To demonstrate imaging capability concurrent with delivery, robot performance and U.S. target tracking in a phantom were tested with a 15 MV radiation beam active. Results: Remote image acquisition and maintenance of image quality with the haptic interface was successfully demonstrated over 10 min periods in representative treatment setups of volunteers. Furthermore, the robot's ability to maintain a constant probe force and desired pitch angle was unaffected by the LINAC beam. For a representative prostate patient, the dose-volume histogram (DVH) for a plan with restricted sectors remained virtually identical to the DVH of a clinically deployed plan. With reduced margins, as

  12. Design and real-time control of a robotic system for fracture manipulation.

    Science.gov (United States)

    Dagnino, G; Georgilas, I; Tarassoli, P; Atkins, R; Dogramadzi, S

    2015-08-01

    This paper presents the design, development, and control of a new robotic system for fracture manipulation. The objective is to improve the precision, ergonomics, and safety of the traditional surgical procedure to treat joint fractures. The achievements toward this direction are reported here and include the design, the real-time control architecture, and the evaluation of a new robotic manipulator system. The robotic manipulator is a 6-DOF parallel robot with struts developed as linear actuators. The high-level controller implements a host-target structure composed of a host computer (PC), a real-time controller, and an FPGA. A graphical user interface allows the surgeon to comfortably automate and monitor the robotic system. The real-time controller guarantees the determinism of the control algorithms, adding an extra level of safety to the robotic automation. The system's positioning accuracy and repeatability have been demonstrated, with a maximum positioning RMSE of 1.18 ± 1.14 mm (translations) and 1.85 ± 1.54° (rotations).
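
The reported accuracy metric (RMSE over translations and rotations) can be computed as below. Treating the rotation error as the largest per-axis angular deviation is a simplifying assumption about how the angular error was aggregated:

```python
import math

def pose_errors(commanded, reached):
    """RMSE of translation (mm) and rotation (deg) errors between commanded
    and reached 6-DOF poses (x, y, z, roll, pitch, yaw), the kind of metric
    used to report the manipulator's positioning accuracy."""
    def rms(xs):
        return math.sqrt(sum(x * x for x in xs) / len(xs))
    t_err = [math.dist(c[:3], r[:3]) for c, r in zip(commanded, reached)]
    r_err = [max(abs(ca - ra) for ca, ra in zip(c[3:], r[3:]))
             for c, r in zip(commanded, reached)]
    return rms(t_err), rms(r_err)
```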

  13. Localized irradiation of mouse legs using an image-guided robotic linear accelerator.

    Science.gov (United States)

    Kufeld, Markus; Escobar, Helena; Marg, Andreas; Pasemann, Diana; Budach, Volker; Spuler, Simone

    2017-04-01

    Small animal models are useful for evaluating the potential of human satellite cells in muscle regeneration. To suppress the inherent regeneration ability of the tibialis muscle of mice before transplantation of human muscle fibers, localized irradiation of the mouse leg must be conducted. We analyzed the feasibility of an image-guided robotic irradiation procedure, a routine treatment method in radiation oncology, for the focal irradiation of mouse legs. After conducting a planning computed tomography (CT) scan of one mouse in its customized mold, a three-dimensional dose plan was calculated using a dedicated planning workstation. 18 Gy were applied to the right anterior tibial muscle of 4 healthy and 12 immunodeficient mice under general anesthesia using an image-guided robotic linear accelerator (LINAC). The mice were fixed in a customized acrylic mold with attached fiducial markers for image-guided tracking. All 16 mice could be irradiated as planned without signs of acute radiation toxicity or anesthesiological side effects. The animals survived until sacrifice after 8, 21, and 49 days as planned. The procedure was straightforward, and the irradiation process took 5 minutes to apply the dose of 18 Gy. Localized irradiation of mouse legs using a robotic LINAC could be conducted as planned. It is a feasible procedure without recognizable side effects. Image guidance offers precise dose delivery and spares adjacent body parts and tissues.

  14. MicROS-drt: supporting real-time and scalable data distribution in distributed robotic systems.

    Science.gov (United States)

    Ding, Bo; Wang, Huaimin; Fan, Zedong; Zhang, Pengfei; Liu, Hui

    A primary requirement in distributed robotic software systems is the dissemination of data to all interested collaborative entities in a timely and scalable manner. However, providing such a service in a highly dynamic and resource-limited robotic environment is a challenging task, and existing robot software infrastructure has limitations in this respect. This paper presents a novel robot software infrastructure, micROS-drt, which supports real-time and scalable data distribution. The solution is based on a loosely coupled data publish-subscribe model with the ability to support various time-related constraints. To realize this model, a mature data distribution standard, the Data Distribution Service for real-time systems (DDS), is adopted as the foundation of the transport layer of this software infrastructure. By carefully adapting and encapsulating the capabilities of the underlying DDS middleware, micROS-drt can meet the requirement of real-time and scalable data distribution in distributed robotic systems. Evaluation results in terms of scalability, latency jitter, and transport priority, as well as experiments on real robots, validate the effectiveness of this work.
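
The loosely coupled publish-subscribe model with time-related constraints can be sketched with two of the QoS ideas DDS standardizes, transport priority and message lifespan. This toy queue is a stand-in for illustration, not the DDS API or micROS-drt itself:

```python
import heapq
import time

class Topic:
    """A minimal publish-subscribe topic with per-message priority and
    lifespan, imitating two DDS-style QoS policies."""
    def __init__(self):
        self._queue = []  # (negated priority, sequence, deadline, payload)
        self._seq = 0
        self.subscribers = []

    def publish(self, payload, priority=0, lifespan=1.0):
        """Queue a message; it expires `lifespan` seconds from now."""
        deadline = time.monotonic() + lifespan
        heapq.heappush(self._queue, (-priority, self._seq, deadline, payload))
        self._seq += 1

    def dispatch(self):
        """Deliver queued messages highest-priority first, dropping any
        whose lifespan has already expired."""
        delivered = []
        while self._queue:
            _, _, deadline, payload = heapq.heappop(self._queue)
            if time.monotonic() <= deadline:
                for callback in self.subscribers:
                    callback(payload)
                delivered.append(payload)
        return delivered
```

Dropping expired samples rather than delivering them late is what keeps a bounded-latency guarantee meaningful under load.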

  15. Feasibility of real-time magnetic resonance imaging-guided endomyocardial biopsies: An in-vitro study.

    Science.gov (United States)

    Lossnitzer, Dirk; Seitz, Sebastian A; Krautz, Birgit; Schnackenburg, Bernhard; André, Florian; Korosoglou, Grigorios; Katus, Hugo A; Steen, Henning

    2015-07-26

    To investigate if magnetic resonance (MR)-guided biopsy can improve the performance and safety of such procedures, a novel MR-compatible bioptome was evaluated in a series of in-vitro experiments in a 1.5T magnetic resonance imaging (MRI) system. The bioptome was inserted into explanted porcine and bovine hearts under real-time MR guidance employing a steady-state free precession sequence. The artifact produced by the metal element at the tip and the signal voids caused by the bioptome were visually tracked for navigation and allowed its constant and precise localization. Cardiac structural elements and the target regions for the biopsy were clearly visible. Our method allowed significantly better spatial visualization of the bioptome's tip compared with conventional X-ray guidance. The specific device design of the bioptome avoided induced currents and therefore subsequent heating. The novel MR-compatible bioptome provided superior cardiovascular magnetic resonance soft-tissue visualization for MR-guided myocardial biopsies. Not least, the use of MRI guidance for endomyocardial biopsies completely avoided radiation exposure for both patients and interventionalists. MRI-guided endomyocardial biopsies provide better navigation than conventional X-ray guidance and could therefore improve the specificity and reproducibility of cardiac biopsies in future studies.

  16. A Fully Actuated Robotic Assistant for MRI-Guided Prostate Biopsy and Brachytherapy

    Science.gov (United States)

    Li, Gang; Su, Hao; Shang, Weijian; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fischer, Gregory S.

    2014-01-01

    Intra-operative medical imaging enables incorporation of human experience and intelligence in a controlled, closed-loop fashion. Magnetic resonance imaging (MRI) is an ideal modality for surgical guidance of diagnostic and therapeutic procedures, with its ability to perform high-resolution, real-time, high soft-tissue-contrast imaging without ionizing radiation. However, most current image-guided approaches rely only on static pre-operative images, which cannot provide updated information during a surgical procedure. The high magnetic field, electrical interference, and limited access of closed-bore MRI pose great challenges to developing robotic systems that can operate inside a diagnostic high-field MRI while obtaining interactively updated MR images. To overcome these limitations, we are developing a piezoelectrically actuated robotic assistant for percutaneous prostate interventions under real-time MRI guidance. Utilizing a modular design, the system enables a coherent and straightforward workflow for various percutaneous interventions, including prostate biopsy sampling and brachytherapy seed placement, using various needle driver configurations. The unified workflow comprises: 1) system hardware and software initialization, 2) fiducial frame registration, 3) target selection and motion planning, 4) moving to the target and performing the intervention (e.g., taking a biopsy sample) under live imaging, and 5) visualization and verification. Phantom experiments of prostate biopsy and brachytherapy were executed under MRI guidance to evaluate the feasibility of the workflow. The robot successfully performed fully actuated biopsy sampling and delivery of simulated brachytherapy seeds under live MR imaging, as well as precise delivery of a prostate brachytherapy seed distribution with an RMS accuracy of 0.98 mm. PMID:25076821
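
Step 2 of the workflow, fiducial frame registration, is typically solved as a rigid point-set alignment. The SVD-based (Kabsch) solution below is the standard approach for matched fiducial points; the paper does not state which method the system uses, so this is an assumed illustration:

```python
import numpy as np

def register(points_robot, points_image):
    """Rigid (rotation + translation) registration of matched fiducial
    points via the Kabsch/SVD method; returns R, t such that
    image_point ≈ R @ robot_point + t."""
    A = np.asarray(points_robot, float)
    B = np.asarray(points_image, float)
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    return R, t
```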

  17. The CI-ROB project: real time monitoring for a robotic prostate curietherapy; Projet CI-ROB: surveillance en temps reel pour curietherapie robotisee de prostate

    Energy Technology Data Exchange (ETDEWEB)

    Liem, X.; Lartigau, E. [Centre Oscar-Lambret, 59 - Lille (France); Coelen, V.; Merzouki, R. [Polytech-Lille, 59 - Lille (France)

    2010-10-15

    The authors present a project, still at the prototyping stage with hardware and software under development, which aims at building a complete pipeline for robotic curietherapy. It comprises an articulated robotic arm with six degrees of freedom and is based on a self-regulating loop with real-time monitoring and control. An echographic probe acquires the prostate images in real time. An adaptive detection defines the prostate contour. This is performed in a virtual environment comprising the prostate phantom, the robot, and the intervention table. The target is defined in the virtual environment (image coordinates), and its coordinates are transmitted to the robot controller, which computes the robot movements using the inverse geometric model. Short communication
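
The hand-off from image coordinates to robot motion can be sketched in two pieces: a calibration transform from echographic image coordinates into the robot base frame, and an inverse geometric model, shown here for a planar 2-link arm rather than the project's 6-DOF arm. Both pieces and their parameters are illustrative assumptions:

```python
import math

def image_to_robot(u, v, depth, T):
    """Map a target picked in image coordinates (u, v, depth) into the
    robot base frame with a 4x4 homogeneous calibration transform T."""
    return tuple(T[i][0] * u + T[i][1] * v + T[i][2] * depth + T[i][3]
                 for i in range(3))

def ik_2link(x, y, l1, l2):
    """Closed-form inverse geometric model for a planar 2-link arm with
    link lengths l1, l2; returns the two joint angles (elbow-up)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    q2 = math.acos(max(-1.0, min(1.0, c2)))
    q1 = math.atan2(y, x) - math.atan2(l2 * math.sin(q2),
                                       l1 + l2 * math.cos(q2))
    return q1, q2
```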

  18. Real-time Non-linear Target Tracking Control of Wheeled Mobile Robots

    Institute of Scientific and Technical Information of China (English)

    YU Wenyong

    2006-01-01

    A control strategy for real-time target tracking for wheeled mobile robots is presented. Using a modified Kalman filter for environment perception, a novel tracking control law derived from Lyapunov stability theory is introduced. Tuning of linear velocity and angular velocity with mechanical constraints is applied. The proposed control system can simultaneously solve the target trajectory prediction, real-time tracking, and posture regulation problems of a wheeled mobile robot. Experimental results illustrate the effectiveness of the proposed tracking control laws.
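
A common Lyapunov-derived tracking law for a wheeled (unicycle) robot, with velocities clamped to mechanical constraints as the abstract describes, looks like the following. The gains and limits are illustrative, not the paper's values:

```python
import math

def tracking_control(err, v_ref, w_ref, kx=1.0, ky=8.0, kt=3.0,
                     v_max=1.0, w_max=2.0):
    """Lyapunov-based tracking law for a unicycle robot. `err` is the
    target pose error (xe, ye, theta_e) expressed in the robot frame;
    v_ref, w_ref are the reference linear/angular velocities."""
    xe, ye, te = err
    v = v_ref * math.cos(te) + kx * xe
    w = w_ref + v_ref * (ky * ye + kt * math.sin(te))
    clamp = lambda u, lim: max(-lim, min(lim, u))
    # enforce the robot's mechanical velocity limits
    return clamp(v, v_max), clamp(w, w_max)
```

With zero error the law reproduces the reference velocities, and the error dynamics can be shown to be stable via a standard Lyapunov candidate.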

  19. Using Sun’s Java Real-Time System to Manage Behavior-Based Mobile Robot Controllers

    Directory of Open Access Journals (Sweden)

    Andrew McKenzie

    2011-01-01

    Full Text Available Implementing a robot controller that can effectively manage limited resources in a deterministic, real-time manner is challenging. Behavior-based architectures that decompose autonomy into levels of intelligence are popular due to their robustness but do not provide real-time features that enforce timing constraints or support determinism. We propose an architecture and approach for using the real-time features of the Real-Time Specification for Java (RTSJ) in a behavior-based mobile robot controller to show how timing constraints affect performance. This is accomplished by extending a real-time aware architecture that explicitly enumerates timing requirements for each behavior; it is not enough to simply reduce latency. The usefulness of this approach is demonstrated via an implementation on Solaris 10 and the Sun Java Real-Time System (Java RTS). Experimental results are obtained using a K-Team Koala robot performing path following with four composite behaviors. Experiments were conducted using several task period sets in three cases: real-time threads with the real-time garbage collector, real-time threads with the non-real-time garbage collector, and non-real-time threads with the non-real-time garbage collector. Results show that even when latency and determinism are improved, the timing of each individual behavior significantly affects task performance.

  20. Closed-Loop Real-Time Imaging Enables Fully Automated Cell-Targeted Patch-Clamp Neural Recording In Vivo.

    Science.gov (United States)

    Suk, Ho-Jun; van Welie, Ingrid; Kodandaramaiah, Suhasa B; Allen, Brian; Forest, Craig R; Boyden, Edward S

    2017-08-30

    Targeted patch-clamp recording is a powerful method for characterizing visually identified cells in intact neural circuits, but it requires skill to perform. We previously developed an algorithm that automates "blind" patching in vivo, but full automation of visually guided, targeted in vivo patching has not been demonstrated, with currently available approaches requiring human intervention to compensate for cell movement as a patch pipette approaches a targeted neuron. Here we present a closed-loop real-time imaging strategy that automatically compensates for cell movement by tracking cell position and adjusting pipette motion while approaching a target. We demonstrate our system's ability to adaptively patch, under continuous two-photon imaging and real-time analysis, fluorophore-expressing neurons of multiple types in the living mouse cortex, without human intervention, with yields comparable to skilled human experimenters. Our "imagepatching" robot is easy to implement and will help enable scalable characterization of identified cell types in intact neural circuits. Copyright © 2017 Elsevier Inc. All rights reserved.
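
The closed-loop compensation idea (re-aim the pipette at the latest tracked cell centroid on every imaging cycle) can be sketched as a simple proportional step. The gain and stop radius are illustrative assumptions, not the authors' controller:

```python
def step_pipette(pipette, cell, gain=0.5, stop=1.0):
    """One closed-loop iteration: move the pipette a fraction `gain` of
    the way toward the latest tracked cell centroid. Returns the new
    position and whether the tip is within the stop radius (units
    arbitrary)."""
    d = [c - p for c, p in zip(cell, pipette)]
    dist = sum(x * x for x in d) ** 0.5
    if dist <= stop:
        return tuple(pipette), True
    new = tuple(p + gain * x for p, x in zip(pipette, d))
    return new, False
```

Because `cell` is re-measured from each new two-photon frame, the approach trajectory automatically follows a drifting target, which is the behavior that previously required human intervention.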

  1. Towards Real-Time Distributed Planning in Multi-Robot Systems

    KAUST Repository

    Abdelkader, Mohamed

    2018-04-01

    Recently, there has been an increasing interest in robotics related to multi-robot applications. Such systems can be involved in several tasks such as collaborative search and rescue, aerial transportation, surveillance, and monitoring, to name a few. There are two possible architectures for the autonomous control of multi-robot systems. In the centralized architecture, a master controller communicates with all the robots to collect information. It uses this information to make decisions for the entire system and then sends commands to each robot. In contrast, in the distributed architecture, each robot makes its own decision independent from a central authority. While distributed architecture is a more portable solution, it comes at the expense of extensive information exchange (communication). The extensive communication between robots can result in decision delays because of which distributed architecture is often impractical for systems with strict real-time constraints, e.g. when decisions have to be taken in the order of milliseconds. In this thesis, we propose a distributed framework that strikes a balance between limited communicated information and reasonable system-wide performance while running in real-time. We implement the proposed approach in a game setting of two competing teams of drones, defenders and attackers. Defending drones execute a proposed linear program algorithm (using only onboard computing modules) to obstruct attackers from infiltrating a defense zone while having minimal local message passing. Another main contribution is that we developed a realistic simulation environment as well as lab and outdoor hardware setups of customized drones for testing the system in realistic scenarios. Our software is completely open-source and fully integrated with the well-known Robot Operating System (ROS) in hopes to make our work easily reproducible and for rapid future improvements.
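
A defender-to-attacker assignment of the kind the defending drones solve can be illustrated with a brute-force minimum-cost matching. The thesis formulates this as a linear program, so this exhaustive version is only a small-scale stand-in:

```python
from itertools import permutations

def assign_defenders(dist):
    """Assign each defender to a distinct attacker minimizing the total
    intercept distance. `dist[d][a]` is the distance from defender d to
    attacker a; brute force over permutations (fine for a handful of
    drones, whereas an LP scales to larger teams)."""
    n = len(dist)
    best_cost, best = float("inf"), None
    for perm in permutations(range(n)):
        cost = sum(dist[d][perm[d]] for d in range(n))
        if cost < best_cost:
            best_cost, best = cost, perm
    return best, best_cost
```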

  2. A machine learning approach for real-time modelling of tissue deformation in image-guided neurosurgery.

    Science.gov (United States)

    Tonutti, Michele; Gras, Gauthier; Yang, Guang-Zhong

    2017-07-01

    Accurate reconstruction and visualisation of soft tissue deformation in real time is crucial in image-guided surgery, particularly in augmented reality (AR) applications. Current deformation models are characterised by a trade-off between accuracy and computational speed. We propose an approach to derive a patient-specific deformation model for brain pathologies by combining the results of pre-computed finite element method (FEM) simulations with machine learning algorithms. The models can be computed instantaneously and offer accuracy comparable to FEM models. A brain tumour is used as the subject of the deformation model. Load-driven FEM simulations are performed on a tetrahedral brain mesh afflicted by a tumour. Forces of varying magnitudes, positions, and inclination angles are applied to the brain's surface. Two machine learning algorithms, artificial neural networks (ANNs) and support vector regression (SVR), are employed to derive a model that can predict the resulting deformation for each node in the tumour's mesh. The tumour deformation can be predicted in real time given relevant information about the geometry of the anatomy and the load, all of which can be measured instantly during a surgical operation. The models can predict the position of the nodes with errors below 0.3 mm, surpassing the general threshold of surgical accuracy and suitable for high-fidelity AR systems. The SVR models perform better than the ANNs, with positional errors for SVR models reaching under 0.2 mm. The results represent an improvement over existing deformation models for real-time applications, providing smaller errors and high patient specificity. The proposed approach addresses the current needs of image-guided surgical systems and has the potential to be employed to model the deformation of any type of soft tissue. Copyright © 2017 Elsevier B.V. All rights reserved.
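
    The learning step can be illustrated in miniature. The sketch below fits a kernel ridge regressor, used here as a lightweight NumPy-only stand-in for the paper's SVR, to synthetic load-to-displacement pairs standing in for pre-computed FEM results; the toy response function and all parameters are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's setup: map an applied load (magnitude,
# position, angle) to the displacement of one mesh node.  The training
# targets come from a hypothetical smooth "FEM" response.
X = rng.uniform(0, 1, size=(400, 3))
y = 0.3 * X[:, 0] * np.sin(2 * X[:, 1]) + 0.1 * X[:, 2] ** 2

def rbf(A, B, gamma=10.0):
    """Gaussian (RBF) kernel matrix between row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: alpha = (K + lam*I)^-1 y, done once offline.
K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)

def predict(Xq):
    # A single matrix-vector product at query time: effectively instant.
    return rbf(Xq, X) @ alpha

Xtest = rng.uniform(0, 1, size=(50, 3))
ytest = 0.3 * Xtest[:, 0] * np.sin(2 * Xtest[:, 1]) + 0.1 * Xtest[:, 2] ** 2
err = np.max(np.abs(predict(Xtest) - ytest))
print(f"max prediction error: {err:.4f}")
```

    The expensive part (the FEM runs and the linear solve) happens before surgery; at run time only the cheap `predict` call is needed, which mirrors the accuracy-for-speed trade the paper exploits.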

  3. [Image guided and robotic treatment--the advance of cybernetics in clinical medicine].

    Science.gov (United States)

    Fosse, E; Elle, O J; Samset, E; Johansen, M; Røtnes, J S; Tønnessen, T I; Edwin, B

    2000-01-10

    The introduction of advanced technology in hospitals has shifted treatment practice towards more image-guided and minimally invasive procedures. Modern computer and communication technology opens the door to robot-aided and pre-programmed intervention. Several robotic systems are in clinical use today, both in microsurgery and in major cardiac and orthopedic operations. As this trend develops, professions that are new in this context, such as physicists, mathematicians and cybernetics engineers, will become increasingly important in the treatment of patients.

  4. Towards Real-Time Distributed Planning in Multi-Robot Systems

    KAUST Repository

    Abdelkader, Mohamed

    2018-01-01

    of extensive information exchange (communication). The extensive communication between robots can result in decision delays because of which distributed architecture is often impractical for systems with strict real-time constraints, e.g. when decisions have

  5. Real-Time Inverse Optimal Neural Control for Image Based Visual Servoing with Nonholonomic Mobile Robots

    Directory of Open Access Journals (Sweden)

    Carlos López-Franco

    2015-01-01

    Full Text Available We present an inverse optimal neural controller for a nonholonomic mobile robot with parameter uncertainties and unknown external disturbances. The neural controller is based on a discrete-time recurrent high order neural network (RHONN trained with an extended Kalman filter. The reference velocities for the neural controller are obtained with a visual sensor. The effectiveness of the proposed approach is tested by simulations and real-time experiments.

  6. 1.0 T open-configuration magnetic resonance-guided microwave ablation of pig livers in real time

    Science.gov (United States)

    Dong, Jun; Zhang, Liang; Li, Wang; Mao, Siyue; Wang, Yiqi; Wang, Deling; Shen, Lujun; Dong, Annan; Wu, Peihong

    2015-01-01

    The current fastest frame rate for a single image slice in MR-guided ablation is 1.3 seconds, slower than the average human reaction time of 0.33 seconds, so the imaging is effectively delayed. Delayed imaging greatly limits the accuracy of puncture and ablation and can result in puncture injury or incomplete ablation. To overcome delayed imaging and obtain real-time imaging, the study was performed using a 1.0-T whole-body open-configuration MR scanner in the livers of 10 Wuzhishan pigs. A respiratory-triggered liver matrix array was used to guide and monitor microwave ablation in real time. We successfully performed the entire ablation procedure under real-time MR guidance at 0.202 s per image slice, the fastest frame rate for a single image slice. The puncture time decreased from 23 min to 3 min over the course of the experiments; the mean puncture time was shortened to 4.75 minutes, and the mean ablation time was 11.25 minutes at a power of 70 W. The mean length and width of the ablation zones were 4.62 ± 0.24 cm and 2.64 ± 0.13 cm, respectively. No complications or ablation-related deaths were observed during or after ablation. In the current study, MR was able to guide microwave ablation in real time, as ultrasound does, showing great potential for the treatment of liver tumors. PMID:26315365

  7. Real-time multiple human perception with color-depth cameras on a mobile robot.

    Science.gov (United States)

    Zhang, Hao; Reardon, Christopher; Parker, Lynne E

    2013-10-01

    The ability to perceive humans is an essential requirement for safe and efficient human-robot interaction. In real-world applications, the need for a robot to interact in real time with multiple humans in a dynamic, 3-D environment presents a significant challenge. The recent availability of commercial color-depth cameras allows for the creation of a system that makes use of the depth dimension, thus enabling a robot to observe its environment and perceive in the 3-D space. Here we present a system for 3-D multiple human perception in real time from a moving robot equipped with a color-depth camera and a consumer-grade computer. Our approach reduces computation time to achieve real-time performance through a unique combination of new ideas and established techniques. We remove the ground and ceiling planes from the 3-D point cloud input to separate candidate point clusters. We introduce a novel information concept, depth of interest, which we use to identify candidates for detection and which avoids the computationally expensive scanning-window methods of other approaches. We utilize a cascade of detectors to distinguish humans from objects, in which we make intelligent reuse of intermediary features in successive detectors to improve computation. Because of the high computational cost of some methods, we represent our candidate tracking algorithm with a decision directed acyclic graph, which allows us to use the most computationally intense techniques only where necessary. We detail the successful implementation of our novel approach on a mobile robot and examine its performance in scenarios with real-world challenges, including occlusion, robot motion, nonupright humans, humans leaving and reentering the field of view (i.e., the reidentification challenge), and human-object and human-human interaction. We conclude that, by incorporating depth information and using modern techniques in new ways, we are able to create an effective system for real-time multiple human perception.
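
    The ground- and ceiling-plane removal step can be illustrated on a synthetic point cloud. A plain height threshold is used below as a deliberate simplification of whatever plane-removal method the paper actually employs; the scene geometry is invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic indoor point cloud (x, y, z): floor near z = 0, ceiling near
# z = 2.5 m, plus a person-shaped cluster of points in between.
floor   = np.c_[rng.uniform(0, 5, (500, 2)), rng.normal(0.00, 0.01, 500)]
ceiling = np.c_[rng.uniform(0, 5, (500, 2)), rng.normal(2.50, 0.01, 500)]
person  = np.c_[rng.normal(2.0, 0.2, (300, 2)), rng.uniform(0.1, 1.8, 300)]
cloud = np.vstack([floor, ceiling, person])

# Removing the two dominant horizontal planes leaves only the candidate
# clusters that the detector cascade then has to examine.
z = cloud[:, 2]
keep = (z > 0.05) & (z < 2.45)
candidates = cloud[keep]
print(len(candidates), "candidate points remain")
```

    After this step, clustering the surviving points gives the candidate regions on which the more expensive detectors are run.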

  8. Ultrasound probe and needle-guide calibration for robotic ultrasound scanning and needle targeting.

    Science.gov (United States)

    Kim, Chunwoo; Chang, Doyoung; Petrisor, Doru; Chirikjian, Gregory; Han, Misop; Stoianovici, Dan

    2013-06-01

    Image-to-robot registration is a typical step for robotic image-guided interventions. If the imaging device uses a portable imaging probe that is held by a robot, this registration is constant and has been commonly named probe calibration. The same applies to probes tracked by a position measurement device. We report a calibration method for 2-D ultrasound probes using robotic manipulation and a planar calibration rig. Moreover, a needle guide that is attached to the probe is also calibrated for ultrasound-guided needle targeting. The method is applied to a transrectal ultrasound (TRUS) probe for robot-assisted prostate biopsy. Validation experiments include TRUS-guided needle targeting accuracy tests. This paper outlines the entire process from calibration to image-guided targeting. Freehand TRUS-guided prostate biopsy is the primary method of diagnosing prostate cancer, with over 1.2 million procedures performed annually in the U.S. alone. However, freehand biopsy is a highly challenging procedure with subjective quality control. As such, biopsy devices are emerging to assist the physician. Here, we present a method that uses robotic TRUS manipulation. A 2-D TRUS probe is supported by a 4-degree-of-freedom robot. The robot performs ultrasound scanning, enabling 3-D reconstructions. Based on the images, the robot orients a needle guide on target for biopsy. The biopsy is acquired manually through the guide. In vitro tests showed that the 3-D images were geometrically accurate, and the image-based needle targeting accuracy was 1.55 mm. These results validate the probe calibration presented and the overall robotic system for needle targeting. Targeting accuracy is sufficient for targeting small, clinically significant prostatic cancer lesions, but actual in vivo targeting will include additional error components that will have to be determined.

  9. Augmented environments for the targeting of hepatic lesions during image-guided robotic liver surgery.

    Science.gov (United States)

    Buchs, Nicolas C; Volonte, Francesco; Pugin, François; Toso, Christian; Fusaglia, Matteo; Gavaghan, Kate; Majno, Pietro E; Peterhans, Matthias; Weber, Stefan; Morel, Philippe

    2013-10-01

    Stereotactic navigation technology can enhance guidance during surgery and enable the precise reproduction of planned surgical strategies. Currently, specific systems (such as the CAS-One system) are available for instrument guidance in open liver surgery. This study aims to evaluate the implementation of such a system for the targeting of hepatic tumors during robotic liver surgery. Optical tracking references were attached to one of the robotic instruments and to the robotic endoscopic camera. After instrument and video calibration and patient-to-image registration, a virtual model of the tracked instrument and the available three-dimensional images of the liver were displayed directly within the robotic console, superimposed onto the endoscopic video image. An additional superimposed targeting viewer allowed for the visualization of the target tumor relative to the tip of the instrument, for an assessment of the distance between the tumor and the tool for the realization of safe resection margins. Two cirrhotic patients underwent robotic navigated atypical hepatic resections for hepatocellular carcinoma. The augmented endoscopic view allowed for the definition of an accurate resection margin around the tumor. The overlay of reconstructed three-dimensional models was also used during parenchymal transection for the identification of vascular and biliary structures. Operative times were 240 min in the first case and 300 min in the second. There were no intraoperative complications. The da Vinci Surgical System provided an excellent platform for image-guided liver surgery with stable optics and instrumentation. Robotic image guidance might improve the surgeon's orientation during the operation and increase accuracy in tumor resection. Further developments of this technological combination are needed to deal with organ deformation during surgery. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. Skinware 2.0: A real-time middleware for robot skin

    Directory of Open Access Journals (Sweden)

    S. Youssefi

    2015-12-01

    Full Text Available Robot skins have emerged recently as products of research from various institutes worldwide. Each robot skin is designed with different applications in mind. As a result, they differ in many aspects from transduction technology and structure to communication protocols and timing requirements. These differences create a barrier for researchers interested in developing tactile processing algorithms for robots using the sense of touch; supporting multiple robot skin technologies is non-trivial and committing to a single technology is not as useful, especially as the field is still in its infancy. The Skinware middleware has been created to mitigate these issues by providing abstractions and real-time acquisition mechanisms. This article describes the second revision of Skinware, discussing the differences with respect to the first version.

  11. Long-Range Untethered Real-Time Live Gas Main Robotic Inspection System

    Energy Technology Data Exchange (ETDEWEB)

    Hagen Schempf; Daphne D' Zurko

    2004-10-31

    Under funding from the Department of Energy (DOE) and the Northeast Gas Association (NGA), Carnegie Mellon University (CMU) developed an untethered, wireless, remote-controlled inspection robot dubbed Explorer. The project entailed the design and prototyping of a wireless, self-powered video-inspection robot capable of accessing live 6- and 8-inch diameter cast-iron and steel mains, traversing turns, Ts and elbows under real-time control with live video feedback to an operator. The design is a segmented, actively articulated, wheel-leg-powered robot with fisheye imaging capability, self-powered battery storage and a wireless real-time communication link. The prototype was functionally tested in an aboveground pipe network in order to debug all mechanical, electrical and software subsystems and to develop the necessary deployment, retrieval and obstacle-handling scripts. A pressurized natural-gas test section was used to certify it for operation in natural gas at up to 60 psig. Two subsequent live-main field trials, in both cast-iron and steel pipe, demonstrated its ability to be safely launched, operated and retrieved under real-world conditions. The system's ability to safely and repeatably exit and recover from angled and vertical launchers, traverse multi-thousand-foot pipe sections, and make T- and varied-angle elbow turns while wirelessly sending live video and handling command-and-control messages was clearly demonstrated. Video inspection was clearly shown to be a viable tool for understanding the state of this critical buried infrastructure, irrespective of low-pressure (cast-iron) or high-pressure (steel) conditions. This report covers the specifications, requirements, design, prototyping, integration, testing and field trials of the Explorer platform.

  12. Image-guided robotic radiosurgery for spinal metastases

    International Nuclear Information System (INIS)

    Gibbs, Iris C.; Kamnerdsupaphon, Pimkhuan; Ryu, Mi-Ryeong; Dodd, Robert; Kiernan, Michaela; Chang, Steven D.; Adler, John R.

    2007-01-01

    Background and Purpose: To determine the effectiveness and safety of image-guided robotic radiosurgery for spinal metastases. Materials/Methods: From 1996 to 2005, 74 patients with 102 spinal metastases were treated using the CyberKnife™ at Stanford University. Sixty-two (84%) patients were symptomatic. Seventy-four percent (50/68) of previously treated patients had prior radiation. Using the CyberKnife™, 16-25 Gy in 1-5 fractions was delivered. Patients were followed clinically and radiographically for at least 3 months or until death. Results: With a mean follow-up of 9 months (range 0-33 months), 36 patients were alive and 38 were dead at last follow-up. No death was treatment related. Eighty-four percent of symptomatic patients experienced improvement or resolution of symptoms after treatment. Three patients developed treatment-related spinal injury. Analysis of dose-volume parameters and clinical parameters failed to identify predictors of spinal cord injury. Conclusions: Robotic radiosurgery is effective and generally safe for spinal metastases, even in previously irradiated patients.

  13. Real-Time Motion Planning and Safe Navigation in Dynamic Multi-Robot Environments

    National Research Council Canada - National Science Library

    Bruce, James R

    2006-01-01

    .... While motion planning has been used for high level robot navigation, or limited to semi-static or single-robot domains, it has often been dismissed for the real-time low-level control of agents due...

  14. A Novel Bioinspired Vision System: A Step toward Real-Time Human-Robot Interactions

    Directory of Open Access Journals (Sweden)

    Abdul Rahman Hafiz

    2011-01-01

    Full Text Available Building a human-like robot that could be involved in our daily lives is a dream of many scientists. A sophisticated vision system that can enhance a robot's ability to interact with humans in real time is one of the main keys to realizing such an autonomous robot. In this work, we propose a bioinspired vision system that helps develop advanced human-robot interaction in an autonomous humanoid robot. First, we enhance the robot's vision accuracy online by applying a novel dynamic edge detection algorithm abstracted from the role that horizontal cells play in the mammalian retina. Second, in order to support the first algorithm, we improve the robot's tracking ability by designing a non-uniform photoreceptor distribution corresponding to that of the human visual system. The experimental results verified the validity of the model. The robot achieved clear vision in real time and built a mental map that helped it stay aware of users in front of it and develop positive interactions with them.
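
    Horizontal cells are commonly modeled as providing lateral inhibition: each photoreceptor's output is reduced by the average activity of its neighbours, so uniform regions cancel out while intensity edges stand out. The 1-D sketch below illustrates that principle only; it is not the paper's actual algorithm, and the sample intensities are invented.

```python
import numpy as np

def lateral_inhibition(signal, radius=1, strength=1.0):
    """Each receptor's output is its input minus the local average of
    its neighbours, the inhibitory role attributed to horizontal cells.
    Flat regions cancel to zero; abrupt changes (edges) survive."""
    out = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        neighbours = np.r_[signal[lo:i], signal[i + 1:hi]]
        out[i] = signal[i] - strength * neighbours.mean()
    return out

# A step edge: flat dark region followed by a flat bright region.
row = np.array([10, 10, 10, 10, 50, 50, 50, 50], dtype=float)
response = lateral_inhibition(row)
print(response)
```

    The response is zero everywhere except at the two samples flanking the step, which is exactly the edge-enhancing behaviour the dynamic edge detector exploits.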

  15. Registration of angiographic image on real-time fluoroscopic image for image-guided percutaneous coronary intervention.

    Science.gov (United States)

    Kim, Dongkue; Park, Sangsoo; Jeong, Myung Ho; Ryu, Jeha

    2018-02-01

    In percutaneous coronary intervention (PCI), cardiologists must study two different X-ray image sources: a fluoroscopic image and an angiogram. Manipulating a guidewire while alternately monitoring the two separate images on separate screens requires a deep understanding of the anatomy of the coronary vessels and substantial training. We propose 2D/2D spatiotemporal registration of the two images into a single image in order to provide cardiologists with enhanced visual guidance in PCI. The proposed 2D/2D spatiotemporal registration method uses the cross-correlation of the two ECG series associated with each image to temporally synchronize the two separate images and register an angiographic image onto the fluoroscopic image. A guidewire centerline is then extracted from the fluoroscopic image in real time, and the alignment of the centerline with the vessel outlines of the chosen angiographic image is optimized using the iterative closest point algorithm for spatial registration. A proof-of-concept evaluation with a phantom coronary vessel model and engineering students showed a reduction of more than 74% in wrong insertions into nontarget branches compared with the non-registration method, and a reduction of more than 47% in task completion time for guidewire manipulation in very difficult tasks. Evaluation with a small number of experienced doctors shows a potentially significant reduction in both task completion time and error rate for difficult tasks. The total registration time with real procedure X-ray (angiographic and fluoroscopic) images is approximately 60 ms, which is within the fluoroscopic image acquisition rate of 15 Hz. By providing cardiologists with better visual guidance in PCI, the proposed spatiotemporal image registration method is shown to be useful in advancing the guidewire into the coronary vessel branches, especially those difficult to insert into.
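
    The temporal-synchronization step can be sketched as follows: the lag between the two ECG series is taken at the peak of their cross-correlation. The synthetic beat trains below are purely illustrative stand-ins for the recorded ECG traces.

```python
import numpy as np

def ecg_lag(ecg_a, ecg_b):
    """Return the shift (in samples) that best aligns ecg_b to ecg_a,
    found at the peak of the full cross-correlation."""
    a = ecg_a - np.mean(ecg_a)
    b = ecg_b - np.mean(ecg_b)
    corr = np.correlate(a, b, mode="full")
    # index (len(b) - 1) of the "full" output corresponds to zero lag
    return int(np.argmax(corr)) - (len(b) - 1)

# Synthetic ECG-like trains: one sharp peak per cardiac cycle; the
# fluoroscopic trace lags the angiographic trace by 7 samples.
t = np.arange(200)
beats = np.exp(-0.5 * ((t % 50) - 10) ** 2)
angio = beats
fluoro = np.roll(beats, 7)
print("estimated lag:", ecg_lag(fluoro, angio))
```

    Once the lag is known, the angiographic frame whose cardiac phase matches the live fluoroscopic frame can be selected before the ICP-based spatial alignment.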

  16. Robot Mapping With Real-Time Incremental Localization Using Expectation Maximization

    National Research Council Canada - National Science Library

    Owens, Kevin L

    2005-01-01

    This research effort explores and develops a real-time sonar-based robot mapping and localization algorithm that provides pose correction within the context of a single room, to be combined with pre...

  17. First Demonstration of Combined kV/MV Image-Guided Real-Time Dynamic Multileaf-Collimator Target Tracking

    International Nuclear Information System (INIS)

    Cho, Byungchul; Poulsen, Per R.; Sloutsky, Alex; Sawant, Amit; Keall, Paul J.

    2009-01-01

    Purpose: For intrafraction motion management, a real-time tracking system was developed by combining fiducial marker-based tracking via simultaneous kilovoltage (kV) and megavoltage (MV) imaging with a dynamic multileaf collimator (DMLC) beam-tracking system. Methods and Materials: The integrated tracking system employed a Varian Trilogy system equipped with kV/MV imaging systems and a Millennium 120-leaf MLC. A gold marker in elliptical motion (2-cm superior-inferior, 1-cm left-right, 10 cycles/min) was simultaneously imaged by the kV and MV imagers at 6.7 Hz and segmented in real time. From these two-dimensional projections, the tracking software triangulated the three-dimensional marker position and repositioned the MLC leaves to follow the motion. Phantom studies were performed to evaluate the time delay from image acquisition to MLC adjustment, the tracking error, and the dosimetric impact of target motion with and without tracking. Results: The time delay of the integrated tracking system was ∼450 ms. The tracking error using a prediction algorithm was 0.9 ± 0.5 mm for the elliptical motion. The dose distribution with tracking showed better target coverage and less dose to the surrounding region than without tracking. The failure rate of the gamma test (3%/3-mm criteria) was 22.5% without tracking but was reduced to 0.2% with tracking. Conclusion: For the first time, a complete tracking system combining kV/MV image-guided target tracking and DMLC beam tracking was demonstrated. The average geometric error was less than 1 mm, and the dosimetric error was negligible. This system is a promising method for intrafraction motion management.
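
    The triangulation step, recovering a 3-D marker position from two simultaneous 2-D projections, can be sketched with the standard linear (direct linear transformation) method. The two camera matrices below are hypothetical perpendicular views, not the actual Trilogy kV/MV geometry.

```python
import numpy as np

def triangulate(P1, u1, P2, u2):
    """Least-squares 3-D point from two camera projections.
    For camera P (3x4) and pixel (u, v), each view contributes two
    linear equations (u*P[2] - P[0]).X = 0 and (v*P[2] - P[1]).X = 0."""
    rows = []
    for P, (u, v) in ((P1, u1), (P2, u2)):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.array(rows)
    # homogeneous solution: right singular vector of smallest sigma
    X = np.linalg.svd(A)[2][-1]
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical geometry: two perspective cameras 5 units from the
# isocenter, the second rotated 90 degrees about the y-axis.
P_kv = np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]])
R = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [-1.0, 0.0, 0.0]])
P_mv = np.hstack([R, [[0.0], [0.0], [5.0]]])

marker = np.array([0.5, -0.3, 0.8])
est = triangulate(P_kv, project(P_kv, marker), P_mv, project(P_mv, marker))
print(np.round(est, 6))
```

    With noise-free projections the reconstruction is exact; in practice segmentation noise in each imager propagates into a small 3-D error, which the prediction algorithm then has to smooth.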

  18. Development of radiation hardened robot for nuclear facility - Development of real-time stereo object tracking system using the optical correlator

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Eun Soo; Lee, S. H.; Lee, J. S. [Kwangwoon University, Seoul (Korea)

    2000-03-01

    Object tracking with the centroid method used in the KAERI-M1 stereo robot vision system, developed at the Atomic Research Center, is too sensitive to variations in the target's illumination and cannot take the surrounding background into account, so its applicability under real conditions is very limited. The correlation method can form a relatively stable object tracker in the presence of noise, but the amount of digital computation required for image correlation is too large for real-time implementation. Developing a stereo object tracking system based on optical correlation, using high-speed optical information processing techniques, will therefore make a stable real-time stereo object tracking system, and a practical stereo robot vision system for the atomic industry, achievable. This research concerns the development of a real-time stereo object tracking algorithm using an optical correlation system, a technique applicable to the Atomic Research Center's KAERI-M1 stereo vision robot intended for remote operations in atomic facilities. It also revises the stereo disparity using a real-time optical correlation technique and applies the stereo object tracking algorithm to the KAERI-M1 stereo robot. 19 refs., 45 figs., 2 tabs. (Author)

  19. SU-E-J-12: An Image-Guided Soft Robotic Patient Positioning System for Maskless Head-And-Neck Cancer Radiotherapy: A Proof-Of-Concept Study

    International Nuclear Information System (INIS)

    Ogunmolu, O; Gans, N; Jiang, S; Gu, X

    2015-01-01

    Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressurized air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head with a ball joint fixed below its neck to simulate torso-induced head motion in the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs the control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e., regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduced to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to the capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface-image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control is the next step.
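
    The closed-loop behaviour reported above (regulation to a setpoint despite a pneumatic transport delay) can be imitated with a toy simulation. The proportional gain, integrator plant, and delay length below are illustrative assumptions, not the identified dynamics of the IAB.

```python
def simulate(setpoint=10.0, kp=0.4, delay_steps=4, steps=300, dt=0.1):
    """Proportional visual servo on an integrator plant with a
    transport delay standing in for the pneumatic capacitance."""
    y, history = 0.0, []
    pipeline = [0.0] * delay_steps        # delayed valve commands
    for _ in range(steps):
        u = kp * (setpoint - y)           # command from the vision error
        pipeline.append(u)
        y += dt * pipeline.pop(0)         # airflow moves the head
        history.append(y)
    return history

h = simulate()
final = h[-1]
overshoot = max(h) - final
print(round(final, 3), round(overshoot, 3))
```

    With these toy gains the loop settles to the setpoint with negligible overshoot; raising the gain or lengthening the delay reproduces the overshoot-then-settle behaviour the study measured.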

  20. SU-E-J-12: An Image-Guided Soft Robotic Patient Positioning System for Maskless Head-And-Neck Cancer Radiotherapy: A Proof-Of-Concept Study

    Energy Technology Data Exchange (ETDEWEB)

    Ogunmolu, O; Gans, N [The University of Texas at Dallas, Richardson, TX (United States); Jiang, S; Gu, X [UT Southwestern Medical Center, Dallas, TX (United States)

    2015-06-15

    Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressurized air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head with a ball joint fixed below its neck to simulate torso-induced head motion in the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs the control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e., regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduced to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and the start of motion due to the capacitance of the pneumatics, for a total of 10 seconds to regulate the error. Conclusion: The surface-image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion. Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control is the next step.

  1. Hybrid Approach for Biliary Interventions Employing MRI-Guided Bile Duct Puncture with Near-Real-Time Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Wybranski, Christian, E-mail: Christian.Wybranski@uk-koeln.de [University Hospital of Cologne, Department of Diagnostic and Interventional Radiology (Germany); Pech, Maciej [Otto-von-Guericke University Medical School, Department of Radiology and Nuclear Medicine (Germany); Lux, Anke [Otto-von-Guericke University Medical School, Institute of Biometry and Medical Informatics (Germany); Ricke, Jens; Fischbach, Frank; Fischbach, Katharina [Otto-von-Guericke University Medical School, Department of Radiology and Nuclear Medicine (Germany)

    2017-06-15

    Objective: To assess the feasibility of a hybrid approach employing MRI-guided bile duct (BD) puncture for subsequent fluoroscopy-guided biliary interventions in patients with non-dilated (≤3 mm) or dilated (≥3 mm) BD but unfavorable conditions for ultrasonography (US)-guided BD puncture. Methods: A total of 23 hybrid interventions were performed in 21 patients. Visualization of the BD and puncture needles (PN) in the interventional MR images was rated on a 5-point Likert scale by two radiologists. Technical success, planning time, BD puncture time and positioning adjustments of the PN, as well as technical success of the biliary intervention and complication rate, were recorded. Results: Visualization of even third-order non-dilated BD and of the PN was rated excellent by both radiologists, with good to excellent interrater agreement. MRI-guided BD puncture was successful in all cases. Planning and BD puncture times were 1:36 ± 2:13 (0:16–11:07) min and 3:58 ± 2:35 (1:11–9:32) min. Positioning adjustments of the PN were necessary in two patients. Repeated capsular puncture was not necessary in any case. All biliary interventions were completed successfully without major complications. Conclusion: A hybrid approach which employs MRI-guided BD puncture for subsequent fluoroscopy-guided biliary intervention is feasible in clinical routine and yields high technical success in patients with non-dilated BD and/or unfavorable conditions for US-guided puncture. Excellent visualization of the BD and PN in near-real-time interventional MRI allows successful cannulation of the BD.

  2. An ultrasound-driven needle-insertion robot for percutaneous cholecystostomy

    International Nuclear Information System (INIS)

    Hong, J; Dohi, T; Hashizume, M; Konishi, K; Hata, N

    2004-01-01

    A real-time ultrasound-guided needle-insertion medical robot for percutaneous cholecystostomy has been developed. Image-guided interventions have become widely accepted because they are consistent with minimal invasiveness. However, organ or abnormality displacement due to involuntary patient motion may undesirably affect the intervention. The proposed instrument uses intraoperative images and modifies the needle path in real time by using a novel ultrasonic image segmentation technique. In phantom and volunteer experiments, the needle path updating time was 130 and 301 ms per cycle, respectively. In animal experiments, the needle could be placed accurately in the target

  3. Intelligent lead: a novel HRI sensor for guide robots.

    Science.gov (United States)

    Cho, Keum-Bae; Lee, Beom-Hee

    2012-01-01

    This paper introduces a new Human Robot Interaction (HRI) sensor for guide robots. Guide robots for geriatric patients or the visually impaired should follow the user's control commands while keeping a desired distance that allows the user to move freely. It is therefore necessary to acquire control commands and the user's position in real time. We propose a new sensor fusion system to achieve this objective, and we call this sensor the "intelligent lead". The objective of the intelligent lead is to acquire a stable user-to-robot distance, speed-control volume and turn-control volume, even when the robot platform carrying the intelligent lead is shaken on uneven ground. We describe a precise Extended Kalman Filter (EKF) procedure for this. The intelligent lead physically consists of a Kinect sensor, a serial linkage fitted with eight rotary encoders, and an IMU (Inertial Measurement Unit), whose measurements are fused by the EKF. A mobile robot was designed to test the performance of the proposed sensor system. After installing the intelligent lead on the mobile robot, several tests were conducted to verify that the robot can reach its goal points while maintaining the appropriate distance to the user. The results show that the intelligent lead proposed in this paper can serve as a new HRI sensor, combining a joystick and a distance measure, in mobile settings where the robot and the user move at the same time.
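
    The variance-weighted fusion at the heart of such an EKF can be illustrated with a deliberately simplified scalar Kalman filter; the distances and noise variances below are illustrative assumptions, not values from the paper, which fuses Kinect, encoder and IMU data in full state space.

```python
# Simplified scalar Kalman-filter sketch of the "intelligent lead" fusion:
# combine two noisy user-distance readings (e.g. Kinect depth and the
# encoder-instrumented linkage) into one stable estimate.  The values
# below are illustrative; the real system runs a full EKF.

def kalman_step(x, p, z, r, q=1e-4):
    """One predict/update cycle for a near-constant scalar state."""
    p = p + q                  # predict: small process noise q
    k = p / (p + r)            # Kalman gain
    x = x + k * (z - x)        # correct with measurement z (variance r)
    p = (1 - k) * p
    return x, p

def fuse_distance(kinect_m, linkage_m, r_kinect=0.04, r_linkage=0.01):
    """Sequentially fuse two sensors, weighting each by its variance."""
    x, p = kinect_m, r_kinect            # initialise from the Kinect reading
    x, p = kalman_step(x, p, linkage_m, r_linkage)
    return x

est = fuse_distance(1.10, 1.02)          # metres; illustrative readings
```

    The fused estimate falls between the two readings, closer to the lower-variance linkage measurement, which is how the filter keeps the distance estimate stable when the platform shakes.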

  4. Piezoelectrically Actuated Robotic System for MRI-Guided Prostate Percutaneous Therapy

    Science.gov (United States)

    Su, Hao; Shang, Weijian; Cole, Gregory; Li, Gang; Harrington, Kevin; Camilo, Alexander; Tokuda, Junichi; Tempany, Clare M.; Hata, Nobuhiko; Fischer, Gregory S.

    2014-01-01

    This paper presents a fully-actuated robotic system for percutaneous prostate therapy under continuously acquired live magnetic resonance imaging (MRI) guidance. The system is composed of modular hardware and software to support the surgical workflow of intra-operative MRI-guided surgical procedures. We present the development of a 6-degree-of-freedom (DOF) needle placement robot for transperineal prostate interventions. The robot consists of a 3-DOF needle driver module and a 3-DOF Cartesian motion module. The needle driver provides needle cannula translation and rotation (2-DOF) and stylet translation (1-DOF). A custom robot controller consisting of multiple piezoelectric motor drivers provides precision closed-loop control of piezoelectric motors and enables simultaneous robot motion and MR imaging. The developed modular robot control interface software performs image-based registration, kinematics calculation, and exchanges robot commands and coordinates between the navigation software and the robot controller with a new implementation of the open network communication protocol OpenIGTLink. Comprehensive compatibility of the robot is evaluated inside a 3-Tesla MRI scanner using standard imaging sequences, and the signal-to-noise ratio (SNR) loss is limited to 15%. Image deterioration due to the presence and motion of the robot was not observable. Twenty-five targeted needle placements inside gelatin phantoms utilizing an 18-gauge ceramic needle demonstrated 0.87 mm root mean square (RMS) error in 3D Euclidean distance based on MRI volume segmentation of the image-guided robotic needle placement procedure. PMID:26412962
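
    The RMS accuracy figure quoted above is the root mean square of the per-placement 3-D Euclidean errors. A minimal sketch of that metric, using made-up coordinates rather than the study's data:

```python
# RMS 3-D error over repeated needle placements (illustrative data).
import math

def rms_3d_error(targets, actuals):
    """Root mean square of point-to-point Euclidean distances."""
    sq = [sum((t - a) ** 2 for t, a in zip(tp, ap))
          for tp, ap in zip(targets, actuals)]
    return math.sqrt(sum(sq) / len(sq))

# two placements with 0.5 mm and 1.0 mm errors
targets = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
actuals = [(0.5, 0.0, 0.0), (10.0, 1.0, 0.0)]
err = rms_3d_error(targets, actuals)     # sqrt((0.25 + 1.0) / 2) ≈ 0.79 mm
```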

  5. Augmented reality during robot-assisted laparoscopic partial nephrectomy: toward real-time 3D-CT to stereoscopic video registration.

    Science.gov (United States)

    Su, Li-Ming; Vagvolgyi, Balazs P; Agarwal, Rahul; Reiley, Carol E; Taylor, Russell H; Hager, Gregory D

    2009-04-01

    To investigate a markerless tracking system for real-time stereo-endoscopic visualization of preoperative computed tomographic imaging as an augmented display during robot-assisted laparoscopic partial nephrectomy. Stereoscopic video segments of one patient undergoing robot-assisted laparoscopic partial nephrectomy for a tumor, and of another for a partial staghorn renal calculus, were processed to evaluate the performance of a three-dimensional (3D)-to-3D registration algorithm. After both cases, we registered a segment of the video recording to the corresponding preoperative 3D-computed tomography image. After calibrating the camera and overlay, 3D-to-3D registration was created between the model and the surgical recording using a modified iterative closest point technique. Image-based tracking technology tracked selected fixed points on the kidney surface to augment the image-to-model registration. Our investigation has demonstrated that we can identify and track the kidney surface in real time when applied to intraoperative video recordings and overlay the 3D models of the kidney, tumor (or stone), and collecting system semitransparently. Using a basic computer research platform, we achieved an update rate of 10 Hz and an overlay latency of 4 frames. The accuracy of the 3D registration was 1 mm. Augmented reality overlay of reconstructed 3D-computed tomography images onto real-time stereo video footage is possible using iterative closest point and image-based surface tracking technology that does not use external navigation tracking systems or preplaced surface markers. Additional studies are needed to assess the precision and to achieve fully automated registration and display for intraoperative use.
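
    The iterative closest point loop underlying the registration alternates nearest-neighbour matching with a best-fit motion update. A toy translation-only 2-D version conveys the principle (the system above estimates a full rigid transform; the point sets here are illustrative):

```python
# Toy translation-only iterative-closest-point (ICP) loop.

def icp_translation(src, dst, iters=10):
    """Align src to dst by repeated nearest-neighbour + centroid shift."""
    tx, ty = 0.0, 0.0
    for _ in range(iters):
        moved = [(x + tx, y + ty) for x, y in src]
        # closest-point correspondence for each moved source point
        pairs = [min(dst, key=lambda d: (d[0] - p[0]) ** 2 + (d[1] - p[1]) ** 2)
                 for p in moved]
        # best translation update for these pairs is the mean residual
        dx = sum(q[0] - p[0] for p, q in zip(moved, pairs)) / len(src)
        dy = sum(q[1] - p[1] for p, q in zip(moved, pairs)) / len(src)
        tx, ty = tx + dx, ty + dy
    return tx, ty

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
dst = [(2.0, 3.0), (3.0, 3.0), (2.0, 4.0)]
tx, ty = icp_translation(src, dst)       # converges to the true offset (2, 3)
```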

  6. An overview on real-time control schemes for wheeled mobile robot

    Science.gov (United States)

    Radzak, M. S. A.; Ali, M. A. H.; Sha’amri, S.; Azwan, A. R.

    2018-04-01

    The purpose of this paper is to review real-time motion control algorithms for wheeled mobile robots (WMR) navigating in environments such as roads. A WMR needs a good controller to avoid collisions under disturbances and to keep the tracking error at zero. The controllers are used together with aiding sensors that measure the WMR's velocities, posture and disturbances in order to estimate the torque required at the wheels of the mobile robot. Four main categories of wheeled mobile robot control systems are found in the literature: kinematic-based controllers, dynamic-based controllers, artificial-intelligence-based control systems, and active force control. MATLAB/Simulink is the main software used to simulate and implement the control systems; its real-time toolbox receives data from sensors and sends commands to actuators in the presence of disturbances, while other languages such as C, C++ and Visual Basic are rarely used.
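
    As a concrete instance of the first (kinematic-based) category, a proportional go-to-goal controller for a unicycle-model WMR maps pose error directly to velocity commands. The gains and poses below are illustrative:

```python
# Kinematic go-to-goal controller for a unicycle-model wheeled robot.
import math

def kinematic_control(pose, goal, kv=0.5, kw=1.5):
    """Return (v, w): forward and angular velocity commands."""
    x, y, th = pose
    dx, dy = goal[0] - x, goal[1] - y
    rho = math.hypot(dx, dy)                # distance to goal
    alpha = math.atan2(dy, dx) - th         # bearing error
    alpha = math.atan2(math.sin(alpha), math.cos(alpha))  # wrap to [-pi, pi]
    return kv * rho, kw * alpha

v, w = kinematic_control((0.0, 0.0, 0.0), (2.0, 0.0))  # heading correct: w = 0
```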

  7. Real-time 3D-surface-guided head refixation useful for fractionated stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Li Shidong; Liu Dezhi; Yin Gongjie; Zhuang Ping; Geng, Jason

    2006-01-01

    Accurate and precise head refixation in fractionated stereotactic radiotherapy has been achieved through alignment of real-time 3D-surface images with a reference surface image. The reference surface image is either a 3D optical surface image taken at simulation with the desired treatment position, or a CT/MRI-surface rendering in the treatment plan with corrections for patient motion during CT/MRI scans and partial volume effects. The real-time 3D surface images are rapidly captured by using a 3D video camera mounted on the ceiling of the treatment vault. Any facial expression such as mouth opening that affects surface shape and location can be avoided using a new facial monitoring technique. The image artifacts on the real-time surface can generally be removed by setting a threshold of jumps at the neighboring points while preserving detailed features of the surface of interest. Such a real-time surface image, registered in the treatment machine coordinate system, provides a reliable representation of the patient head position during the treatment. A fast automatic alignment between the real-time surface and the reference surface using a modified iterative-closest-point method leads to an efficient and robust surface-guided target refixation. Experimental and clinical results demonstrate the excellent efficacy of <2 min set-up time, the desired accuracy and precision of <1 mm in isocenter shifts, and <1 deg. in rotation.
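
    The neighbour-jump thresholding used for artifact removal can be sketched on a 1-D depth profile (the clinical system applies the same test over the captured 3-D surface; the threshold and values are illustrative):

```python
# Drop surface samples whose depth "jumps" from both neighbours by more
# than a threshold; endpoints are kept unconditionally in this sketch.

def remove_jumps(depths, thresh=5.0):
    """Keep interior points within thresh of at least one neighbour."""
    keep = [depths[0]]
    for i in range(1, len(depths) - 1):
        if (abs(depths[i] - depths[i - 1]) < thresh
                or abs(depths[i] - depths[i + 1]) < thresh):
            keep.append(depths[i])
    keep.append(depths[-1])
    return keep

profile = [100.0, 101.0, 160.0, 102.0, 103.0]   # 160 is a spike artifact
cleaned = remove_jumps(profile)                  # spike removed
```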

  8. The Development and Real-World Deployment of FROG, the Fun Robotic Outdoor Guide

    NARCIS (Netherlands)

    Evers, Vanessa; Menezes, Nuno; Merino, Luis; Gavrilla, Dariu; Nabais, Fernando; Pantic, Maja; Alvito, Paulo; Karreman, Daphne Eleonora

    2014-01-01

    This video details the development of an intelligent outdoor Guide robot. The main objective is to deploy an innovative robotic guide which is not only able to show information, but to react to the affective states of the users, and to offer location-based services using augmented reality. The

  9. Magnetic Particle / Magnetic Resonance Imaging: In-Vitro MPI-Guided Real Time Catheter Tracking and 4D Angioplasty Using a Road Map and Blood Pool Tracer Approach.

    Science.gov (United States)

    Salamon, Johannes; Hofmann, Martin; Jung, Caroline; Kaul, Michael Gerhard; Werner, Franziska; Them, Kolja; Reimer, Rudolph; Nielsen, Peter; Vom Scheidt, Annika; Adam, Gerhard; Knopp, Tobias; Ittrich, Harald

    2016-01-01

    In-vitro evaluation of the feasibility of 4D real time tracking of endovascular devices and stenosis treatment with a magnetic particle imaging (MPI) / magnetic resonance imaging (MRI) road map approach and an MPI-guided approach using a blood pool tracer. A guide wire and angioplasty-catheter were labeled with a thin layer of magnetic lacquer. For real time MPI a custom made software framework was developed. A stenotic vessel phantom filled with saline or superparamagnetic iron oxide nanoparticles (MM4) was equipped with bimodal fiducial markers for co-registration in preclinical 7T MRI and MPI. In-vitro angioplasty was performed inflating the balloon with saline or MM4. MPI data were acquired using a field of view of 37.3×37.3×18.6 mm3 and a frame rate of 46 volumes/sec. Analysis of the magnetic lacquer-marks on the devices was performed with electron microscopy, atomic absorption spectrometry and micro-computed tomography. Magnetic marks allowed for MPI/MRI guidance of interventional devices. Bimodal fiducial markers enable MPI/MRI image fusion for MRI based roadmapping. MRI roadmapping and the blood pool tracer approach facilitate MPI real time monitoring of in-vitro angioplasty. Successful angioplasty was verified with MPI and MRI. Magnetic marks consist of micrometer sized ferromagnetic plates mainly composed of iron and iron oxide. 4D real time MP imaging, tracking and guiding of endovascular instruments and in-vitro angioplasty is feasible. In addition to an approach that requires a blood pool tracer, MRI based roadmapping might emerge as a promising tool for radiation free 4D MPI-guided interventions.

  10. Magnetic Particle / Magnetic Resonance Imaging: In-Vitro MPI-Guided Real Time Catheter Tracking and 4D Angioplasty Using a Road Map and Blood Pool Tracer Approach.

    Directory of Open Access Journals (Sweden)

    Johannes Salamon

    Full Text Available In-vitro evaluation of the feasibility of 4D real time tracking of endovascular devices and stenosis treatment with a magnetic particle imaging (MPI) / magnetic resonance imaging (MRI) road map approach and an MPI-guided approach using a blood pool tracer. A guide wire and angioplasty-catheter were labeled with a thin layer of magnetic lacquer. For real time MPI a custom made software framework was developed. A stenotic vessel phantom filled with saline or superparamagnetic iron oxide nanoparticles (MM4) was equipped with bimodal fiducial markers for co-registration in preclinical 7T MRI and MPI. In-vitro angioplasty was performed inflating the balloon with saline or MM4. MPI data were acquired using a field of view of 37.3×37.3×18.6 mm3 and a frame rate of 46 volumes/sec. Analysis of the magnetic lacquer-marks on the devices was performed with electron microscopy, atomic absorption spectrometry and micro-computed tomography. Magnetic marks allowed for MPI/MRI guidance of interventional devices. Bimodal fiducial markers enable MPI/MRI image fusion for MRI based roadmapping. MRI roadmapping and the blood pool tracer approach facilitate MPI real time monitoring of in-vitro angioplasty. Successful angioplasty was verified with MPI and MRI. Magnetic marks consist of micrometer sized ferromagnetic plates mainly composed of iron and iron oxide. 4D real time MP imaging, tracking and guiding of endovascular instruments and in-vitro angioplasty is feasible. In addition to an approach that requires a blood pool tracer, MRI based roadmapping might emerge as a promising tool for radiation free 4D MPI-guided interventions.

  11. Real-time networked control of an industrial robot manipulator via discrete-time second-order sliding modes

    Science.gov (United States)

    Massimiliano Capisani, Luca; Facchinetti, Tullio; Ferrara, Antonella

    2010-08-01

    This article presents the networked control of a robotic anthropomorphic manipulator based on a second-order sliding mode technique, where the control objective is to track a desired trajectory for the manipulator. The adopted control scheme allows an easy and effective distribution of the control algorithm over two networked machines. While the predictability of real-time task execution is achieved by the Soft Hard Real-Time Kernel (S.Ha.R.K.) real-time operating system, the communication is established via a standard Ethernet network. The performance of the control system is evaluated under different experimental system configurations, using a COMAU SMART3-S2 industrial robot to perform the experiments, and the results are analysed to highlight the robustness of the proposed approach against possible network delays, packet losses and unmodelled effects.
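
    The continuous-control property that distinguishes second-order sliding modes can be shown on a one-state plant with the super-twisting algorithm, a common second-order form. The gains, disturbance and step size here are illustrative, not the manipulator dynamics of the article:

```python
# One-state sketch of a second-order (super-twisting) sliding mode:
# drive the sliding variable s to zero despite a bounded disturbance,
# using a continuous control plus an integral switching term.
import math

def simulate(t_end=10.0, dt=1e-3, k1=1.5, k2=1.1):
    s, v, t = 1.0, 0.0, 0.0                     # sliding variable, integral term
    while t < t_end:
        sgn = (s > 0) - (s < 0)
        u = -k1 * math.sqrt(abs(s)) * sgn + v   # continuous part
        v += -k2 * sgn * dt                     # integral (2nd-order) part
        d = 0.5 * math.sin(t)                   # matched disturbance
        s += (u + d) * dt                       # plant: ds/dt = u + d
        t += dt
    return s

final_s = simulate()                            # |s| ends near zero
```

    Because the switching acts on the derivative of the control rather than the control itself, the commanded signal stays continuous, which reduces chattering on a real actuator.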

  12. Hardware Approach for Real Time Machine Stereo Vision

    Directory of Open Access Journals (Sweden)

    Michael Tornow

    2006-02-01

    Full Text Available Image processing is an effective tool for the analysis of optical sensor information for driver assistance systems and controlling of autonomous robots. Algorithms for image processing are often very complex and costly in terms of computation. In robotics and driver assistance systems, real-time processing is necessary. Signal processing algorithms must often be drastically modified so they can be implemented in the hardware. This task is especially difficult for continuous real-time processing at high speeds. This article describes a hardware-software co-design for a multi-object position sensor based on a stereophotogrammetric measuring method. In order to cover a large measuring area, an optimized algorithm based on an image pyramid is implemented in an FPGA as a parallel hardware solution for depth map calculation. Object recognition and tracking are then executed in real-time in a processor with help of software. For this task a statistical cluster method is used. Stabilization of the tracking is realized through use of a Kalman filter. Keywords: stereophotogrammetry, hardware-software co-design, FPGA, 3-d image analysis, real-time, clustering and tracking.
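
    The depth computation behind such a stereophotogrammetric sensor reduces, for a rectified pair, to Z = f·B/d. A minimal sketch with illustrative camera parameters:

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth (m) of a point with focal length f_px (pixels),
    baseline baseline_m (m), and disparity disparity_px (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# e.g. 700 px focal length, 12 cm baseline, 28 px disparity
z = depth_from_disparity(700.0, 0.12, 28.0)   # 3.0 m
```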

  13. Towards frameless maskless SRS through real-time 6DoF robotic motion compensation

    Science.gov (United States)

    Belcher, Andrew H.; Liu, Xinmin; Chmura, Steven; Yenice, Kamil; Wiersma, Rodney D.

    2017-12-01

    Stereotactic radiosurgery (SRS) uses precise dose placement to treat conditions of the CNS. Frame-based SRS uses a metal head ring fixed to the patient’s skull to provide high treatment accuracy, but patient comfort and clinical workflow may suffer. Frameless SRS, while potentially more convenient, may increase uncertainty of treatment accuracy and be physiologically confining to some patients. By incorporating highly precise robotics and advanced software algorithms into frameless treatments, we present a novel frameless and maskless SRS system where a robot provides real-time 6DoF head motion stabilization allowing positional accuracies to match or exceed those of traditional frame-based SRS. A 6DoF parallel kinematics robot was developed and integrated with a real-time infrared camera in a closed loop configuration. A novel compensation algorithm was developed based on an iterative closest-path correction approach. The robotic SRS system was tested on six volunteers, whose motion was monitored and compensated for in real-time over 15 min simulated treatments. The system’s effectiveness in maintaining the target’s 6DoF position within preset thresholds was determined by comparing volunteer head motion with and without compensation. Comparing corrected and uncorrected motion, the 6DoF robotic system showed an overall improvement factor of 21 in terms of maintaining target position within 0.5 mm and 0.5 degree thresholds. Although the system’s effectiveness varied among the volunteers examined, for all volunteers tested the target position remained within the preset tolerances 99.0% of the time when robotic stabilization was used, compared to 4.7% without robotic stabilization. The pre-clinical robotic SRS compensation system was found to be effective at responding to sub-millimeter and sub-degree cranial motions for all volunteers examined. The system’s success with volunteers has demonstrated its capability for implementation with frameless and

  14. Towards frameless maskless SRS through real-time 6DoF robotic motion compensation.

    Science.gov (United States)

    Belcher, Andrew H; Liu, Xinmin; Chmura, Steven; Yenice, Kamil; Wiersma, Rodney D

    2017-11-13

    Stereotactic radiosurgery (SRS) uses precise dose placement to treat conditions of the CNS. Frame-based SRS uses a metal head ring fixed to the patient's skull to provide high treatment accuracy, but patient comfort and clinical workflow may suffer. Frameless SRS, while potentially more convenient, may increase uncertainty of treatment accuracy and be physiologically confining to some patients. By incorporating highly precise robotics and advanced software algorithms into frameless treatments, we present a novel frameless and maskless SRS system where a robot provides real-time 6DoF head motion stabilization allowing positional accuracies to match or exceed those of traditional frame-based SRS. A 6DoF parallel kinematics robot was developed and integrated with a real-time infrared camera in a closed loop configuration. A novel compensation algorithm was developed based on an iterative closest-path correction approach. The robotic SRS system was tested on six volunteers, whose motion was monitored and compensated for in real-time over 15 min simulated treatments. The system's effectiveness in maintaining the target's 6DoF position within preset thresholds was determined by comparing volunteer head motion with and without compensation. Comparing corrected and uncorrected motion, the 6DoF robotic system showed an overall improvement factor of 21 in terms of maintaining target position within 0.5 mm and 0.5 degree thresholds. Although the system's effectiveness varied among the volunteers examined, for all volunteers tested the target position remained within the preset tolerances 99.0% of the time when robotic stabilization was used, compared to 4.7% without robotic stabilization. The pre-clinical robotic SRS compensation system was found to be effective at responding to sub-millimeter and sub-degree cranial motions for all volunteers examined. The system's success with volunteers has demonstrated its capability for implementation with frameless and maskless SRS

  15. Combination of Robot Simulation with Real-time Monitoring and Control

    Directory of Open Access Journals (Sweden)

    Jianyu YANG

    2014-08-01

    Full Text Available The paper focuses on combining virtual-reality-based operation simulation with remote real-time monitoring and control for an experimental robot. A system composition framework was designed and a corresponding arm-wheel experimental robot platform was built. Virtual robots and two virtual environments were developed. To locate the virtual robot within the numerical environments, the relevant mathematical methods are also discussed, including analytic locating methods for linear motion and self-rotation, as well as a linear transformation method with homogeneous matrices for turning motion, in order to reduce the number of division calculations. Several experiments were carried out; during the locating experiments, trajectory errors were found to arise from slip between the wheels and the floor. Writing-monitoring experiments were also performed by programming the robotic arm to write a Chinese character, and the virtual robot in the monitoring terminal followed all the movements faithfully. All the experimental results confirmed that the virtual environment can not only serve as a good supplement to traditional video monitoring, but also offer a better control experience during operation.

  16. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    OpenAIRE

    Richard Chiou; Yongjin (james) Kwon; Tzu-Liang (bill) Tseng; Robin Kizirian; Yueh-Ting Yang

    2010-01-01

    This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3- Dimensional scheme of viewing the robotic laboratory has been introduced in addition to the remote c...

  17. CHIMERA II - A real-time multiprocessing environment for sensor-based robot control

    Science.gov (United States)

    Stewart, David B.; Schmitz, Donald E.; Khosla, Pradeep K.

    1989-01-01

    A multiprocessing environment for a wide variety of sensor-based robot systems, providing the flexibility, performance, and UNIX-compatible interface needed for fast development of real-time code, is addressed. The requirements imposed on the design of a programming environment for sensor-based robotic control are outlined. The details of the current hardware configuration are presented, along with the details of the CHIMERA II software. Emphasis is placed on the kernel, low-level interboard communication, user interface, extended file system, user-definable and dynamically selectable real-time schedulers, remote process synchronization, and generalized interprocess communication. A possible implementation of a hierarchical control model, the NASA/NBS standard reference model for telerobot control system is demonstrated.

  18. A fiducial detection algorithm for real-time image guided IMRT based on simultaneous MV and kV imaging.

    Science.gov (United States)

    Mao, Weihua; Riaz, Nadeem; Lee, Louis; Wiersma, Rodney; Xing, Lei

    2008-08-01

    The advantage of highly conformal dose techniques such as 3DCRT and IMRT is limited by intrafraction organ motion. A new approach to gain near real-time 3D positions of internally implanted fiducial markers is to analyze simultaneous onboard kV beam and treatment MV beam images (from fluoroscopic or electronic portal image devices). Before we can use this real-time image guidance for clinical 3DCRT and IMRT treatments, four outstanding issues need to be addressed. (1) How will fiducial motion blur the image and hinder tracking fiducials? kV and MV images are acquired while the tumor is moving at various speeds. We find that a fiducial can be successfully detected at a maximum linear speed of 1.6 cm/s. (2) How does MV beam scattering affect kV imaging? We investigate this by varying MV field size and kV source to imager distance, and find that common treatment MV beams do not hinder fiducial detection in simultaneous kV images. (3) How can one detect fiducials on images from 3DCRT and IMRT treatment beams when the MV fields are modified by a multileaf collimator (MLC)? The presented analysis is capable of segmenting a MV field from the blocking MLC and detecting visible fiducials. This enables the calculation of nearly real-time 3D positions of markers during a real treatment. (4) Is the analysis fast enough to track fiducials in nearly real time? Multiple methods are adopted to predict marker positions and reduce search regions. The average detection time per frame for three markers in a 1024 x 768 image was reduced to 0.1 s or less. Solving these four issues paves the way to tracking moving fiducial markers throughout a 3DCRT or IMRT treatment. Altogether, these four studies demonstrate that our algorithm can track fiducials in real time, on degraded kV images (MV scatter), in rapidly moving tumors (fiducial blurring), and even provide useful information in the case when some fiducials are blocked from view by the MLC. This technique can provide a gating signal or
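
    One way to predict marker positions and shrink the search region, as described above, is constant-velocity extrapolation from the last two detections; the coordinates and window size here are illustrative:

```python
# Constant-velocity prediction of a 2-D marker position, plus the
# reduced pixel search window centred on the prediction.

def predict_next(p_prev, p_curr):
    """Extrapolate the next position assuming constant velocity."""
    return (2 * p_curr[0] - p_prev[0], 2 * p_curr[1] - p_prev[1])

def search_window(center, half=15):
    """Pixel bounds (x_min, x_max, y_min, y_max) of the search region."""
    cx, cy = center
    return (cx - half, cx + half, cy - half, cy + half)

pred = predict_next((100, 200), (104, 203))   # (108, 206)
window = search_window(pred)                  # 31 x 31 px instead of full frame
```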

  19. Review of Real-Time 3-Dimensional Image Guided Radiation Therapy on Standard-Equipped Cancer Radiation Therapy Systems: Are We at the Tipping Point for the Era of Real-Time Radiation Therapy?

    Science.gov (United States)

    Keall, Paul J; Nguyen, Doan Trang; O'Brien, Ricky; Zhang, Pengpeng; Happersett, Laura; Bertholet, Jenny; Poulsen, Per R

    2018-04-14

    To review real-time 3-dimensional (3D) image guided radiation therapy (IGRT) on standard-equipped cancer radiation therapy systems, focusing on clinically implemented solutions. Three groups in 3 continents have clinically implemented novel real-time 3D IGRT solutions on standard-equipped linear accelerators. These technologies encompass kilovoltage, combined megavoltage-kilovoltage, and combined kilovoltage-optical imaging. The cancer sites treated span pelvic and abdominal tumors for which respiratory motion is present. For each method the 3D-measured motion during treatment is reported. After treatment, dose reconstruction was used to assess the treatment quality in the presence of motion with and without real-time 3D IGRT. The geometric accuracy was quantified through phantom experiments. A literature search was conducted to identify additional real-time 3D IGRT methods that could be clinically implemented in the near future. The real-time 3D IGRT methods were successfully clinically implemented and have been used to treat more than 200 patients. Systematic target position shifts were observed using all 3 methods. Dose reconstruction demonstrated that the delivered dose is closer to the planned dose with real-time 3D IGRT than without real-time 3D IGRT. In addition, compromised target dose coverage and variable normal tissue doses were found without real-time 3D IGRT. The geometric accuracy results with real-time 3D IGRT showed a small mean error in the phantom experiments. The literature search identified additional real-time 3D IGRT methods using standard-equipped radiation therapy systems that could also be clinically implemented. Multiple clinical implementations of real-time 3D IGRT on standard-equipped cancer radiation therapy systems have been demonstrated. Many more approaches that could be implemented were identified. These solutions provide a pathway for the broader adoption of methods to make radiation therapy more accurate, impacting tumor and normal tissue dose, margins, and ultimately patient outcomes.

  20. A CORBA-Based Control Architecture for Real-Time Teleoperation Tasks in a Developmental Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Hanafiah Yussof

    2011-06-01

    Full Text Available This paper presents the development of a new Humanoid Robot Control Architecture (HRCA) platform based on the Common Object Request Broker Architecture (CORBA) in a developmental biped humanoid robot for real-time teleoperation tasks. The objective is to make the control platform open for collaborative teleoperation research in humanoid robotics via the internet. To achieve optimal trajectory generation in bipedal walking, we propose real-time generation of an optimal gait using Genetic Algorithms (GA) to minimize the energy of the humanoid robot's gait. In addition, we propose simplified kinematical solutions to generate controlled trajectories of the humanoid robot's legs in teleoperation tasks. The proposed control systems and strategies were evaluated in teleoperation experiments between Australia and Japan using the humanoid robot Bonten-Maru. Additionally, we have developed a user-friendly Virtual Reality (VR) user interface composed of an ultrasonic 3D mouse system and a Head Mounted Display (HMD) for the working coexistence of human and humanoid robot in teleoperation tasks. The teleoperation experiments show good performance of the proposed system and control, and also verify effective working coexistence of the human and the humanoid robot.
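
    The GA-based energy minimisation can be illustrated with a toy one-parameter search; the population size, mutation scale and quadratic "energy" below are assumptions for illustration, not the Bonten-Maru gait model:

```python
# Toy genetic algorithm minimising a made-up scalar "energy" over one
# gait parameter, using elitist selection plus Gaussian mutation.
import random

def ga_minimise(energy, lo, hi, pop_n=20, gens=60, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_n)]
    for _ in range(gens):
        pop.sort(key=energy)
        elite = pop[:pop_n // 2]                         # selection
        children = [min(hi, max(lo, rng.choice(elite) + rng.gauss(0, 0.05)))
                    for _ in range(pop_n - len(elite))]  # mutation
        pop = elite + children
    return min(pop, key=energy)

# illustrative energy: quadratic bowl around a 0.3 m step length
best = ga_minimise(lambda x: (x - 0.3) ** 2, 0.0, 1.0)
```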

  1. Development of image-guided operation system having integrated information of the patient for procedure of endoscopic surgery of digestive tracts

    International Nuclear Information System (INIS)

    Hattori, Asaki; Suzuki, Naoki; Tanoue, Kazuo; Ieiri, Satoshi; Konishi, Kozo; Tomikawa, Morimasa; Kenmotsu, Hajime; Hashizume, Makoto

    2010-01-01

    This study reports the development of a system that displays integrated patient information during image-guided, robotic peroral endoscopic surgery of the digestive tract, without requiring the operator to look away from the operative field. The peroral endoscope carries a magnetic position sensor at its tip and two robotic manipulator forceps on its right and left sides. Surgery is navigated through three superimposed display windows: the 3D structure of the inner peritoneal cavity of the real operative field, reconstructed from preoperative CT and MR images by volume rendering; the position of the robot tip within that structure; and the robot tip within the preoperative CT or MR image, as in ordinary navigation. Furthermore, the robot can measure the softness of the tissue it grasps, which is displayed in the corresponding right and left superimposed windows, and vital signs such as real-time blood pressure and heart rate are shown in another window. All of the integrated patient information can be handled at will during the operation. Improvement of the user interface and of the navigation display remains to be carried out. (T.T.)

  2. Real-Time fusion of visual images and laser data images for safe navigation in outdoor environments

    OpenAIRE

    García-Alegre Sánchez, María C.; Martín, David; Guinea García-Alegre, Domingo M.; Guinea Díaz, Domingo

    2011-01-01

    In recent years, two-dimensional laser range finders mounted on vehicles are becoming a fruitful solution to achieve safety and environment recognition requirements (Keicher & Seufert, 2000), (Stentz et al., 2002), (DARPA, 2007). They provide real-time accurate range measurements in large angular fields at a fixed height above the ground plane, and enable robots and vehicles to perform more confidently a variety of tasks by fusing images from visual cameras with range data (...

  3. Real-time image-based B-mode ultrasound image simulation of needles using tensor-product interpolation.

    Science.gov (United States)

    Zhu, Mengchen; Salcudean, Septimiu E

    2011-07-01

    In this paper, we propose an interpolation-based method for simulating rigid needles in B-mode ultrasound images in real time. We parameterize the needle B-mode image as a function of needle position and orientation. We collect needle images under various spatial configurations in a water-tank using a needle guidance robot. Then we use multidimensional tensor-product interpolation to simulate images of needles with arbitrary poses and positions using collected images. After further processing, the interpolated needle and seed images are superimposed on top of phantom or tissue image backgrounds. The similarity between the simulated and the real images is measured using a correlation metric. A comparison is also performed with in vivo images obtained during prostate brachytherapy. Our results, carried out for both the convex (transverse plane) and linear (sagittal/para-sagittal plane) arrays of a trans-rectal transducer indicate that our interpolation method produces good results while requiring modest computing resources. The needle simulation method we present can be extended to the simulation of ultrasound images of other wire-like objects. In particular, we have shown that the proposed approach can be used to simulate brachytherapy seeds.
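
    Tensor-product interpolation evaluates a multidimensional sample grid by chaining 1-D interpolations along each parameter axis. A 2-D (bilinear) sketch with illustrative intensity values:

```python
# Bilinear (2-D tensor-product) interpolation: linear interpolation
# along one axis, then along the other.

def lerp(a, b, t):
    return a + (b - a) * t

def bilinear(grid, x, y):
    """grid[i][j] sampled at integer (i, j); 0 <= x <= len(grid)-1,
    0 <= y <= len(grid[0])-1."""
    i, j = int(x), int(y)
    i = min(i, len(grid) - 2)        # clamp so i+1, j+1 stay in range
    j = min(j, len(grid[0]) - 2)
    tx, ty = x - i, y - j
    top = lerp(grid[i][j],     grid[i][j + 1],     ty)
    bot = lerp(grid[i + 1][j], grid[i + 1][j + 1], ty)
    return lerp(top, bot, tx)

grid = [[0.0, 10.0], [20.0, 30.0]]
center = bilinear(grid, 0.5, 0.5)    # 15.0
```

    A higher-dimensional version (over needle insertion depth, lateral offset and angle, for example) applies the same 1-D step once per axis.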

  4. Stochastic approach to error estimation for image-guided robotic systems.

    Science.gov (United States)

    Haidegger, Tamas; Győri, Sándor; Benyo, Balazs; Benyó, Zoltán

    2010-01-01

    Image-guided surgical systems and surgical robots are primarily developed to provide patient safety through increased precision and minimal invasiveness. Moreover, robotic devices should allow for refined treatments that are not possible by other means. It is crucial to determine the accuracy of a system in order to define the expected overall task execution error. A major step toward this aim is to quantitatively analyze the effect of registration and tracking: a series of multiplications of erroneous homogeneous transformations. First, the currently used models and algorithms are introduced along with their limitations, and a new, probability-distribution-based method is described. The new approach has several advantages, as demonstrated in our simulations. Primarily, it determines the full 6-degree-of-freedom accuracy of the point of interest, allowing for the more accurate use of advanced application-oriented concepts, such as Virtual Fixtures. Furthermore, it becomes feasible to consider different surgical scenarios with varying weighting factors.
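
    The effect of a chain of erroneous homogeneous transformations on a point of interest can be sketched with a simple Monte Carlo estimate. This is only a sampling-based stand-in for the probability-distribution analysis the abstract describes; the noise model, function names, and parameters are illustrative assumptions:

```python
import numpy as np

def small_rotation(angles):
    """Rotation matrix from roll/pitch/yaw angles in radians."""
    rx, ry, rz = angles
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(rx), -np.sin(rx)],
                   [0, np.sin(rx), np.cos(rx)]])
    Ry = np.array([[np.cos(ry), 0, np.sin(ry)],
                   [0, 1, 0],
                   [-np.sin(ry), 0, np.cos(ry)]])
    Rz = np.array([[np.cos(rz), -np.sin(rz), 0],
                   [np.sin(rz), np.cos(rz), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def propagate_error(chain, point, rot_sigma, trans_sigma, n=5000, seed=0):
    """Monte Carlo estimate of the position error of `point` after a chain
    of noisy 4x4 homogeneous transforms, e.g. [registration, tracking].
    Each transform is perturbed by zero-mean rotational (rad) and
    translational noise; returns the mean Euclidean error."""
    rng = np.random.default_rng(seed)
    p = np.append(point, 1.0)
    nominal = np.eye(4)
    for M in chain:
        nominal = nominal @ M
    target = (nominal @ p)[:3]
    errs = np.empty(n)
    for k in range(n):
        T = np.eye(4)
        for M in chain:
            E = np.eye(4)  # small error transform appended to each link
            E[:3, :3] = small_rotation(rng.normal(0.0, rot_sigma, 3))
            E[:3, 3] = rng.normal(0.0, trans_sigma, 3)
            T = T @ M @ E
        errs[k] = np.linalg.norm((T @ p)[:3] - target)
    return errs.mean()
```

    Because the full 4x4 pose is perturbed, the estimate captures how rotational error far from the point of interest amplifies translational error, which is the motivation for a full 6-DOF accuracy analysis.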

  5. Real-time multiple image manipulations

    International Nuclear Information System (INIS)

    Arenson, J.S.; Shalev, S.; Legris, J.; Goertzen, Y.

    1984-01-01

    There are many situations in which it is desired to manipulate two or more images under real-time operator control. The authors have investigated a number of such cases in order to determine their value and applicability in clinical medicine and laboratory research. Several examples are presented in detail. The DICOM-8 video image computer system was used due to its capability of storing two 512 x 512 x 8 bit images and operating on them, and/or on an incoming video frame, with any of a number of real-time operations including addition, subtraction, inversion, averaging, logical AND, NAND, OR, NOR, NOT, XOR and XNOR, as well as combinations of these. Some applications involve manipulations of or among the stored images. In others, a stored image is used as a mask or template for positioning or adjusting a second image to be grabbed via a video camera. The accuracy of radiotherapy treatment is verified by comparing port films with the original radiographic planning film, which has previously been digitized and stored. Moving the port film on the light box while viewing the real-time subtraction image allows for adjustments of zoom, translation and rotation, together with contrast and edge enhancement.
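
    Two of the described operations, biased subtraction for alignment checking and logical masking, can be sketched in software. The DICOM-8 performed these in dedicated video hardware, so the following numpy version only illustrates the per-pixel arithmetic, not the original system:

```python
import numpy as np

def subtraction_view(reference, live):
    """Signed difference of two 8-bit frames, remapped to 0..255 so a
    perfect alignment appears as uniform mid-gray (128)."""
    diff = live.astype(np.int16) - reference.astype(np.int16)
    return np.clip(diff // 2 + 128, 0, 255).astype(np.uint8)

def logical_and_view(a, b, threshold=128):
    """Binary AND of two thresholded frames, as used when one stored
    image serves as a mask or template for another."""
    return ((a >= threshold) & (b >= threshold)).astype(np.uint8) * 255
```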

  6. Designing and implementing transparency for real time inspection of autonomous robots

    Science.gov (United States)

    Theodorou, Andreas; Wortham, Robert H.; Bryson, Joanna J.

    2017-07-01

    The EPSRC's Principles of Robotics advises the implementation of transparency in robotic systems; however, research related to AI transparency is in its infancy. This paper introduces the reader to the importance of transparent inspection of intelligent agents and provides guidance for good practice when developing such agents. By considering and expanding upon other prominent definitions found in the literature, we provide a robust definition of transparency as a mechanism to expose the decision-making of a robot. The paper continues by addressing potential design decisions developers need to consider when designing and developing transparent systems. Finally, we describe our new interactive intelligence editor, designed to visualise, develop and debug real-time intelligence.

  7. Feasibility study of a hand guided robotic drill for cochleostomy.

    Science.gov (United States)

    Brett, Peter; Du, Xinli; Zoka-Assadi, Masoud; Coulson, Chris; Reid, Andrew; Proops, David

    2014-01-01

    The concept of a hand guided robotic drill has been inspired by an automated, arm supported robotic drill recently applied in clinical practice to produce cochleostomies without penetrating the endosteum ready for inserting cochlear electrodes. The smart tactile sensing scheme within the drill enables precise control of the state of interaction between tissues and tools in real-time. This paper reports development studies of the hand guided robotic drill where the same consistent outcomes, augmentation of surgeon control and skill, and similar reduction of induced disturbances on the hearing organ are achieved. The device operates with differing presentation of tissues resulting from variation in anatomy and demonstrates the ability to control or avoid penetration of tissue layers as required and to respond to intended rather than involuntary motion of the surgeon operator. The advantage of hand guided over an arm supported system is that it offers flexibility in adjusting the drilling trajectory. This can be important to initiate cutting on a hard convex tissue surface without slipping and then to proceed on the desired trajectory after cutting has commenced. The results for trials on phantoms show that drill unit compliance is an important factor in the design.

  8. [Exoskeleton robot system based on real-time gait analysis for walking assist].

    Science.gov (United States)

    Xie, Zheng; Wang, Mingjiang; Huang, Wulong; Yong, Shanshan; Wang, Xin'an

    2017-04-01

    This paper presents a wearable exoskeleton robot system that realizes a walking-assist function, oriented toward patients or the elderly with mild impairment of leg movement function due to illness or natural aging. It reduces the loads on the hip, knee, ankle and leg muscles during walking by way of weight support. In consideration of users' psychological demands and the characteristics of the disease, and unlike the weight-support systems of fixed or following rehabilitation robots, the structure of the proposed exoskeleton robot is artistic, lightweight and portable. The exoskeleton system analyzes the user's gait in real time via plantar pressure sensors to divide the gait into phases, and applies a different control strategy in each gait phase. Pressure sensors in the seat of the exoskeleton system provide real-time monitoring of the supporting force, and the drive control uses proportional-integral-derivative (PID) control for torque control. The total weight of the robot system is about 12.5 kg. The average auxiliary support is about 10 kg during standing and about 3 kg during walking. In experiments, the system showed a certain weight-support effect, reducing the pressure on the lower limbs during walking and standing.
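
    The PID torque control mentioned above follows the standard discrete-time form. The gains, time step, and torque limit below are placeholder values for illustration, not the exoskeleton's actual parameters:

```python
class PID:
    """Discrete PID controller producing a saturated torque command."""

    def __init__(self, kp, ki, kd, dt, torque_limit):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.limit = torque_limit
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        """One control step: error -> P + I + D terms -> clamped torque."""
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        torque = (self.kp * error
                  + self.ki * self.integral
                  + self.kd * derivative)
        # Saturate to the actuator's torque limit.
        return max(-self.limit, min(self.limit, torque))
```

    In a gait-phase-dependent scheme, one such controller (or one gain set) would be selected per phase detected from the plantar pressure sensors.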

  9. MR-based real time path planning for cardiac operations with transapical access.

    Science.gov (United States)

    Yeniaras, Erol; Navkar, Nikhil V; Sonmez, Ahmet E; Shah, Dipan J; Deng, Zhigang; Tsekos, Nikolaos V

    2011-01-01

    Minimally invasive surgeries (MIS) have been perpetually evolving due to their potentially high impact on patient management and overall cost effectiveness. Currently, MIS are further strengthened by the incorporation of magnetic resonance imaging (MRI) for improved visualization and high precision. Motivated by the fact that real-time MRI is emerging as a feasible modality, especially for guiding interventions and surgeries in the beating heart, in this paper we introduce a real-time path-planning algorithm for intracardiac procedures. Our approach creates a volumetric safety zone inside a beating heart and updates it on-the-fly using real-time MRI during the deployment of a robotic device. In order to prove the concept and assess the feasibility of the introduced method, a realistic operational scenario of transapical aortic valve replacement in a beating heart is chosen as the virtual case study.

  10. Novel System for Real-Time Integration of 3-D Echocardiography and Fluoroscopy for Image-Guided Cardiac Interventions: Preclinical Validation and Clinical Feasibility Evaluation

    Science.gov (United States)

    Housden, R. James; Ma, Yingliang; Rajani, Ronak; Gao, Gang; Nijhof, Niels; Cathier, Pascal; Bullens, Roland; Gijsbers, Geert; Parish, Victoria; Kapetanakis, Stamatis; Hancock, Jane; Rinaldi, C. Aldo; Cooklin, Michael; Gill, Jaswinder; Thomas, Martyn; O'Neill, Mark D.; Razavi, Reza; Rhode, Kawal S.

    2014-01-01

    Real-time imaging is required to guide minimally invasive catheter-based cardiac interventions. While transesophageal echocardiography allows for high-quality visualization of cardiac anatomy, X-ray fluoroscopy provides excellent visualization of devices. We have developed a novel image fusion system that allows real-time integration of 3-D echocardiography and the X-ray fluoroscopy. The system was validated in the following two stages: 1) preclinical to determine function and validate accuracy; and 2) in the clinical setting to assess clinical workflow feasibility and determine overall system accuracy. In the preclinical phase, the system was assessed using both phantom and porcine experimental studies. Median 2-D projection errors of 4.5 and 3.3 mm were found for the phantom and porcine studies, respectively. The clinical phase focused on extending the use of the system to interventions in patients undergoing either atrial fibrillation catheter ablation (CA) or transcatheter aortic valve implantation (TAVI). Eleven patients were studied with nine in the CA group and two in the TAVI group. Successful real-time view synchronization was achieved in all cases with a calculated median distance error of 2.2 mm in the CA group and 3.4 mm in the TAVI group. A standard clinical workflow was established using the image fusion system. These pilot data confirm the technical feasibility of accurate real-time echo-fluoroscopic image overlay in clinical practice, which may be a useful adjunct for real-time guidance during interventional cardiac procedures. PMID:27170872

  11. Modbus RTU protocol and arduino IO package: A real time implementation of a 3 finger adaptive robot gripper

    Directory of Open Access Journals (Sweden)

    Sadun Amirul Syafiq

    2017-01-01

    Full Text Available Recently, the Modbus RTU protocol has been widely accepted in robotics, communications and industrial control applications due to its simplicity and reliability. With the help of the MATLAB Instrument Control Toolbox, serial communication between Simulink and a 3 Finger Adaptive Robot Gripper can be realized to demonstrate grasping functionality. The toolbox includes “to instrument” and “query instrument” programming blocks that enable users to create serial communication with the targeted hardware/robot. Similarly, the Simulink Arduino IO package offers a real-time feature that enables it to act as a DAQ device. This paper establishes real-time robot control using Modbus RTU and the Arduino IO Package for a 3 Finger Adaptive Robot Gripper. The robot communication and grasping performance were successfully implemented and demonstrated. In particular, three (3) different grasping modes (normal, wide and pinch) were tested. Moreover, the robot gripper’s feedback data, such as encoder position, motor current and grasping force, were easily measured and acquired in real time. This is certainly essential for future grasping analysis of a 3 Finger Adaptive Robot Gripper.
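
    Every Modbus RTU frame ends with a CRC-16 checksum computed over the address, function code and data bytes. A minimal implementation of that checksum (independent of the MATLAB toolbox used in the paper) looks like:

```python
def modbus_crc16(frame: bytes) -> bytes:
    """CRC-16/MODBUS over an RTU frame, returned low byte first as
    required by the Modbus RTU wire format (polynomial 0xA001,
    initial value 0xFFFF)."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return bytes([crc & 0xFF, crc >> 8])
```

    A master builds a request as address + function + data, appends `modbus_crc16(...)`, and verifies the same checksum on every response before trusting the gripper's feedback data.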

  12. Supervised Remote Robot with Guided Autonomy and Teleoperation (SURROGATE): A Framework for Whole-Body Manipulation

    Science.gov (United States)

    Hebert, Paul; Ma, Jeremy; Borders, James; Aydemir, Alper; Bajracharya, Max; Hudson, Nicolas; Shankar, Krishna; Karumanchi, Sisir; Douillard, Bertrand; Burdick, Joel

    2015-01-01

    The use of the cognitive capabilities of humans to help guide the autonomy of robotic platforms, typically called "supervised autonomy", is becoming more commonplace in robotics research. The work discussed in this paper presents an approach to a human-in-the-loop mode of robot operation that integrates high-level human cognition and commanding with the intelligence and processing power of autonomous systems. Our framework for a "Supervised Remote Robot with Guided Autonomy and Teleoperation" (SURROGATE) is demonstrated on a robotic platform consisting of a pan-tilt perception head and two 7-DOF arms connected by a single 7-DOF torso, mounted on a tracked-wheel base. We present an architecture that allows high-level supervisory commands and intents to be specified by a user and then interpreted by the robotic system to perform whole-body manipulation tasks autonomously. We use a concept of "behaviors" to chain together sequences of "actions" for the robot to perform, which are then executed in real time.

  13. Real time magnetic resonance guided endomyocardial local delivery

    Science.gov (United States)

    Corti, R; Badimon, J; Mizsei, G; Macaluso, F; Lee, M; Licato, P; Viles-Gonzalez, J F; Fuster, V; Sherman, W

    2005-01-01

    Objective: To investigate the feasibility of targeting various areas of left ventricle myocardium under real time magnetic resonance (MR) imaging with a customised injection catheter equipped with a miniaturised coil. Design: A needle injection catheter with a mounted resonant solenoid circuit (coil) at its tip was designed and constructed. A 1.5 T MR scanner with customised real time sequence combined with in-room scan running capabilities was used. With this system, various myocardial areas within the left ventricle were targeted and injected with a gadolinium-diethylenetriaminepentaacetic acid (DTPA) and Indian ink mixture. Results: Real time sequencing at 10 frames/s allowed clear visualisation of the moving catheter and its transit through the aorta into the ventricle, as well as targeting of all ventricle wall segments without further image enhancement techniques. All injections were visualised by real time MR imaging and verified by gross pathology. Conclusion: The tracking device allowed real time in vivo visualisation of catheters in the aorta and left ventricle as well as precise targeting of myocardial areas. The use of this real time catheter tracking may enable precise and adequate delivery of agents for tissue regeneration. PMID:15710717

  14. Virtual Reality, 3D Stereo Visualization, and Applications in Robotics

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    2006-01-01

    , while little can be found about the advantages of stereoscopic visualization in mobile robot tele-guide applications. This work investigates stereoscopic robot tele-guide under different conditions, including typical navigation scenarios and the use of synthetic and real images. This work also...

  15. Prospective phase II study of image-guided local boost using a real-time tumor-tracking radiotherapy (RTRT) system for locally advanced bladder cancer

    International Nuclear Information System (INIS)

    Nishioka, Kentaro; Shimizu, Shinichi; Shinohara, Nobuo

    2014-01-01

    The real-time tumor-tracking radiotherapy system with fiducial markers has the advantage that it can be used to verify the localization of the markers during radiation delivery in real-time. We conducted a prospective Phase II study of image-guided local-boost radiotherapy for locally advanced bladder cancer using a real-time tumor-tracking radiotherapy system for positioning, and here we report the results regarding the safety and efficacy of the technique. Twenty patients with a T2-T4N0M0 urothelial carcinoma of the bladder who were clinically inoperable or refused surgery were enrolled. Transurethral tumor resection and 40 Gy irradiation to the whole bladder was followed by the transurethral endoscopic implantation of gold markers in the bladder wall around the primary tumor. A boost of 25 Gy in 10 fractions was made to the primary tumor while maintaining the displacement from the planned position at less than ±2 mm during radiation delivery using a real-time tumor-tracking radiotherapy system. The toxicity, local control and survival were evaluated. Among the 20 patients, 14 were treated with concurrent chemoradiotherapy. The median follow-up period was 55.5 months. Urethral and bowel late toxicity (Grade 3) were each observed in one patient. The local-control rate, overall survival and cause-specific survival with the native bladder after 5 years were 64, 61 and 65%. Image-guided local-boost radiotherapy using a real-time tumor-tracking radiotherapy system can be safely accomplished, and the clinical outcome is encouraging. A larger prospective multi-institutional study is warranted for more precise evaluations of the technological efficacy and patients' quality of life. (author)

  16. Robot-assisted 3D-TRUS guided prostate brachytherapy: System integration and validation

    International Nuclear Information System (INIS)

    Wei Zhouping; Wan Gang; Gardi, Lori; Mills, Gregory; Downey, Donal; Fenster, Aaron

    2004-01-01

    Current transperineal prostate brachytherapy uses transrectal ultrasound (TRUS) guidance and a template at a fixed position to guide needles along parallel trajectories. However, pubic arch interference (PAI) with the implant path obstructs part of the prostate from being targeted by the brachytherapy needles along parallel trajectories. To solve the PAI problem, some investigators have explored insertion trajectories other than parallel, i.e., oblique. However, parallel trajectory constraints in the current brachytherapy procedure do not allow oblique insertion. In this paper, we describe a robot-assisted, three-dimensional (3D) TRUS guided approach to solve this problem. Our prototype consists of a commercial robot, and a 3D TRUS imaging system including an ultrasound machine, image acquisition apparatus and 3D TRUS image reconstruction, and display software. In our approach, we use the robot as a movable needle guide, i.e., the robot positions the needle before insertion, but the physician inserts the needle into the patient's prostate. In a later phase of our work, we will include robot insertion. By unifying the robot, ultrasound transducer, and the 3D TRUS image coordinate systems, the position of the template hole can be accurately related to the 3D TRUS image coordinate system, allowing accurate and consistent insertion of the needle via the template hole into the targeted position in the prostate. The unification of the various coordinate systems includes two steps, i.e., 3D image calibration and robot calibration. Our testing of the system showed that the needle placement accuracy of the robot system at the 'patient's' skin position was 0.15 mm±0.06 mm, and the mean needle angulation error was 0.07 deg. The fiducial localization error (FLE) in localizing the intersections of the nylon strings for image calibration was 0.13 mm, and the FLE in localizing the divots for robot calibration was 0.37 mm. The fiducial registration error for image calibration was 0
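
    At its core, unifying robot and image coordinate systems through fiducials is a least-squares rigid point-set alignment. A generic sketch (not the authors' software) using the standard Kabsch/SVD solution, which also yields the fiducial registration error (FRE):

```python
import numpy as np

def rigid_register(A, B):
    """Least-squares rigid transform (R, t) mapping fiducial set A (Nx3)
    onto its measurements B in the other coordinate system, plus the
    root-mean-square fiducial registration error."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cb - R @ ca
    fre = np.sqrt(np.mean(np.sum((A @ R.T + t - B) ** 2, axis=1)))
    return R, t, fre
```

    Run once with the nylon-string intersections (image calibration) and once with the divots (robot calibration), the two resulting transforms chain the template hole position into the 3D TRUS image frame.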

  17. Real-time image processing of TOF range images using a reconfigurable processor system

    Science.gov (United States)

    Hussmann, S.; Knoll, F.; Edeler, T.

    2011-07-01

    In recent years, Time-of-Flight sensors have achieved a significant impact on research fields in machine vision. In comparison to stereo vision systems and laser range scanners, they combine the advantages of active sensors, providing accurate distance measurements, with those of camera-based systems, recording a 2D matrix at a high frame rate. Moreover, low-cost 3D imaging has the potential to open a wide field of additional applications and solutions in markets like consumer electronics, multimedia, digital photography, robotics and medical technologies. This paper focuses on the 4-phase-shift algorithm currently implemented in this type of sensor. The most time-critical operation of the phase-shift algorithm is the arctangent function. In this paper a novel hardware implementation of the arctangent function using a reconfigurable processor system is presented and benchmarked against the state-of-the-art CORDIC arctangent algorithm. Experimental results show that the proposed algorithm is well suited for real-time processing of the range images of TOF cameras.
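
    The 4-phase-shift evaluation reduces to an arctangent of differences of the four correlation samples, which is why that function dominates the per-pixel cost. A reference floating-point version (the non-CORDIC baseline, using one common sign convention; sign conventions vary between sensors) is:

```python
import math

def tof_distance(a0, a1, a2, a3, mod_freq_hz):
    """Distance from the four correlation samples of a continuous-wave
    TOF pixel, taken at 0, 90, 180 and 270 degrees of the modulation.

    phase = atan2(a3 - a1, a0 - a2);  d = c * phase / (4 * pi * f)
    """
    c = 299_792_458.0  # speed of light, m/s
    phase = math.atan2(a3 - a1, a0 - a2)
    if phase < 0.0:
        phase += 2.0 * math.pi  # unwrap into [0, 2*pi)
    return c * phase / (4.0 * math.pi * mod_freq_hz)
```

    A hardware arctangent (CORDIC or the paper's reconfigurable-processor variant) replaces the `atan2` call, since it must run once per pixel per frame.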

  18. Software Strategy for Robotic Transperineal Prostate Therapy in Closed-Bore MRI

    Science.gov (United States)

    Tokuda, Junichi; Fischer, Gregory S.; Csoma, Csaba; DiMaio, Simon P.; Gobbi, David G.; Fichtinger, Gabor; Tempany, Clare M.; Hata, Nobuhiko

    2009-01-01

    A software strategy to provide intuitive navigation for MRI-guided robotic transperineal prostate therapy is presented. In the system, the robot control unit, the MRI scanner, and open-source navigation software are connected to one another via Ethernet to exchange commands, coordinates, and images. Six states of the system called “workphases” are defined based on the clinical scenario to synchronize behaviors of all components. The wizard-style user interface allows easy following of the clinical workflow. On top of this framework, the software provides features for intuitive needle guidance: interactive target planning; 3D image visualization with current needle position; treatment monitoring through real-time MRI. These features are supported by calibration of robot and image coordinates by the fiducial-based registration. The performance test shows that the registration error of the system was 2.6 mm in the prostate area, and it displayed real-time 2D image 1.7 s after the completion of image acquisition. PMID:18982666

  19. Development of the robot system to assist CT-guided brain surgery

    International Nuclear Information System (INIS)

    Koyama, H.; Funakubo, H.; Komeda, T.; Uchida, T.; Takakura, K.

    1999-01-01

    Robot technology was introduced into stereotactic neurosurgery for application to biopsy, blind surgery, and functional neurosurgery. The authors have developed a newly designed robot system to assist CT-guided brain surgery, which allows a biopsy needle to reach a target, such as a cerebral tumor, within the brain automatically on the basis of the X, Y, and Z coordinates obtained by the CT scanner. In this paper we describe the construction of the robot, the control of the robot by CT images, robot simulation, and a phantom experiment using CT images. (author)

  20. Real-Time Visualization System for Deep-Sea Surveying

    Directory of Open Access Journals (Sweden)

    Yujie Li

    2014-01-01

    Full Text Available Remote robotic exploration holds vast potential for gaining knowledge about extreme environments that are difficult for humans to access. In the last two decades, various underwater devices were developed for detecting mines and mine-like objects in the deep-sea environment. However, recent equipment suffers from several problems: poor accuracy in mineral-object detection, lack of real-time processing, and low resolution of underwater video frames. Consequently, underwater object recognition is a difficult task, because the captured video frames are seriously distorted by the physical properties of the medium. In this paper, we consider using modern image processing methods to determine the mineral location and to recognize the mineral with little computational complexity. We first analyze recent underwater imaging models and propose a novel underwater optical imaging model that is much closer to the light propagation model in the underwater environment. In our imaging system, we remove electrical noise using the dual-tree complex wavelet transform. We then address the nonuniform illumination of artificial lights with a fast guided trilateral filter and recover the image color through automatic color equalization. Finally, a shape-based mineral recognition algorithm is proposed for underwater object detection. These methods are designed for real-time execution on limited-memory platforms. In our experience, this pipeline is suitable for detecting underwater objects in practice. The initial results are presented, and experiments demonstrate the effectiveness of the proposed real-time visualization system.
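
    Automatic color equalization itself is too involved for a short sketch, but the simpler gray-world correction illustrates the same goal of restoring color balance in water-distorted frames. This simplification is an illustrative stand-in, not the authors' algorithm:

```python
import numpy as np

def gray_world_balance(frame):
    """Gray-world color correction for an HxWx3 uint8 underwater frame.

    Scales each channel so its mean matches the global mean intensity,
    countering the strong blue-green cast of water.
    """
    f = frame.astype(np.float64)
    channel_means = f.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / np.maximum(channel_means, 1e-6)
    return np.clip(f * gain, 0, 255).astype(np.uint8)
```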

  1. Pneumatically Operated MRI-Compatible Needle Placement Robot for Prostate Interventions.

    Science.gov (United States)

    Fischer, Gregory S; Iordachita, Iulian; Csoma, Csaba; Tokuda, Junichi; Mewes, Philip W; Tempany, Clare M; Hata, Nobuhiko; Fichtinger, Gabor

    2008-06-13

    Magnetic Resonance Imaging (MRI) has potential to be a superior medical imaging modality for guiding and monitoring prostatic interventions. The strong magnetic field prevents the use of conventional mechatronics and the confined physical space makes it extremely challenging to access the patient. We have designed a robotic assistant system that overcomes these difficulties and promises safe and reliable intra-prostatic needle placement inside closed high-field MRI scanners. The robot performs needle insertion under real-time 3T MR image guidance; workspace requirements, MR compatibility, and workflow have been evaluated on phantoms. The paper explains the robot mechanism and controller design and presents results of preliminary evaluation of the system.

  2. Feasibility Study of a Hand Guided Robotic Drill for Cochleostomy

    Directory of Open Access Journals (Sweden)

    Peter Brett

    2014-01-01

    Full Text Available The concept of a hand guided robotic drill has been inspired by an automated, arm supported robotic drill recently applied in clinical practice to produce cochleostomies without penetrating the endosteum ready for inserting cochlear electrodes. The smart tactile sensing scheme within the drill enables precise control of the state of interaction between tissues and tools in real-time. This paper reports development studies of the hand guided robotic drill where the same consistent outcomes, augmentation of surgeon control and skill, and similar reduction of induced disturbances on the hearing organ are achieved. The device operates with differing presentation of tissues resulting from variation in anatomy and demonstrates the ability to control or avoid penetration of tissue layers as required and to respond to intended rather than involuntary motion of the surgeon operator. The advantage of hand guided over an arm supported system is that it offers flexibility in adjusting the drilling trajectory. This can be important to initiate cutting on a hard convex tissue surface without slipping and then to proceed on the desired trajectory after cutting has commenced. The results for trials on phantoms show that drill unit compliance is an important factor in the design.

  3. Real-Time Performance of Hybrid Mobile Robot Control Utilizing USB Protocol

    Directory of Open Access Journals (Sweden)

    Jacek Augustyn

    2015-02-01

    Full Text Available This article discusses the usability of the USB 2.0 protocol for real-time control of a mobile robot. Optimization methods for data transfer handling are proposed, and the impact of the optimizations on the entire system's performance was examined in practice. As a test-bed, a hybrid system composed of two devices communicating over a direct USB connection was implemented. The first device is a 32-bit SoC micro-system serving as the direct control unit, and the second is an off-the-shelf PDA providing supervisory control and logging. Owing to this design, the system meets real-time constraints and maintains a continuous data stream at high bandwidth. The real-time performance of the subsystems and of the entire system was experimentally examined under various operating conditions. From these experiments, the dependency of real-time limits on operational parameters has been determined.

  4. Evaluation of Real-time Measurement Liver Tumor's Movement and SynchronyTM System's Accuracy of Radiosurgery using a Robot CyberKnife

    International Nuclear Information System (INIS)

    Kim, Gha Jung; Shim, Su Jung; Kim, Jeong Ho; Min, Chul Kee; Chung, Weon Kuu

    2008-01-01

    This study aimed to quantitatively measure tumor movement in real time and to evaluate treatment accuracy in liver tumor patients who underwent radiosurgery with the Synchrony respiratory motion tracking system of the robotic CyberKnife. Materials and Methods: The study subjects included 24 liver tumor patients who underwent CyberKnife treatment, comprising 64 treatment sessions with the Synchrony respiratory motion tracking system (Synchrony™). The treatment involved inserting 4 to 6 acupuncture needles into the vicinity of the liver tumor in all patients under ultrasonographic guidance. A treatment plan was set up using the planning CT images. The positions of the acupuncture needles were identified at every treatment session from digitally reconstructed radiographs (DRRs) prepared at the time of treatment planning and X-ray images photographed in real time. The results were stored by the motion tracking system (MTS) in the Mtsmain.log treatment file, from which the movement of the tumor was measured. In addition, the accuracy of CyberKnife radiosurgery was evaluated from the correlation errors between the real-time positions of the acupuncture needles and the predicted coordinates. Results: The maximum and average translational movements of the liver tumor were 23.5 mm and 13.9±5.5 mm in the superior-inferior direction, 3.9 mm and 1.9±0.9 mm from left to right, and 8.3 mm and 4.9±1.9 mm in the anterior-posterior direction. The maximum and average rotational movements of the liver tumor were 3.3° and 2.6±1.3° for X (left-right) axis rotation, 4.8° and 2.3±1.0° for Y (cranio-caudal) axis rotation, and 3.9° and 2.8±1.1° for Z (anterior-posterior) axis rotation. In addition, the average correlation error, which represents the treatment's accuracy, was 1.1±0.7 mm. Conclusion

  5. Comparison of a GPS needle-tracking system, multiplanar imaging and 2D imaging for real-time ultrasound-guided epidural anaesthesia: A randomized, comparative, observer-blinded study on phantoms.

    Science.gov (United States)

    Menacé, Cécilia; Choquet, Olivier; Abbal, Bertrand; Bringuier, Sophie; Capdevila, Xavier

    2017-04-01

    The real-time ultrasound-guided paramedian sagittal oblique approach for neuraxial blockade is technically demanding. Innovative technologies have been developed to improve nerve identification and the accuracy of needle placement. The aim of this study was to evaluate three types of ultrasound scans during ultrasound-guided epidural lumbar punctures in a spine phantom. Eleven sets of 20 ultrasound-guided epidural punctures were performed with 2D, GPS, and multiplanar ultrasound machines (660 punctures) on a spine phantom using an in-plane approach. For all punctures, execution time, number of attempts, bone contacts, and needle redirections were noted by an independent physician. Operator comfort and visibility of the needle (tip and shaft) were measured using a numerical scale. The use of GPS significantly decreased the number of punctures, needle repositionings, and bone contacts. Comfort of the physician was also significantly improved with the GPS system compared with the 2D and multiplanar systems. With the multiplanar system, the procedure was not facilitated and execution time was longer compared with 2D imaging after Bonferroni correction but interaction between the type of ultrasound system and mean execution time was not significant in a linear mixed model. There were no significant differences regarding needle tip and shaft visibility between the systems. Multiplanar and GPS needle-tracking systems do not reduce execution time compared with 2D imaging using a real-time ultrasound-guided paramedian sagittal oblique approach in spine phantoms. The GPS needle-tracking system can improve performance in terms of operator comfort, the number of attempts, needle redirections and bone contacts. Copyright © 2016 Société française d'anesthésie et de réanimation (Sfar). Published by Elsevier Masson SAS. All rights reserved.

  6. Evolutionary online behaviour learning and adaptation in real robots.

    Science.gov (United States)

    Silva, Fernando; Correia, Luís; Christensen, Anders Lyhne

    2017-07-01

    Online evolution of behavioural control on real robots is an open-ended approach to autonomous learning and adaptation: robots have the potential to automatically learn new tasks and to adapt to changes in environmental conditions, or to failures in sensors and/or actuators. However, studies have so far almost exclusively been carried out in simulation because evolution in real hardware has required several days or weeks to produce capable robots. In this article, we successfully evolve neural network-based controllers in real robotic hardware to solve two single-robot tasks and one collective robotics task. Controllers are evolved either from random solutions or from solutions pre-evolved in simulation. In all cases, capable solutions are found in a timely manner (1 h or less). Results show that more accurate simulations may lead to higher-performing controllers, and that completing the optimization process in real robots is meaningful, even if solutions found in simulation differ from solutions in reality. We furthermore demonstrate for the first time the adaptive capabilities of online evolution in real robotic hardware, including robots able to overcome faults injected in the motors of multiple units simultaneously, and to modify their behaviour in response to changes in the task requirements. We conclude by assessing the contribution of each algorithmic component on the performance of the underlying evolutionary algorithm.
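    The online evolution loop described above can be sketched, in highly simplified form, as a (1+1) evolution strategy; the parameter encoding, the toy fitness function, and all parameter values below are illustrative assumptions, not the algorithm components used in the article:

    ```python
    import random

    def one_plus_one_es(fitness, dim=4, sigma=0.1, generations=200, seed=0):
        """Minimal (1+1) evolution strategy: mutate the current controller
        parameters and keep the offspring only if its measured fitness is
        at least as good as the parent's."""
        rng = random.Random(seed)
        parent = [rng.uniform(-1.0, 1.0) for _ in range(dim)]
        best = fitness(parent)
        for _ in range(generations):
            child = [p + rng.gauss(0.0, sigma) for p in parent]
            f = fitness(child)
            if f >= best:  # each evaluation would be a trial on the real robot
                parent, best = child, f
        return parent, best

    def toy_fitness(params):
        """Stand-in for an on-robot fitness measurement: negative squared
        distance from a hypothetical optimal parameter vector."""
        return -sum((p - 0.5) ** 2 for p in params)
    ```

    In the real setting, each call to the fitness function corresponds to a timed behavioural trial on the physical robot, which is why evaluation budget, not computation, dominates the runtime.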

  7. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    Directory of Open Access Journals (Sweden)

    Richard Chiou

    2010-06-01

    Full Text Available This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-Dimensional scheme of viewing the robotic laboratory has been introduced in addition to the remote controlling of the robots. The uniqueness of the project lies in making this process Internet-based, with the robot operated remotely and visualized in 3D. This 3D system approach provides the students with a more realistic feel of the 3D robotic laboratory even though they are working remotely. As a result, the 3D visualization technology has been tested as part of a laboratory in the MET 205 Robotics and Mechatronics class and has received positive feedback from most of the students. This type of research has introduced a new level of realism and visual communications to online laboratory learning in a remote classroom.

  8. Vision Guided Intelligent Robot Design And Experiments

    Science.gov (United States)

    Slutzky, G. D.; Hall, E. L.

    1988-02-01

    The concept of an intelligent robot is an important topic combining sensors, manipulators, and artificial intelligence to design a useful machine. Vision systems, tactile sensors, proximity switches and other sensors provide the elements necessary for simple game playing as well as industrial applications. These sensors permit adaption to a changing environment. The AI techniques permit advanced forms of decision making, adaptive responses, and learning, while the manipulator provides the ability to perform various tasks. Computer languages such as LISP and OPS5 have been utilized to achieve expert systems approaches in solving real world problems. The purpose of this paper is to describe several examples of visually guided intelligent robots including both stationary and mobile robots. Demonstrations will be presented of a system for constructing and solving a popular peg game, a robot lawn mower, and a box stacking robot. The experience gained from these and other systems provides insight into what may be realistically expected from the next generation of intelligent machines.

  9. A Concentric Tube Continuum Robot with Piezoelectric Actuation for MRI-Guided Closed-Loop Targeting

    Science.gov (United States)

    Su, Hao; Li, Gang; Rucker, D. Caleb; Webster, Robert J.; Fischer, Gregory S.

    2017-01-01

    This paper presents the design, modeling and experimental evaluation of a magnetic resonance imaging (MRI)-compatible concentric tube continuum robotic system. This system enables MRI-guided deployment of a precurved and steerable concentric tube continuum mechanism, and is suitable for clinical applications where a curved trajectory is needed. This compact 6 degree-of-freedom (DOF) robotic system is piezoelectrically actuated, and allows simultaneous robot motion and imaging with no visually observable image artifact. The targeting accuracy is evaluated with an optical tracking system and a gelatin phantom under live MRI guidance, with Root Mean Square (RMS) errors of 1.94 and 2.17 mm respectively. Furthermore, we demonstrate that the robot has kinematic redundancy to reach the same target through different paths. This was evaluated in both free space and MRI-guided gelatin phantom trials, with RMS errors of 0.48 and 0.59 mm respectively. As the first of its kind, MRI-guided targeted concentric tube needle placements with ex vivo porcine liver are demonstrated with 4.64 mm RMS error through closed-loop control of the piezoelectrically actuated robot. PMID:26983842
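    RMS targeting errors like those quoted above are typically computed from paired measured and planned tip positions; a minimal sketch (the point format and millimetre units are assumptions):

    ```python
    import math

    def rms_error(measured, planned):
        """RMS Euclidean distance between measured positions (e.g. needle
        tips) and the corresponding planned targets, all in millimetres."""
        dists_sq = [sum((m - p) ** 2 for m, p in zip(pt_m, pt_p))
                    for pt_m, pt_p in zip(measured, planned)]
        return math.sqrt(sum(dists_sq) / len(dists_sq))
    ```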

  10. Integration of multidisciplinary technologies for real time target visualization and verification for radiotherapy.

    Science.gov (United States)

    Chang, Wen-Chung; Chen, Chin-Sheng; Tai, Hung-Chi; Liu, Chia-Yuan; Chen, Yu-Jen

    2014-01-01

    The current practice of radiotherapy examines target coverage solely from the digitally reconstructed beam's eye view (BEV), in a way that is indirectly accessible and not in real time. We aimed to visualize treatment targets in real time from each BEV. Image data of phantoms or patients from ultrasound (US) and computed tomography (CT) scans were captured to perform image registration. We integrated US, CT, US/CT image registration, robotic manipulation of the US probe, a radiation treatment planning system, and a linear accelerator to constitute an innovative target visualization system. The algorithm segmented the target organ in CT images, transformed and reconstructed US images to match each orientation, and generated image registrations in real time with acceptable accuracy. This image transformation allowed physicians to visualize the CT image-reconstructed target via a US probe outside the BEV, non-coplanar to the beam's plane. It allowed physicians to remotely control the US probe, mounted on a robotic arm, to dynamically trace and monitor in real time the coverage of the target within the BEV during a simulated beam-on situation. This target visualization system may provide a direct, remotely accessible, real-time way to visualize, verify, and ensure tumor targeting during radiotherapy.
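    Fiducial-based rigid registration of the kind used to relate coordinate frames such as US and CT can be illustrated with a closed-form 2-D least-squares alignment; the 2-D restriction and the point pairs are simplifications for illustration (clinical US/CT registration is 3-D and often intensity-based):

    ```python
    import math

    def rigid_register_2d(src, dst):
        """Least-squares rigid (rotation + translation) alignment of paired
        2-D fiducial points: center both sets, recover the rotation angle
        from the cross/dot sums, then solve for the translation."""
        n = len(src)
        cx_s = sum(p[0] for p in src) / n
        cy_s = sum(p[1] for p in src) / n
        cx_d = sum(p[0] for p in dst) / n
        cy_d = sum(p[1] for p in dst) / n
        num = den = 0.0
        for (sx, sy), (dx, dy) in zip(src, dst):
            px, py = sx - cx_s, sy - cy_s
            qx, qy = dx - cx_d, dy - cy_d
            num += px * qy - py * qx  # cross terms -> sin component
            den += px * qx + py * qy  # dot terms   -> cos component
        theta = math.atan2(num, den)
        c, s = math.cos(theta), math.sin(theta)
        tx = cx_d - (c * cx_s - s * cy_s)
        ty = cy_d - (s * cx_s + c * cy_s)
        return theta, (tx, ty)
    ```

    For exact data the recovered angle and translation reproduce the transform that generated the target points; with noisy fiducials the same formulas give the least-squares fit.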

  11. Pneumatically Operated MRI-Compatible Needle Placement Robot for Prostate Interventions

    Science.gov (United States)

    Fischer, Gregory S.; Iordachita, Iulian; Csoma, Csaba; Tokuda, Junichi; Mewes, Philip W.; Tempany, Clare M.; Hata, Nobuhiko; Fichtinger, Gabor

    2011-01-01

    Magnetic Resonance Imaging (MRI) has potential to be a superior medical imaging modality for guiding and monitoring prostatic interventions. The strong magnetic field prevents the use of conventional mechatronics and the confined physical space makes it extremely challenging to access the patient. We have designed a robotic assistant system that overcomes these difficulties and promises safe and reliable intra-prostatic needle placement inside closed high-field MRI scanners. The robot performs needle insertion under real-time 3T MR image guidance; workspace requirements, MR compatibility, and workflow have been evaluated on phantoms. The paper explains the robot mechanism and controller design and presents results of preliminary evaluation of the system. PMID:21686038

  12. Integration of multidisciplinary technologies for real time target visualization and verification for radiotherapy

    Directory of Open Access Journals (Sweden)

    Chang WC

    2014-06-01

    Full Text Available Wen-Chung Chang,1,* Chin-Sheng Chen,2,* Hung-Chi Tai,3 Chia-Yuan Liu,4,5 Yu-Jen Chen3 1Department of Electrical Engineering, National Taipei University of Technology, Taipei, Taiwan; 2Graduate Institute of Automation Technology, National Taipei University of Technology, Taipei, Taiwan; 3Department of Radiation Oncology, Mackay Memorial Hospital, Taipei, Taiwan; 4Department of Internal Medicine, Mackay Memorial Hospital, Taipei, Taiwan; 5Department of Medicine, Mackay Medical College, New Taipei City, Taiwan  *These authors contributed equally to this work Abstract: The current practice of radiotherapy examines target coverage solely from the digitally reconstructed beam's eye view (BEV), in a way that is indirectly accessible and not in real time. We aimed to visualize treatment targets in real time from each BEV. Image data of phantoms or patients from ultrasound (US) and computed tomography (CT) scans were captured to perform image registration. We integrated US, CT, US/CT image registration, robotic manipulation of the US probe, a radiation treatment planning system, and a linear accelerator to constitute an innovative target visualization system. The algorithm segmented the target organ in CT images, transformed and reconstructed US images to match each orientation, and generated image registrations in real time with acceptable accuracy. This image transformation allowed physicians to visualize the CT image-reconstructed target via a US probe outside the BEV, non-coplanar to the beam's plane. It allowed physicians to remotely control the US probe, mounted on a robotic arm, to dynamically trace and monitor in real time the coverage of the target within the BEV during a simulated beam-on situation. This target visualization system may provide a direct, remotely accessible, real-time way to visualize, verify, and ensure tumor targeting during radiotherapy. Keywords: ultrasound, computerized tomography

  13. A robotic system for 18F-FMISO PET-guided intratumoral pO2 measurements.

    Science.gov (United States)

    Chang, Jenghwa; Wen, Bixiu; Kazanzides, Peter; Zanzonico, Pat; Finn, Ronald D; Fichtinger, Gabor; Ling, C Clifton

    2009-11-01

    An image-guided robotic system was used to measure the oxygen tension (pO2) in rodent tumor xenografts using interstitial probes guided by tumor hypoxia PET images. Rats with approximately 1 cm diameter tumors were anesthetized and immobilized in a custom-fabricated whole-body mold. Imaging was performed using a dedicated small-animal PET scanner (R4 or Focus 120 microPET) approximately 2 h after the injection of the hypoxia tracer 18F-fluoromisonidazole (18F-FMISO). The coordinate systems of the robot and PET were registered based on fiducial markers in the rodent bed visible on the PET images. Guided by the 3D microPET image set, measurements were performed at various locations in the tumor and compared to the corresponding 18F-FMISO image intensity at the respective measurement points. Experiments were performed on four tumor-bearing rats with 4 (86), 3 (80), 7 (162), and 8 (235) measurement tracks (points) per experiment. The 18F-FMISO image intensities were inversely correlated with the measured pO2, with a Pearson coefficient ranging from -0.14 to -0.97 for the 22 measurement tracks. The cumulative scatterplots of pO2 versus image intensity yielded a hyperbolic relationship, with correlation coefficients of 0.52, 0.48, 0.64, and 0.73, respectively, for the four tumors. In conclusion, PET image-guided pO2 measurement is feasible with this robotic system and, more generally, the system will permit point-by-point comparison of physiological probe measurements and image voxel values as a means of validating molecularly targeted radiotracers. Although the overall data fitting suggested that 18F-FMISO may be an effective hypoxia marker, the use of static 18F-FMISO PET postinjection scans to guide radiotherapy might be problematic due to the observed high variation of some individual data pairs from the fitted curve, indicating potential temporal fluctuation of oxygen tension in individual voxels or a possible suboptimal imaging time postadministration of the hypoxia tracer.
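    The per-track Pearson coefficients quoted above follow the standard definition; a minimal sketch (the intensity/pO2 pairs in the test are hypothetical):

    ```python
    import math

    def pearson_r(xs, ys):
        """Pearson correlation coefficient between tracer image intensities
        and pO2 probe readings along one measurement track."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)
    ```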

  14. Robotic System for MRI-Guided Stereotactic Neurosurgery

    Science.gov (United States)

    Li, Gang; Cole, Gregory A.; Shang, Weijian; Harrington, Kevin; Camilo, Alex; Pilitsis, Julie G.; Fischer, Gregory S.

    2015-01-01

    Stereotaxy is a neurosurgical technique that can take several hours to reach a specific target, typically utilizing a mechanical frame and guided by preoperative imaging. An error in any one of the numerous steps, or deviation of the target anatomy from the preoperative plan such as brain shift (up to 20 mm), may affect the targeting accuracy and thus the treatment effectiveness. Moreover, because the procedure is typically performed through a small burr hole opening in the skull that prevents tissue visualization, the intervention is essentially “blind” for the operator, with limited means of intraoperative confirmation, which may result in reduced accuracy and safety. The presented system is intended to address the clinical needs for enhanced efficiency, accuracy, and safety of image-guided stereotactic neurosurgery for Deep Brain Stimulation (DBS) lead placement. The work describes a magnetic resonance imaging (MRI)-guided, robotically actuated stereotactic neural intervention system for deep brain stimulation procedures, which offers the potential of reducing procedure duration while improving targeting accuracy and enhancing safety. This is achieved through simultaneous robotic manipulation of the instrument and interactively updated in situ MRI guidance that enables visualization of the anatomy and the interventional instrument. During simultaneous actuation and imaging, the system demonstrated less than 15% signal-to-noise ratio (SNR) variation and less than 0.20% geometric distortion artifact, without affecting the usability of the imaging to visualize and guide the procedure. Optical tracking and MRI phantom experiments streamline the clinical workflow of the prototype system, corroborating targeting accuracy with a 3-axis root mean square error of 1.38 ± 0.45 mm in tip position and 2.03 ± 0.58° in insertion angle. PMID:25376035

  15. External Prior Guided Internal Prior Learning for Real-World Noisy Image Denoising

    Science.gov (United States)

    Xu, Jun; Zhang, Lei; Zhang, David

    2018-06-01

    Most existing image denoising methods learn image priors from either external data or the noisy image itself to remove noise. However, priors learned from external data may not be adaptive to the image to be denoised, while priors learned from the given noisy image may not be accurate due to the interference of the corrupting noise. Meanwhile, the noise in real-world noisy images is very complex and hard to describe by simple distributions such as the Gaussian distribution, making real noisy image denoising a very challenging problem. We propose to exploit the information in both external data and the given noisy image, and develop an external prior guided internal prior learning method for real noisy image denoising. We first learn external priors from an independent set of clean natural images. With the aid of the learned external priors, we then learn internal priors from the given noisy image to refine the prior model. The external and internal priors are formulated as a set of orthogonal dictionaries to efficiently reconstruct the desired image. Extensive experiments are performed on several real noisy image datasets. The proposed method demonstrates highly competitive denoising performance, outperforming state-of-the-art denoising methods including those designed for real noisy images.
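    The appeal of orthogonal dictionaries — analysis, coefficient shrinkage, and exact reconstruction are all cheap — can be illustrated in one dimension with a fixed single-level Haar dictionary; the learned external/internal dictionaries of the paper are replaced here by this fixed basis purely for illustration:

    ```python
    import math

    def haar_threshold_denoise(signal, thresh):
        """Denoise an even-length signal by hard-thresholding its detail
        coefficients in an orthonormal single-level Haar dictionary: small
        coefficients are treated as noise and set to zero."""
        s = 1.0 / math.sqrt(2.0)
        approx = [s * (a + b) for a, b in zip(signal[::2], signal[1::2])]
        detail = [s * (a - b) for a, b in zip(signal[::2], signal[1::2])]
        detail = [d if abs(d) > thresh else 0.0 for d in detail]  # hard threshold
        out = []
        for a, d in zip(approx, detail):  # exact inverse of the orthonormal transform
            out.extend([s * (a + d), s * (a - d)])
        return out
    ```

    Because the dictionary is orthonormal, the inverse transform is just its transpose, so reconstruction from the shrunken coefficients is direct; the paper exploits the same property with learned orthogonal dictionaries.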

  16. Real-time geometric scene estimation for RGBD images using a 3D box shape grammar

    Science.gov (United States)

    Willis, Andrew R.; Brink, Kevin M.

    2016-06-01

    This article describes a novel real-time algorithm for extracting box-like structures from RGBD image data. In contrast to conventional approaches, the proposed algorithm has two novel attributes: (1) it divides the geometric estimation procedure into subroutines having atomic incremental computational costs, and (2) it uses a generative "Block World" perceptual model that infers both concave and convex box elements from detection of primitive box substructures. The end result is an efficient geometry processing engine suitable for use in real-time embedded systems such as those on UAVs, where it is intended to be an integral component for robotic navigation and mapping applications.

  17. Robotics in pediatric surgery: perspectives for imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kant, Adrien J.; Klein, Michael D. [Stuart Frankel Foundation Computer-Assisted Robot-Enhanced Surgery Program, Children' s Research Center of Michigan, Detroit, MI 48201 (United States); Langenburg, Scott E. [Stuart Frankel Foundation Computer-Assisted Robot-Enhanced Surgery Program, Children' s Research Center of Michigan, Detroit, MI 48201 (United States); Department of Pediatric Surgery, Children' s Hospital of Michigan, 3901 Beaubien, Detroit, MI 48201 (United States)

    2004-06-01

    Robotic surgery will give surgeons the ability to perform essentially tremorless microsurgery in tiny spaces with delicate precision and may enable procedures never before possible on children, neonates, and fetuses. Collaboration with radiologists, engineers, and other scientists will permit refinement of image-guided technologies and allow the realization of truly remarkable concepts in minimally invasive surgery. While robotic surgery is now in clinical use in several surgical specialties (heart bypass, prostate removal, and various gastrointestinal procedures), the greatest promise of robotics lies in pediatric surgery. We will briefly review the history and background of robotic technology in surgery, discuss its present benefits and uses and those being explored, and speculate on the future, with attention to the current and potential involvement of imaging modalities and the role of image guidance. (orig.)

  18. Robotics in pediatric surgery: perspectives for imaging

    International Nuclear Information System (INIS)

    Kant, Adrien J.; Klein, Michael D.; Langenburg, Scott E.

    2004-01-01

    Robotic surgery will give surgeons the ability to perform essentially tremorless microsurgery in tiny spaces with delicate precision and may enable procedures never before possible on children, neonates, and fetuses. Collaboration with radiologists, engineers, and other scientists will permit refinement of image-guided technologies and allow the realization of truly remarkable concepts in minimally invasive surgery. While robotic surgery is now in clinical use in several surgical specialties (heart bypass, prostate removal, and various gastrointestinal procedures), the greatest promise of robotics lies in pediatric surgery. We will briefly review the history and background of robotic technology in surgery, discuss its present benefits and uses and those being explored, and speculate on the future, with attention to the current and potential involvement of imaging modalities and the role of image guidance. (orig.)

  19. Floor Covering and Surface Identification for Assistive Mobile Robotic Real-Time Room Localization Application

    Directory of Open Access Journals (Sweden)

    Michael Gillham

    2013-12-01

    Full Text Available Assistive robotic applications require systems capable of interaction in the human world, a workspace which is highly dynamic and not always predictable. Mobile assistive devices face the additional and complex problem of when and whether intervention should occur; therefore, before any trajectory assistance is given, the robotic device must know where it is in real time, without unnecessary disruption or delay to the user. In this paper, we demonstrate a novel, robust method for identifying rooms from floor features within a real-time computational frame for autonomous and assistive robotics in the human environment. We utilize two inexpensive sensors: an optical mouse sensor for straightforward and rapid texture or pattern sampling, and a four-color photodiode light sensor for fast color determination. We show that data relating floor texture and color, obtained from typical dynamic human environments using these two sensors, compare favorably with data obtained from a standard webcam. We show that suitable data can be extracted from these two sensors at a rate 16 times faster than from a standard webcam, and that these data are in a form which can be rapidly processed using readily available classification techniques, suitable for real-time system application. We achieved 95% correct classification accuracy identifying the flooring of 133 rooms from 35 classes, suitable for fast coarse global room localization, boundary-crossing detection, and, additionally, some degree of surface type identification.
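    The classification step can be as simple as a nearest-centroid rule over the sampled colour/texture features; the feature vectors and class names below are hypothetical, since the paper only describes its classifiers as readily available techniques:

    ```python
    def nearest_class(features, centroids):
        """Nearest-centroid classification of a floor sample described by a
        small feature vector (hypothetical colour-channel ratios here):
        return the label whose centroid is closest in squared distance."""
        def dist2(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        return min(centroids, key=lambda label: dist2(features, centroids[label]))
    ```

    With only a handful of scalar features per sample, this evaluation is trivially cheap, which is consistent with the real-time budget the paper targets.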

  20. Development of a Pneumatic Robot for MRI-guided Transperineal Prostate Biopsy and Brachytherapy: New Approaches

    Science.gov (United States)

    Song, Sang-Eun; Cho, Nathan B.; Fischer, Gregory; Hata, Nobuhito; Tempany, Clare; Fichtinger, Gabor; Iordachita, Iulian

    2011-01-01

    Magnetic Resonance Imaging (MRI)-guided prostate biopsy and brachytherapy have been introduced to enhance cancer detection and treatment. For accurate needle positioning, a number of robotic assistants have been developed. However, problems exist due to the strong magnetic field and the limited workspace. Pneumatically actuated robots have shown minimal interference with the MRI environment, but the confined workspace limits optimal robot design and thus controllability is often poor. To overcome this problem, a simple external damping mechanism using timing belts was sought, and a 1-DOF mechanism test indicated sufficient positioning accuracy. Based on the damping mechanism and a modular system design approach, a new workspace-optimized 4-DOF parallel robot was developed for MRI-guided prostate biopsy and brachytherapy. A preliminary evaluation of the robot was conducted using a previously developed pneumatic controller, and satisfactory results were obtained. PMID:21399734

  1. Piezoelectrically Actuated Robotic System for MRI-Guided Prostate Percutaneous Therapy

    OpenAIRE

    Su, Hao; Shang, Weijian; Cole, Gregory; Li, Gang; Harrington, Kevin; Camilo, Alexander; Tokuda, Junichi; Tempany, Clare M.; Hata, Nobuhiko; Fischer, Gregory S.

    2014-01-01

    This paper presents a fully-actuated robotic system for percutaneous prostate therapy under continuously acquired live magnetic resonance imaging (MRI) guidance. The system is composed of modular hardware and software to support the surgical workflow of intra-operative MRI-guided surgical procedures. We present the development of a 6-degree-of-freedom (DOF) needle placement robot for transperineal prostate interventions. The robot consists of a 3-DOF needle driver module and a 3-DOF Cartesian...

  2. 2D-3D radiograph to cone-beam computed tomography (CBCT) registration for C-arm image-guided robotic surgery.

    Science.gov (United States)

    Liu, Wen Pei; Otake, Yoshito; Azizian, Mahdi; Wagner, Oliver J; Sorger, Jonathan M; Armand, Mehran; Taylor, Russell H

    2015-08-01

    C-arm radiographs are commonly used for intraoperative image guidance in surgical interventions. Fluoroscopy is a cost-effective real-time modality, although image quality can vary greatly depending on the target anatomy. Cone-beam computed tomography (CBCT) scans are sometimes available, so 2D-3D registration is needed for intra-procedural guidance. C-arm radiographs were registered to CBCT scans and used for 3D localization of peritumor fiducials during a minimally invasive thoracic intervention with a da Vinci Si robot. Intensity-based 2D-3D registration of intraoperative radiographs to CBCT was performed. The feasible range of X-ray projections achievable by a C-arm positioned around a da Vinci Si surgical robot, configured for robotic wedge resection, was determined using phantom models. Experiments were conducted on synthetic phantoms and animals imaged with an OEC 9600 and a Siemens Artis zeego, representing the spectrum of different C-arm systems currently available for clinical use. The image guidance workflow was feasible using either an optically tracked OEC 9600 or a Siemens Artis zeego C-arm, resulting in an angular range of Δθ ≈ 30°. The two C-arm systems provided a mean TRE ≤ 2.5 mm and ≤ 2.0 mm, respectively (i.e., comparable to standard clinical intraoperative navigation systems). C-arm 3D localization from dual 2D-3D registered radiographs was feasible and applicable for intraoperative image guidance during da Vinci robotic thoracic interventions using the proposed workflow. Tissue deformation and in vivo experiments are required before clinical evaluation of this system.

  3. Evaluation of Real-time Measurement Liver Tumor's Movement and Synchrony™ System's Accuracy of Radiosurgery using a Robot CyberKnife

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Gha Jung; Shim, Su Jung; Kim, Jeong Ho; Min, Chul Kee; Chung, Weon Kuu [Konyang University College of Medicine, Daejeon (Korea, Republic of)

    2008-12-15

    This study aimed to quantitatively measure the movement of tumors in real time and evaluate the treatment accuracy during the treatment of liver tumor patients who underwent radiosurgery with the Synchrony™ respiratory motion tracking system of a robotic CyberKnife. Materials and Methods: The study subjects included 24 liver tumor patients who underwent CyberKnife treatment, comprising 64 treatment sessions with the Synchrony™ respiratory motion tracking system. The treatment involved inserting 4 to 6 acupuncture needles into the vicinity of the liver tumor in all patients using ultrasonography as a guide. A treatment plan was set up using the CT images acquired for treatment planning. The position of the acupuncture needles was identified at every treatment session from Digitally Reconstructed Radiographs (DRRs) prepared at the time of treatment planning and X-ray images photographed in real time. The results were stored by the Motion Tracking System (MTS) in the Mtsmain.log treatment file, from which the movement of the tumor was measured. In addition, the accuracy of radiosurgery using the CyberKnife was evaluated from the correlation errors between the real-time positions of the acupuncture needles and the predicted coordinates. Results: The maximum and average translational movements of the liver tumor were 23.5 mm and 13.9±5.5 mm in the superior-inferior direction, 3.9 mm and 1.9±0.9 mm in the left-right direction, and 8.3 mm and 4.9±1.9 mm in the anterior-posterior direction, respectively. The maximum and average rotational movements of the liver tumor were 3.3° and 2.6±1.3° for X (left-right) axis rotation, 4.8° and 2.3±1.0° for Y (cranio-caudal) axis rotation, and 3.9° and 2.8±1.1° for Z (anterior-posterior) axis rotation, respectively. In addition, the average correlation error, which represents the treatment's accuracy, was 1

  4. Experientially guided robots. [for planet exploration]

    Science.gov (United States)

    Merriam, E. W.; Becker, J. D.

    1974-01-01

    This paper argues that an experientially guided robot is necessary to successfully explore far-away planets. Such a robot is characterized as having sense organs which receive sensory information from its environment and motor systems which allow it to interact with that environment. The sensori-motor information which it receives is organized into an experiential knowledge structure, and this knowledge in turn is used to guide the robot's future actions. A summary is presented of a problem solving system which is being used as a test bed for developing such a robot. The robot currently engages in the behaviors of visual tracking, focusing down, and looking around in a simulated Martian landscape. Finally, some unsolved problems are outlined whose solutions are necessary before an experientially guided robot can be produced. These problems center around organizing the motivational and memory structure of the robot and understanding its high-level control mechanisms.

  5. Robotic Radiosurgery. Treating prostate cancer and related genitourinary applications

    International Nuclear Information System (INIS)

    Ponsky, Lee E.

    2012-01-01

    Prostate cancer is the most common cancer among North American and European men, but its treatment continues to be problematic owing to serious side-effects, including erectile dysfunction, urinary incontinence, and potential lower GI complications. Robotic radiosurgery offers a novel, rapid, non-invasive outpatient treatment option for prostate cancer that combines robotics, advanced image-guided motion detection, and automated real-time corrective spatial positioning with submillimeter precision. This book examines all aspects of the treatment of prostate cancer with robotic radiosurgery. After introductory sections on radiosurgery as a multidisciplinary practice and specific issues relating to prostate cancer, the important challenge posed by prostate motion when administering radiation therapy is examined in depth, with detailed discussion as to how image-guided robotic radiosurgery overcomes this problem by continuously identifying the precise location of the prostate throughout the course of treatment. A further major section is devoted to a discussion of techniques and potential radiobiological and clinical advantages of hypofractionated radiation delivery by means of robotic radiosurgery systems. The book closes by discussing other emerging genitourinary applications of robotic radiosurgery. All of the authors are experts in their field who present a persuasive case for this fascinating technique. (orig.)

  6. Robotic Radiosurgery. Treating prostate cancer and related genitourinary applications

    Energy Technology Data Exchange (ETDEWEB)

    Ponsky, Lee E. (ed.) [Case Western Reserve University School of Medicine, Cleveland, OH (United States). University Hospitals Case Medical Center

    2012-07-01

    Prostate cancer is the most common cancer among North American and European men, but its treatment continues to be problematic owing to serious side-effects, including erectile dysfunction, urinary incontinence, and potential lower GI complications. Robotic radiosurgery offers a novel, rapid, non-invasive outpatient treatment option for prostate cancer that combines robotics, advanced image-guided motion detection, and automated real-time corrective spatial positioning with submillimeter precision. This book examines all aspects of the treatment of prostate cancer with robotic radiosurgery. After introductory sections on radiosurgery as a multidisciplinary practice and specific issues relating to prostate cancer, the important challenge posed by prostate motion when administering radiation therapy is examined in depth, with detailed discussion as to how image-guided robotic radiosurgery overcomes this problem by continuously identifying the precise location of the prostate throughout the course of treatment. A further major section is devoted to a discussion of techniques and potential radiobiological and clinical advantages of hypofractionated radiation delivery by means of robotic radiosurgery systems. The book closes by discussing other emerging genitourinary applications of robotic radiosurgery. All of the authors are experts in their field who present a persuasive case for this fascinating technique. (orig.)

  7. A CORBA-Based Control Architecture for Real-Time Teleoperation Tasks in a Developmental Humanoid Robot

    Directory of Open Access Journals (Sweden)

    Hanafiah Yussof

    2011-06-01

    Full Text Available This paper presents the development of a new Humanoid Robot Control Architecture (HRCA) platform based on the Common Object Request Broker Architecture (CORBA) in a developmental biped humanoid robot for real-time teleoperation tasks. The objective is to make the control platform open for collaborative teleoperation research in humanoid robotics via the internet. To generate optimal trajectories for bipedal walking, we proposed real-time generation of an optimal gait using Genetic Algorithms (GA) to minimize the energy of the humanoid robot gait. In addition, we proposed simplified kinematical solutions to generate controlled trajectories of the humanoid robot legs in teleoperation tasks. The proposed control systems and strategies were evaluated in teleoperation experiments between Australia and Japan using the humanoid robot Bonten-Maru. Additionally, we developed a user-friendly Virtual Reality (VR) user interface composed of an ultrasonic 3D mouse system and a Head Mounted Display (HMD) for working coexistence of human and humanoid robot in teleoperation tasks. The teleoperation experiments show good performance of the proposed system and control, and also verify the good performance for working coexistence of human and humanoid robot.
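    The GA-based gait energy minimization mentioned above can be sketched in a few lines. The energy model, population size, and operator choices below are illustrative assumptions for a generic genetic algorithm, not the Bonten-Maru implementation:

```python
import random

def gait_energy(params):
    # Toy stand-in for the gait energy cost; the real cost would come from
    # the robot's dynamics. Hypothetical optimum at 0.5 for every parameter.
    return sum((p - 0.5) ** 2 for p in params)

def ga_minimize(fitness, dim=4, pop_size=20, generations=50, seed=1):
    rng = random.Random(seed)
    # Initial population: random gait parameter vectors in [0, 1).
    pop = [[rng.random() for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 2]          # selection: keep the best half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, dim)       # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(dim)            # Gaussian mutation, clamped
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0, 0.1)))
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

best = ga_minimize(gait_energy)
```

    Elitism (carrying the best half forward unchanged) guarantees the best fitness never worsens between generations, which is what makes a fixed generation budget usable in a real-time setting.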

  8. Automatic multimodal real-time tracking for image plane alignment in interventional Magnetic Resonance Imaging

    International Nuclear Information System (INIS)

    Neumann, Markus

    2014-01-01

    Interventional magnetic resonance imaging (MRI) aims at performing minimally invasive percutaneous interventions, such as tumor ablations and biopsies, under MRI guidance. During such interventions, the acquired MR image planes are typically aligned to the surgical instrument (needle) axis and to surrounding anatomical structures of interest in order to efficiently monitor, in real time, the advancement of the instrument inside the patient's body. Object tracking inside the MRI is expected to facilitate and accelerate MR-guided interventions by automatically aligning the image planes to the surgical instrument. In this PhD thesis, an image-based workflow is proposed and refined for automatic image plane alignment. An automatic tracking workflow was developed, performing detection and tracking of a passive marker directly in clinical real-time images. This tracking workflow is designed for fully automated image plane alignment, with minimization of tracking-dedicated time. Its main drawback is its inherent dependence on the slow clinical MRI update rate. First, the addition of motion estimation and prediction with a Kalman filter was investigated and improved the workflow's tracking performance. Second, a complementary optical sensor was used for multi-sensor tracking in order to decouple the tracking update rate from the MR image acquisition rate. Performance of the workflow was evaluated with both computer simulations and experiments using an MR-compatible test bed. Results show a high robustness of the multi-sensor tracking approach for dynamic image plane alignment, due to the combination of the individual strengths of each sensor. (author)
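    The Kalman-filter prediction step referred to above can be illustrated with a minimal 1-D constant-velocity filter tracking a marker position between slow image updates. The update period and noise covariances here are illustrative assumptions, not the thesis's values:

```python
import numpy as np

dt = 0.5                                  # assumed slow MR image update period (s)
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity state transition (pos, vel)
H = np.array([[1.0, 0.0]])                # only position is measured
Q = 1e-3 * np.eye(2)                      # process noise covariance (illustrative)
R = np.array([[1e-2]])                    # measurement noise covariance (illustrative)

def kalman_step(x, P, z):
    # Predict the marker state one frame ahead, then correct with measurement z.
    x = F @ x
    P = F @ P @ F.T + Q
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track a marker moving at a constant 2 mm/s (noiseless measurements for clarity).
x = np.array([0.0, 0.0])
P = np.eye(2)
for k in range(1, 21):
    z = np.array([2.0 * k * dt])
    x, P = kalman_step(x, P, z)
```

    Between measurements, the predict step alone (`x = F @ x`) gives the extrapolated marker pose used to realign the image plane, which is what decouples plane alignment from the slow acquisition rate.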

  9. [Fluoroscopy dose reduction of computed tomography guided chest interventional radiology using real-time iterative reconstruction].

    Science.gov (United States)

    Hasegawa, Hiroaki; Mihara, Yoshiyuki; Ino, Kenji; Sato, Jiro

    2014-11-01

    The purpose of this study was to evaluate the radiation dose reduction for patients and radiologists in computed tomography (CT) guided examinations of the thoracic region using CT fluoroscopy. Image quality of the real-time filtered back-projection (RT-FBP) images and the real-time adaptive iterative dose reduction (RT-AIDR) images was evaluated with respect to the noise and artifacts considered to affect CT fluoroscopy. The image standard deviation was improved for fluoroscopy settings below 30 mA at 120 kV. With regard to the visibility and amount of artifact generated by the needle attached to the chest phantom, there was no significant difference between the RT-FBP images at 120 kV, 20 mA and the RT-AIDR images at low-dose conditions (greater than 80 kV, 30 mA and less than 120 kV, 20 mA). The results suggest that it is possible to reduce the radiation dose by approximately 34% at the maximum using RT-AIDR while maintaining image quality equivalent to the RT-FBP images at 120 kV, 20 mA.

  10. Towards All-optical Light Robotics

    DEFF Research Database (Denmark)

    Glückstad, Jesper

    In the Programmable Phase Optics (PPO) group at DTU Fotonik we pioneered the new and emerging research area of so-called Light Robotics, including the new and disruptive 3D-printed micro-tools coined Wave-guided Optical Waveguides that can be real-time optically manipulated and "remote-controlled" in a volume with six degrees of freedom. Exploring the full potential of this new drone-like 3D light robotics approach in challenging microscopic geometries requires a versatile and real-time reconfigurable light coupling that can dynamically track a plurality of "light robots" in 3D to ensure...

  11. A Hierarchical Auction-Based Mechanism for Real-Time Resource Allocation in Cloud Robotic Systems.

    Science.gov (United States)

    Wang, Lujia; Liu, Ming; Meng, Max Q-H

    2017-02-01

    Cloud computing enables users to share computing resources on-demand. The cloud computing framework cannot be directly mapped to cloud robotic systems with ad hoc networks since cloud robotic systems have additional constraints such as limited bandwidth and dynamic structure. However, most multirobotic applications with cooperative control adopt this decentralized approach to avoid a single point of failure. Robots need to continuously update intensive data to execute tasks in a coordinated manner, which implies real-time requirements. Thus, a resource allocation strategy is required, especially in such resource-constrained environments. This paper proposes a hierarchical auction-based mechanism, namely link quality matrix (LQM) auction, which is suitable for ad hoc networks by introducing a link quality indicator. The proposed algorithm produces a fast and robust method that is accurate and scalable. It reduces both global communication and unnecessary repeated computation. The proposed method is designed for firm real-time resource retrieval for physical multirobot systems. A joint surveillance scenario empirically validates the proposed mechanism by assessing several practical metrics. The results show that the proposed LQM auction outperforms state-of-the-art algorithms for resource allocation.
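    The flavor of a link-quality-weighted auction round can be conveyed with a toy allocator. This is a deliberately simplified, hypothetical sketch (a single scalar link quality per robot, one task per robot), not the LQM protocol from the paper:

```python
def lqm_auction(tasks, robots, utility, link_quality):
    # Greedy sequential auction: each robot's bid is its task utility scaled
    # by its link quality, so well-connected robots are favored for resources.
    allocation = {}
    free = set(robots)
    for t in tasks:
        bids = {r: utility[r][t] * link_quality[r] for r in free}
        if not bids:
            break
        winner = max(bids, key=bids.get)   # highest effective bid wins
        allocation[t] = winner
        free.remove(winner)                # at most one task per robot
    return allocation

# Hypothetical two-robot, two-task scenario.
utility = {"r1": {"t1": 5, "t2": 1}, "r2": {"t1": 4, "t2": 6}}
link_quality = {"r1": 0.9, "r2": 0.5}
alloc = lqm_auction(["t1", "t2"], ["r1", "r2"], utility, link_quality)
```

    Note that r2's higher raw utility for t1 loses to r1 once the bids are discounted by link quality, which is the core idea of weighting allocation by an ad hoc network's link conditions.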

  12. Autofluorescence lifetime imaging during transoral robotic surgery: a clinical validation study of tumor detection (Conference Presentation)

    Science.gov (United States)

    Lagarto, João L.; Phipps, Jennifer E.; Unger, Jakob; Faller, Leta M.; Gorpas, Dimitris; Ma, Dinglong M.; Bec, Julien; Moore, Michael G.; Bewley, Arnaud F.; Yankelevich, Diego R.; Sorger, Jonathan M.; Farwell, Gregory D.; Marcu, Laura

    2017-02-01

    Autofluorescence lifetime spectroscopy is a promising non-invasive label-free tool for characterization of biological tissues and shows potential to report structural and biochemical alterations in tissue owing to pathological transformations. In particular, when combined with fiber-optic based instruments, autofluorescence lifetime measurements can enhance intraoperative diagnosis and provide guidance in surgical procedures. We investigate the potential of a fiber-optic based multi-spectral time-resolved fluorescence spectroscopy instrument to characterize the autofluorescence fingerprint associated with histologic, morphologic and metabolic changes in tissue that can provide real-time contrast between healthy and tumor regions in vivo and guide clinicians during resection of diseased areas during transoral robotic surgery. To provide immediate feedback to the surgeons, we employ tracking of an aiming beam that co-registers our point measurements with the robot camera images and allows visualization of the surgical area augmented with autofluorescence lifetime data in the surgeon's console in real-time. For each patient, autofluorescence lifetime measurements were acquired from normal, diseased and surgically altered tissue, both in vivo (pre- and post-resection) and ex vivo. Initial results indicate tumor and normal regions can be distinguished based on changes in lifetime parameters measured in vivo, when the tumor is located superficially. In particular, results show that the autofluorescence lifetime of tumor is shorter than that of normal tissue (p …) … robot-assisted cancer removal interventions.

  13. Towards real-time cardiovascular magnetic resonance-guided transarterial aortic valve implantation: In vitro evaluation and modification of existing devices

    Directory of Open Access Journals (Sweden)

    Ladd Mark E

    2010-10-01

    Full Text Available Abstract Background Cardiovascular magnetic resonance (CMR) is considered an attractive alternative for guiding transarterial aortic valve implantation (TAVI), featuring unlimited scan plane orientation and unsurpassed soft-tissue contrast with simultaneous device visualization. We sought to evaluate the CMR characteristics of both currently commercially available transcatheter heart valves (Edwards SAPIEN™, Medtronic CoreValve®), including their dedicated delivery devices, and of a custom-built, CMR-compatible delivery device for the Medtronic CoreValve® prosthesis as an initial step towards real-time CMR-guided TAVI. Methods The devices were systematically examined in phantom models on a 1.5-Tesla scanner using high-resolution T1-weighted 3D FLASH, real-time TrueFISP and flow-sensitive phase-contrast sequences. Images were analyzed for device visualization quality, device-related susceptibility artifacts, and radiofrequency signal shielding. Results CMR revealed major susceptibility artifacts for the two commercial delivery devices caused by considerable metal braiding and precluding in vivo application. The stainless steel-based Edwards SAPIEN™ prosthesis was also regarded as not suitable for CMR-guided TAVI due to susceptibility artifacts exceeding the valve's dimensions and hindering an exact placement. In contrast, the nitinol-based Medtronic CoreValve® prosthesis was excellently visualized with delineation even of small details and, thus, regarded suitable for CMR-guided TAVI, particularly since reengineering of its delivery device toward CMR-compatibility resulted in artifact elimination and excellent visualization during catheter movement and valve deployment on real-time TrueFISP imaging. Reliable flow measurements could be performed for both stent-valves after deployment using phase-contrast sequences. Conclusions The present study shows that the Medtronic CoreValve® prosthesis is potentially suited for real-time CMR-guided placement …

  14. A Concentric Tube Continuum Robot with Piezoelectric Actuation for MRI-Guided Closed-Loop Targeting

    OpenAIRE

    Su, Hao; Li, Gang; Rucker, D. Caleb; Webster, Robert J.; Fischer, Gregory S.

    2016-01-01

    This paper presents the design, modeling and experimental evaluation of a magnetic resonance imaging (MRI)-compatible concentric tube continuum robotic system. This system enables MRI-guided deployment of a precurved and steerable concentric tube continuum mechanism, and is suitable for clinical applications where a curved trajectory is needed. This compact 6 degree-of-freedom (DOF) robotic system is piezoelectrically-actuated, and allows simultaneous robot motion and imaging with no visually...

  15. Multi-robot team design for real-world applications

    Energy Technology Data Exchange (ETDEWEB)

    Parker, L.E.

    1996-10-01

    Many of these applications are in dynamic environments requiring capabilities distributed in functionality, space, or time, and therefore often require teams of robots to work together. While much research has been done in recent years, current robotics technology is still far from achieving many of the real-world applications. Two primary reasons for this technology gap are that (1) previous work has not adequately addressed the issues of fault tolerance and adaptivity in multi-robot teams, and (2) existing robotics research is often geared at specific applications and is not easily generalized to different, but related, applications. This paper addresses these issues by first describing the design issues of key importance in these real-world cooperative robotics applications: fault tolerance, reliability, adaptivity, and coherence. We then present a general architecture addressing these design issues (called ALLIANCE) that facilitates multi-robot cooperation of small- to medium-sized teams in dynamic environments, performing missions composed of loosely coupled subtasks. We illustrate an implementation of ALLIANCE in a real-world application, called Bounding Overwatch, and then discuss how this architecture addresses our key design issues.

  16. Towards real-time remote processing of laparoscopic video

    Science.gov (United States)

    Ronaghi, Zahra; Duffy, Edward B.; Kwartowitz, David M.

    2015-03-01

    Laparoscopic surgery is a minimally invasive surgical technique where surgeons insert a small video camera into the patient's body to visualize internal organs and small tools to perform surgical procedures. However, the benefit of small incisions has a drawback of limited visualization of subsurface tissues, which can lead to navigational challenges in the delivery of therapy. Image-guided surgery (IGS) uses images to map subsurface structures and can reduce the limitations of laparoscopic surgery. One particular laparoscopic camera system of interest is the vision system of the daVinci-Si robotic surgical system (Intuitive Surgical, Sunnyvale, CA, USA). The video streams generate approximately 360 megabytes of data per second, demonstrating a trend towards increased data sizes in medicine, primarily due to higher-resolution video cameras and imaging equipment. Processing this data on a bedside PC has become challenging and a high-performance computing (HPC) environment may not always be available at the point of care. To process this data on remote HPC clusters at the typical 30 frames per second (fps) rate, each 11.9 MB video frame must be processed by a server and returned within 1/30th of a second. The ability to acquire, process and visualize data in real-time is essential for performance of complex tasks as well as minimizing risk to the patient. As a result, utilizing high-speed networks to access computing clusters will lead to real-time medical image processing and improve surgical experiences by providing real-time augmented laparoscopic data. We aim to develop a medical video processing system using an OpenFlow software-defined network that is capable of connecting to multiple remote medical facilities and HPC servers.
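    The bandwidth figure quoted above follows directly from the frame size and rate; a quick back-of-envelope check using the abstract's numbers:

```python
# Per-frame deadline and sustained throughput for remote processing of the
# daVinci-Si video stream, from the figures quoted above (11.9 MB/frame, 30 fps).
frame_mb = 11.9
fps = 30

per_frame_budget_ms = 1000.0 / fps              # round-trip budget per frame (~33.3 ms)
throughput_mb_s = frame_mb * fps                # sustained rate (~357 MB/s)
throughput_gbit_s = throughput_mb_s * 8 / 1000  # ~2.86 Gbit/s (decimal megabytes)
```

    The ~357 MB/s figure matches the "approximately 360 megabytes of data per second" in the abstract, and the multi-gigabit requirement is why a dedicated high-speed network rather than a commodity link is assumed.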

  17. Robotic real-time translational and rotational head motion correction during frameless stereotactic radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xinmin; Belcher, Andrew H.; Grelewicz, Zachary; Wiersma, Rodney D., E-mail: rwiersma@uchicago.edu [Department of Radiation and Cellular Oncology, The University of Chicago, Chicago, Illinois 60637 (United States)

    2015-06-15

    Purpose: To develop a control system to correct both translational and rotational head motion deviations in real-time during frameless stereotactic radiosurgery (SRS). Methods: A novel feedback control with a feed-forward algorithm was utilized to correct for the coupling of translation and rotation present in serial kinematic robotic systems. Input parameters for the algorithm include the real-time 6DOF target position, the frame pitch pivot point to target distance constant, and the translational and angular Linac beam off (gating) tolerance constants for patient safety. Testing of the algorithm was done using a 4D (XYZ + pitch) robotic stage, an infrared head position sensing unit and a control computer. The measured head position signal was processed and a resulting command was sent to the interface of a four-axis motor controller, through which four stepper motors were driven to perform motion compensation. Results: The control of the translation of a brain target was decoupled from the control of the rotation. For a phantom study, the corrected position was within a translational displacement of 0.35 mm and a pitch displacement of 0.15° 100% of the time. For a volunteer study, the corrected position was within displacements of 0.4 mm and 0.2° over 98.5% of the time, while it was 10.7% without correction. Conclusions: The authors report a control design approach for both translational and rotational head motion correction. The experiments demonstrated that control performance of the 4D robotic stage meets the submillimeter and subdegree accuracy required by SRS.
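    The gating tolerances enter such a control loop as a simple beam on/off predicate on the measured 6DOF deviation. A minimal sketch, with illustrative tolerance values rather than the authors' constants:

```python
import math

# Beam gating sketch: hold the beam off whenever the measured 6DOF head-pose
# deviation exceeds the translational or angular tolerance. Both tolerance
# values below are illustrative, not those of the cited system.
TRANS_TOL_MM = 0.5
ANGLE_TOL_DEG = 0.3

def beam_enabled(dx, dy, dz, pitch, yaw, roll):
    # Translational deviation is the Euclidean norm of the offsets;
    # the angular check gates on the largest single-axis rotation.
    trans = math.sqrt(dx * dx + dy * dy + dz * dz)
    angle = max(abs(pitch), abs(yaw), abs(roll))
    return trans <= TRANS_TOL_MM and angle <= ANGLE_TOL_DEG
```

    In a real system this predicate would run at the sensor update rate alongside the correction loop, so the Linac is gated off during the transient while the stage compensates.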

  18. Robotic real-time translational and rotational head motion correction during frameless stereotactic radiosurgery

    International Nuclear Information System (INIS)

    Liu, Xinmin; Belcher, Andrew H.; Grelewicz, Zachary; Wiersma, Rodney D.

    2015-01-01

    Purpose: To develop a control system to correct both translational and rotational head motion deviations in real-time during frameless stereotactic radiosurgery (SRS). Methods: A novel feedback control with a feed-forward algorithm was utilized to correct for the coupling of translation and rotation present in serial kinematic robotic systems. Input parameters for the algorithm include the real-time 6DOF target position, the frame pitch pivot point to target distance constant, and the translational and angular Linac beam off (gating) tolerance constants for patient safety. Testing of the algorithm was done using a 4D (XYZ + pitch) robotic stage, an infrared head position sensing unit and a control computer. The measured head position signal was processed and a resulting command was sent to the interface of a four-axis motor controller, through which four stepper motors were driven to perform motion compensation. Results: The control of the translation of a brain target was decoupled from the control of the rotation. For a phantom study, the corrected position was within a translational displacement of 0.35 mm and a pitch displacement of 0.15° 100% of the time. For a volunteer study, the corrected position was within displacements of 0.4 mm and 0.2° over 98.5% of the time, while it was 10.7% without correction. Conclusions: The authors report a control design approach for both translational and rotational head motion correction. The experiments demonstrated that control performance of the 4D robotic stage meets the submillimeter and subdegree accuracy required by SRS.

  19. Reliability of EUCLIDIAN: An autonomous robotic system for image-guided prostate brachytherapy

    International Nuclear Information System (INIS)

    Podder, Tarun K.; Buzurovic, Ivan; Huang Ke; Showalter, Timothy; Dicker, Adam P.; Yu, Yan

    2011-01-01

    Purpose: Recently, several robotic systems have been developed to perform accurate and consistent image-guided brachytherapy. Before introducing a new device into clinical operations, it is important to assess the reliability and mean time before failure (MTBF) of the system. In this article, the authors present the preclinical evaluation and analysis of the reliability and MTBF of an autonomous robotic system, which is developed for prostate seed implantation. Methods: The authors have considered three steps that are important in reliability growth analysis. These steps are: identification and isolation of failures, classification of failures, and trend analysis. For any one-of-a-kind product, reliability enhancement is accomplished through test-fix-test. The authors have used failure mode and effect analysis for collection and analysis of reliability data by identifying and categorizing the failure modes. Failures were classified according to severity. Failures that occurred during the operation of this robotic system were modeled as a nonhomogeneous Poisson process. The failure occurrence trend was analyzed using the Laplace test. For analyzing and predicting reliability growth, commonly used and widely accepted models, Duane's model and the Army Material Systems Analysis Activity model, i.e., Crow's model, were applied. The MTBF was used as an important measure for assessing the system's reliability. Results: During preclinical testing, 3196 seeds (in 53 test cases) were deposited autonomously by the robot and 14 critical failures were encountered. The majority of the failures occurred during the first few cases. The distribution of failures followed Duane's postulation as well as Crow's postulation of reliability growth. The Laplace test index was -3.82 (<0), indicating a significant trend in the failure data, and the failure intervals lengthened gradually. The continuous increase in the failure occurrence interval suggested a trend toward improved reliability. The MTBF …

  20. Reliability of EUCLIDIAN: An autonomous robotic system for image-guided prostate brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Podder, Tarun K.; Buzurovic, Ivan; Huang Ke; Showalter, Timothy; Dicker, Adam P.; Yu, Yan [Department of Radiation Oncology, Kimmel Cancer Center (NCI-designated), Thomas Jefferson University, Philadelphia, Pennsylvania 19107 (United States)

    2011-01-15

    Purpose: Recently, several robotic systems have been developed to perform accurate and consistent image-guided brachytherapy. Before introducing a new device into clinical operations, it is important to assess the reliability and mean time before failure (MTBF) of the system. In this article, the authors present the preclinical evaluation and analysis of the reliability and MTBF of an autonomous robotic system, which is developed for prostate seed implantation. Methods: The authors have considered three steps that are important in reliability growth analysis. These steps are: identification and isolation of failures, classification of failures, and trend analysis. For any one-of-a-kind product, reliability enhancement is accomplished through test-fix-test. The authors have used failure mode and effect analysis for collection and analysis of reliability data by identifying and categorizing the failure modes. Failures were classified according to severity. Failures that occurred during the operation of this robotic system were modeled as a nonhomogeneous Poisson process. The failure occurrence trend was analyzed using the Laplace test. For analyzing and predicting reliability growth, commonly used and widely accepted models, Duane's model and the Army Material Systems Analysis Activity model, i.e., Crow's model, were applied. The MTBF was used as an important measure for assessing the system's reliability. Results: During preclinical testing, 3196 seeds (in 53 test cases) were deposited autonomously by the robot and 14 critical failures were encountered. The majority of the failures occurred during the first few cases. The distribution of failures followed Duane's postulation as well as Crow's postulation of reliability growth. The Laplace test index was -3.82 (<0), indicating a significant trend in the failure data, and the failure intervals lengthened gradually. The continuous increase in the failure occurrence interval suggested a trend toward …
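    The Laplace trend test used in these records reduces to a one-line statistic: for n failures at times t_i within an observation window of length T, u = (mean(t_i) − T/2) · sqrt(12n) / T, where u < 0 indicates lengthening failure intervals, i.e. reliability growth. A sketch with made-up failure times (not the EUCLIDIAN data):

```python
import math

def laplace_index(failure_times, total_time):
    # Laplace trend statistic for a time-truncated observation window.
    # u < 0: failures cluster early (improving system); u > 0: deterioration.
    n = len(failure_times)
    mean_t = sum(failure_times) / n
    return (mean_t - total_time / 2) * math.sqrt(12 * n) / total_time

# Hypothetical failure times bunched early in a 100-unit window,
# as expected for a system improving under test-fix-test.
times = [2, 5, 9, 14, 20, 28, 40, 60]
u = laplace_index(times, total_time=100.0)
```

    A strongly negative index (such as the -3.82 reported in the abstract) is evidence of a significant reliability-growth trend rather than a homogeneous Poisson process.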

  1. Real-time transfer and display of radiography image

    International Nuclear Information System (INIS)

    Liu Ximing; Wu Zhifang; Miao Jicheng

    2000-01-01

    The information processing network of the cobalt-60 container inspection system is a PC-based local area network. The system requires reliable transfer of radiography images between the collection station and the processing station, and real-time display of radiography images on the processing station. Because of the very high data acquisition rate, 100 M Ethernet technology and network process communication technology are adopted in the system to realize real-time transfer and display of radiography images. Windows Sockets is the most common process communication technology to date. Several kinds of process communication methods under Windows Sockets are compared and tested. Finally, the authors realized error-free image transfer at 1 Mbyte/s and real-time display using blocked datagram transfer technology.

  2. Four-dimensional real-time sonographically guided cauterization of the umbilical cord in a case of twin-twin transfusion syndrome.

    Science.gov (United States)

    Timor-Tritsch, Ilan E; Rebarber, Andrei; MacKenzie, Andrew; Caglione, Christopher F; Young, Bruce K

    2003-07-01

    In the past decade, three-dimensional (3D) sonographic technology has matured from a static imaging modality to near-real-time imaging. One of the more notable improvements in this technology has been the speed with which the imaged volume is acquired and displayed. This has enabled the birth of the near-real-time or four-dimensional (4D) sonographic concept. Using the 4D feature of the current 3D sonography machines allows us to follow moving structures, such as fetal motion, in almost real time. Shortly after the emergence of 3D and 4D technology as a clinical imaging tool, its use in guiding needles into structures was explored by other investigators. We present a case in which we used the 4D feature of our sonographic equipment to follow the course and motion of an instrument inserted into the uterus to occlude the umbilical cord of a fetus in a case of twin-twin transfusion syndrome.

  3. An image guidance system for positioning robotic cochlear implant insertion tools

    Science.gov (United States)

    Bruns, Trevor L.; Webster, Robert J.

    2017-03-01

    Cochlear implants must be inserted carefully to avoid damaging the delicate anatomical structures of the inner ear. This has motivated several approaches to improve the safety and efficacy of electrode array insertion by automating the process with specialized robotic or manual insertion tools. When such tools are used, they must be positioned at the entry point to the cochlea and aligned with the desired entry vector. This paper presents an image guidance system capable of accurately positioning a cochlear implant insertion tool. An optical tracking system localizes the insertion tool in physical space while a graphical user interface incorporates this with patient-specific anatomical data to provide error information to the surgeon in real-time. Guided by this interface, novice users successfully aligned the tool with a mean accuracy of 0.31 mm.

  4. Real-Time Imaging System for the OpenPET

    Science.gov (United States)

    Tashima, Hideaki; Yoshida, Eiji; Kinouchi, Shoko; Nishikido, Fumihiko; Inadama, Naoko; Murayama, Hideo; Suga, Mikio; Haneishi, Hideaki; Yamaya, Taiga

    2012-02-01

    The OpenPET and its real-time imaging capability have great potential for real-time tumor tracking in medical procedures such as biopsy and radiation therapy. For the real-time imaging system, we intend to use the one-pass list-mode dynamic row-action maximum likelihood algorithm (DRAMA) and implement it using general-purpose computing on graphics processing units (GPGPU) techniques. However, it is difficult to make reconstructions consistently real-time because the amount of list-mode data acquired in PET scans may be large depending on the level of radioactivity, and the reconstruction speed depends on the amount of list-mode data. In this study, we developed a system to control the amount of data used in the reconstruction step while retaining quantitative performance. In the proposed system, the data transfer control system limits the event counts used in the reconstruction step according to the reconstruction speed, and the reconstructed images are properly intensified by using the ratio of the used counts to the total counts. We implemented the system on a small OpenPET prototype system and evaluated its performance in terms of real-time tracking ability by displaying reconstructed images in which the intensity was compensated. The intensity of the displayed images correlated properly with the original count rate, and a frame rate of 2 frames per second was achieved with an average delay time of 2.1 s.
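    The intensity compensation described above amounts to rescaling the reconstructed image by the ratio of total to used counts, so that a count-limited reconstruction still tracks the true count rate. A minimal sketch with hypothetical numbers:

```python
def compensate(image, used_counts, total_counts):
    # If only `used_counts` of `total_counts` list-mode events fit in the
    # reconstruction time budget, scale voxel values by total/used so the
    # displayed intensity reflects the full acquired activity.
    scale = total_counts / used_counts
    return [v * scale for v in image]

# Hypothetical 3-voxel image reconstructed from a quarter of the events.
frame = compensate([1.0, 2.0, 0.5], used_counts=5000, total_counts=20000)
```

    Because the scaling is a single multiplicative factor per frame, it preserves relative contrast within the image while restoring the quantitative link to the acquisition count rate.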

  5. Real-time beam profile imaging system for actinotherapy accelerator

    International Nuclear Information System (INIS)

    Lin Yong; Wang Jingjin; Song Zheng; Zheng Putang; Wang Jianguo

    2003-01-01

    This paper describes a real-time beam profile imaging system for an actinotherapy accelerator. With the flash X-ray imager and digital image processing techniques, a real-time 3-dimensional dose image is created from the intensity profile of the accelerator beam. This system helps to obtain all the physical characteristics of the beam in any section plane, such as FWHM, penumbra, peak value, symmetry and homogeneity. The system has been used to acquire the 3-dimensional dose distribution of a dynamic wedge modulator and the transient process of beam dosage. The system configuration and the tested beam profile images are also presented.
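    FWHM, one of the profile characteristics listed above, can be extracted from a sampled profile by interpolating the two half-maximum crossings. A generic sketch, not the cited system's algorithm:

```python
def fwhm(xs, ys):
    # Full width at half maximum of a single-peaked sampled profile,
    # using linear interpolation at the two half-maximum crossings.
    half = max(ys) / 2.0

    def crossing(i):
        # Linear interpolation between samples i and i+1.
        x0, x1, y0, y1 = xs[i], xs[i + 1], ys[i], ys[i + 1]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

    left = next(crossing(i) for i in range(len(ys) - 1)
                if ys[i] < half <= ys[i + 1])
    right = next(crossing(i) for i in range(len(ys) - 1)
                 if ys[i] >= half > ys[i + 1])
    return right - left

# Hypothetical symmetric beam profile sampled at unit spacing, peak 10.
xs = [0, 1, 2, 3, 4, 5, 6]
ys = [0, 2, 8, 10, 8, 2, 0]
width = fwhm(xs, ys)
```

    The same crossing logic applied at other fractions of the maximum (e.g. 80% and 20%) yields the penumbra width also mentioned in the abstract.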

  6. Scene data fusion: Real-time standoff volumetric gamma-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Barnowski, Ross [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720, United States of America (United States); Haefner, Andrew; Mihailescu, Lucian [Lawrence Berkeley National Lab - Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720, United States of America (United States); Vetter, Kai [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720, United States of America (United States); Lawrence Berkeley National Lab - Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720, United States of America (United States)

    2015-11-11

    An approach to gamma-ray imaging has been developed that enables near real-time volumetric (3D) imaging of unknown environments, thus improving the utility of gamma-ray imaging for source-search and radiation mapping applications. The approach, herein dubbed scene data fusion (SDF), is based on integrating mobile radiation imagers with real-time tracking and scene reconstruction algorithms to enable a mobile mode of operation and 3D localization of gamma-ray sources. A 3D model of the scene, provided in real-time by a simultaneous localization and mapping (SLAM) algorithm, is incorporated into the image reconstruction, reducing the reconstruction time and improving imaging performance. The SDF concept is demonstrated in this work with a Microsoft Kinect RGB-D sensor, a real-time SLAM solver, and a cart-based Compton imaging platform comprised of two 3D position-sensitive high purity germanium (HPGe) detectors. An iterative algorithm based on Compton kinematics is used to reconstruct the gamma-ray source distribution in all three spatial dimensions. SDF advances the real-world applicability of gamma-ray imaging for many search, mapping, and verification scenarios by improving the tractability of the gamma-ray image reconstruction and providing context for the 3D localization of gamma-ray sources within the environment in real-time.

  7. Real-time visualization and quantification of retrograde cardioplegia delivery using near infrared fluorescent imaging.

    Science.gov (United States)

    Rangaraj, Aravind T; Ghanta, Ravi K; Umakanthan, Ramanan; Soltesz, Edward G; Laurence, Rita G; Fox, John; Cohn, Lawrence H; Bolman, R M; Frangioni, John V; Chen, Frederick Y

    2008-01-01

Homogeneous delivery of cardioplegia is essential for myocardial protection during cardiac surgery. Presently, there exist no established methods to quantitatively assess cardioplegia distribution intraoperatively and determine when retrograde cardioplegia is required. In this study, we evaluate the feasibility of near infrared (NIR) imaging for real-time visualization of cardioplegia distribution in a porcine model. A portable, intraoperative, real-time NIR imaging system was utilized. An NIR fluorescent cardioplegia solution was developed by incorporating indocyanine green (ICG) into crystalloid cardioplegia solution. Real-time NIR imaging was performed while the fluorescent cardioplegia solution was infused via the retrograde route in five ex vivo normal porcine hearts and in five ex vivo porcine hearts status post left anterior descending (LAD) coronary artery ligation. Horizontal cross-sections of the hearts were obtained at proximal, middle, and distal LAD levels. Videodensitometry was performed to quantify the distribution of fluorophore content. The progressive distribution of cardioplegia was clearly visualized with NIR imaging. Complete visualization of retrograde distribution occurred within 4 minutes of infusion. Videodensitometry revealed that retrograde cardioplegia primarily distributed to the left ventricle (LV) and anterior septum. In hearts with LAD ligation, antegrade cardioplegia did not distribute to the anterior LV; this deficiency was compensated for with retrograde cardioplegia supplementation. Incorporation of ICG into cardioplegia allows real-time visualization of cardioplegia delivery via NIR imaging. This technology may prove useful in guiding intraoperative decisions pertaining to when retrograde cardioplegia is mandated.

  8. Real-time non-rigid target tracking for ultrasound-guided clinical interventions

    Science.gov (United States)

    Zachiu, C.; Ries, M.; Ramaekers, P.; Guey, J.-L.; Moonen, C. T. W.; de Senneville, B. Denis

    2017-10-01

~1.5 mm and submillimeter precision. This, together with a computational performance of 20 images per second, makes the proposed method an attractive solution for real-time target tracking during US-guided clinical interventions.

  9. Real-time progressive hyperspectral image processing endmember finding and anomaly detection

    CERN Document Server

    Chang, Chein-I

    2016-01-01

The book covers the most crucial aspects of real-time hyperspectral image processing: causality and real-time capability. Two new concepts of real-time hyperspectral image processing have recently been introduced: Progressive Hyperspectral Imaging (PHSI) and Recursive Hyperspectral Imaging (RHSI). Both can be used to design algorithms and form an integral part of real-time hyperspectral image processing. This book focuses on the progressive nature of algorithms and their real-time, causal processing implementation in two major applications, endmember finding and anomaly detection, both of which are fundamental tasks in hyperspectral imaging but generally not encountered in multispectral imaging. This book is written particularly to address PHSI in real-time processing, while the book Recursive Hyperspectral Sample and Band Processing: Algorithm Architecture and Implementation (Springer 2016) can be considered its companion. Includes preliminary background which is essential to those who work in hyperspectral ima...

  10. A cost-effective intelligent robotic system with dual-arm dexterous coordination and real-time vision

    Science.gov (United States)

    Marzwell, Neville I.; Chen, Alexander Y. K.

    1991-01-01

Dexterous coordination of manipulators based on the use of redundant degrees of freedom, multiple sensors, and built-in robot intelligence represents a critical breakthrough in the development of advanced manufacturing technology. A cost-effective approach for achieving this new generation of robotics has been made possible by the unprecedented growth of the latest microcomputer and network systems. The resulting flexible automation offers the opportunity to improve product quality, increase the reliability of the manufacturing process, and augment the production procedures for optimizing the utilization of the robotic system. Moreover, the Advanced Robotic System (ARS) is modular in design and can be upgraded by closely following technological advancements as they occur in various fields. This approach to manufacturing automation enhances the financial justification and ensures the long-term profitability and most efficient implementation of robotic technology. The new system also addresses a broad spectrum of manufacturing demand and has the potential to address both complex jobs and highly labor-intensive tasks. The ARS prototype employs the decomposed optimization technique in spatial planning. This technique is implemented within the framework of the sensor-actuator network to establish the general-purpose geometric reasoning system. The development computer system is a multiple-microcomputer network system, which provides the architecture for executing the modular network computing algorithms. The knowledge-based approach used in both the robot vision subsystem and the manipulation control subsystems results in a real-time, vision-based image processing capability. The vision-based task environment analysis capability and the responsive motion capability are under the command of the local intelligence centers. An array of ultrasonic, proximity, and optoelectronic sensors is used for path planning.
The ARS currently has 18 degrees of freedom made up by two

  11. Real-time imaging of quantum entanglement.

    Science.gov (United States)

    Fickler, Robert; Krenn, Mario; Lapkiewicz, Radek; Ramelow, Sven; Zeilinger, Anton

    2013-01-01

Quantum entanglement is widely regarded as one of the most prominent features of quantum mechanics and quantum information science. Although photonic entanglement is routinely studied in many experiments nowadays, its signature has been beyond the grasp of real-time imaging. Here we show that modern technology, namely triggered intensified charge-coupled device (ICCD) cameras, is fast and sensitive enough to image in real-time the effect of the measurement of one photon on its entangled partner. To quantitatively verify the non-classicality of the measurements, we determine the detected photon number and error margin from the registered intensity image within a certain region. Additionally, the use of the ICCD camera allows us to demonstrate the high flexibility of the setup in creating any desired spatial-mode entanglement, which suggests that visual imaging in quantum optics not only provides a better intuitive understanding of entanglement but will also improve applications of quantum science.
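Determining a detected photon number and its error margin from a registered intensity image can be sketched as follows, assuming Poisson counting statistics and a known camera gain; the function name and the gain parameter are illustrative assumptions, not details of the actual experiment.

```python
import numpy as np

def photon_count(image, roi, gain=1.0):
    """Sum intensity in a rectangular ROI and convert to a photon number.

    roi: (row_min, row_max, col_min, col_max), half-open index bounds.
    Returns (n_photons, error), with error = sqrt(N) for Poisson counting.
    """
    r0, r1, c0, c1 = roi
    n = image[r0:r1, c0:c1].sum() / gain  # counts -> photons via assumed gain
    return n, np.sqrt(n)
```

In practice the gain would come from a camera calibration, and background subtraction would precede the sum; both are omitted here for brevity.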

  12. Issues in image-guided therapy.

    OpenAIRE

Haigron, Pascal; Luo, Limin; Coatrieux, Jean-Louis

    2009-01-01

Medical robotics, computer-assisted surgery (CAS), image-guided therapy (IGT), and the like emerged more than 20 years ago, and many advances have been made since. Conferences and workshops have been organized; scientific contributions, position papers, and patents have been published; new academic societies have been launched; and companies were created all over the world to propose methods, devices, and systems in the area. Researchers in robotics, computer vision a...

  13. Toward a real-time system for temporal enhanced ultrasound-guided prostate biopsy.

    Science.gov (United States)

    Azizi, Shekoofeh; Van Woudenberg, Nathan; Sojoudi, Samira; Li, Ming; Xu, Sheng; Abu Anas, Emran M; Yan, Pingkun; Tahmasebi, Amir; Kwak, Jin Tae; Turkbey, Baris; Choyke, Peter; Pinto, Peter; Wood, Bradford; Mousavi, Parvin; Abolmaesumi, Purang

    2018-03-27

We have previously proposed temporal enhanced ultrasound (TeUS) as a new paradigm for tissue characterization. TeUS is based on analyzing a sequence of ultrasound data with deep learning and has been demonstrated to be successful for detection of cancer in ultrasound-guided prostate biopsy. Our aim is to enable the dissemination of this technology to the community for large-scale clinical validation. In this paper, we present a unified software framework demonstrating near-real-time analysis of an ultrasound data stream using a deep learning solution. The system integrates ultrasound imaging hardware, visualization and a deep learning back-end to build an accessible, flexible and robust platform. A client-server approach is used in order to run computationally expensive algorithms in parallel. We demonstrate the efficacy of the framework using two applications as case studies. First, we show that prostate cancer detection using near-real-time analysis of RF and B-mode TeUS data and deep learning is feasible. Second, we present real-time segmentation of ultrasound prostate data using an integrated deep learning solution. The system is evaluated for cancer detection accuracy on ultrasound data obtained from a large clinical study with 255 biopsy cores from 157 subjects. It is further assessed with an independent dataset with 21 biopsy targets from six subjects. In the first study, we achieve an area under the curve (AUC), sensitivity, specificity and accuracy of 0.94, 0.77, 0.94 and 0.92, respectively, for the detection of prostate cancer. In the second study, we achieve an AUC of 0.85. Our results suggest that TeUS-guided biopsy can be potentially effective for the detection of prostate cancer.

  14. Real-time solar magnetograph operation system software design and user's guide

    Science.gov (United States)

    Wang, C.

    1984-01-01

The Real-Time Solar Magnetograph (RTSM) operation system software design on the PDP11/23+ is presented, along with the user's guide. The RTSM operation software handles real-time instrumentation control, data collection, and data management. The data are used for vector analysis, plotting, or graphics display. The processed data are then easily compared with solar data from other sources, such as the Solar Maximum Mission (SMM).

  15. Classification and overview of research in real-time imaging

    Science.gov (United States)

    Sinha, Purnendu; Gorinsky, Sergey V.; Laplante, Phillip A.; Stoyenko, Alexander D.; Marlowe, Thomas J.

    1996-10-01

    Real-time imaging has application in areas such as multimedia, virtual reality, medical imaging, and remote sensing and control. Recently, the imaging community has witnessed a tremendous growth in research and new ideas in these areas. To lend structure to this growth, we outline a classification scheme and provide an overview of current research in real-time imaging. For convenience, we have categorized references by research area and application.

  16. Real-time SPARSE-SENSE cardiac cine MR imaging: optimization of image reconstruction and sequence validation.

    Science.gov (United States)

    Goebel, Juliane; Nensa, Felix; Bomas, Bettina; Schemuth, Haemi P; Maderwald, Stefan; Gratz, Marcel; Quick, Harald H; Schlosser, Thomas; Nassenstein, Kai

    2016-12-01

Improved real-time cardiac magnetic resonance (CMR) sequences have recently been introduced, but so far only limited practical experience exists. This study aimed at image reconstruction optimization and clinical validation of a new highly accelerated real-time cine SPARSE-SENSE sequence. Left ventricular (LV) short-axis stacks of a real-time free-breathing SPARSE-SENSE sequence with high spatiotemporal resolution and of a standard segmented cine SSFP sequence were acquired at 1.5 T in 11 volunteers and 15 patients. To determine the optimal number of iterations, all volunteers' SPARSE-SENSE images were reconstructed using 10-200 iterations, and contrast ratios, image entropies, and reconstruction times were assessed. Subsequently, the patients' SPARSE-SENSE images were reconstructed with the clinically optimal number of iterations. LV volumetric values were evaluated and compared between both sequences. Sufficient image quality and acceptable reconstruction times were achieved when using 80 iterations. Bland-Altman plots and Passing-Bablok regression showed good agreement for all volumetric parameters. 80 iterations are recommended for iterative SPARSE-SENSE image reconstruction in clinical routine. Real-time cine SPARSE-SENSE yielded comparable volumetric results as the current standard SSFP sequence. Due to its intrinsically low image acquisition times, real-time cine SPARSE-SENSE imaging with iterative image reconstruction seems to be an attractive alternative for LV function analysis. • A highly accelerated real-time CMR sequence using SPARSE-SENSE was evaluated. • SPARSE-SENSE allows free breathing in real-time cardiac cine imaging. • For clinically optimal SPARSE-SENSE image reconstruction, 80 iterations are recommended. • Real-time SPARSE-SENSE imaging yielded comparable volumetric results as the reference SSFP sequence. • The fast SPARSE-SENSE sequence is an attractive alternative to standard SSFP sequences.
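One simple way to quantify image quality across iteration counts, as the image-entropy assessment above does, is the Shannon entropy of the grey-level histogram. The sketch below is a generic formulation under that assumption and not necessarily the exact metric used in the study.

```python
import numpy as np

def image_entropy(image, bins=256):
    """Shannon entropy (bits) of the grey-level histogram of an image.

    Lower entropy generally indicates a flatter, less detailed image;
    over-iterated reconstructions can show rising noise-driven entropy.
    """
    hist, _ = np.histogram(image, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins (0 * log 0 is taken as 0)
    return float(-(p * np.log2(p)).sum())
```

A reconstruction sweep would evaluate this (together with a contrast ratio) at each iteration count and pick the knee of the curve.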

  17. Real-time movie image enhancement in NMR

    International Nuclear Information System (INIS)

    Doyle, M.; Mansfield, P.

    1986-01-01

Clinical NMR motion picture (movie) images can now be produced routinely in real-time by ultra-high-speed echo-planar imaging (EPI). The single-shot image quality depends on both pixel resolution and signal-to-noise ratio (S/N), and the two factors can be traded off against each other. If image S/N is sacrificed rather than resolution, it is shown that the S/N may be greatly enhanced subsequently, without vitiating spatial resolution or forgoing real motional effects, when the object motion is periodic. This is achieved by a Fourier filtering process. Experimental results are presented which demonstrate the technique for a normal functioning heart. (author)
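For periodic motion, one common form of such Fourier filtering is to keep only the DC term and the harmonics of the known motion frequency in the temporal spectrum of each pixel, discarding the noise spread across all other temporal frequencies. The sketch below illustrates the idea under simplifying assumptions (the motion period divides the series length exactly); it is not necessarily the authors' exact procedure.

```python
import numpy as np

def fourier_filter(frames, period):
    """Temporal Fourier filter for a periodic image series.

    frames: (T, H, W) image series; period: motion period in frames,
            assumed to divide T exactly.
    Keeps only DC and harmonics of the fundamental, rejecting noise in
    all other temporal frequency bins.
    """
    T = frames.shape[0]
    spec = np.fft.fft(frames, axis=0)
    keep = np.zeros(T, dtype=bool)
    keep[::T // period] = True  # DC + harmonics of the fundamental
    spec[~keep] = 0.0
    return np.fft.ifft(spec, axis=0).real
```

Because the signal energy of a strictly periodic motion lives entirely in the kept bins, the filtered series retains the motion while most of the broadband noise power is removed.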

  18. Real-time virtual sonography (RVS)-guided vacuum-assisted breast biopsy for lesions initially detected with breast MRI.

    Science.gov (United States)

    Uematsu, Takayoshi

    2013-12-01

To report on our initial experiences with a new method of real-time virtual sonography (RVS)-guided 11-gauge vacuum-assisted breast biopsy for lesions that were initially detected with breast MRI. RVS-guided 11-gauge vacuum-assisted biopsy is performed when a lesion with suspicious characteristics is initially detected with breast MRI and is occult on mammography, sonography, and physical examination. Live sonographic images were co-registered to the previously loaded second-look supine contrast-enhanced breast MRI volume data to correlate the sonography and MR images. Six lesions were examined in six consecutive patients scheduled to undergo RVS-guided 11-gauge vacuum-assisted biopsy. One patient was removed from the study because of non-visualization of the lesion on the second-look supine contrast-enhanced breast MRI. Five patients with non-mass enhancement lesions were biopsied. The lesions ranged in size from 9 to 13 mm (mean 11 mm). The average procedural time, including the sonography and MR image co-registration time, was 25 min. All biopsies resulted in tissue retrieval. Histology revealed fibroadenomatous nodules in one case and fibrocystic changes in four. There were no complications during or after the procedures. RVS-guided 11-gauge vacuum-assisted breast biopsies provide a safe and effective method for the examination of suspicious lesions initially detected with MRI.

  19. Matching-range-constrained real-time loop closure detection with CNNs features.

    Science.gov (United States)

    Bai, Dongdong; Wang, Chaoqun; Zhang, Bo; Yi, Xiaodong; Tang, Yuhua

    2016-01-01

Loop closure detection (LCD) is an essential part of visual simultaneous localization and mapping (SLAM) systems. LCD is capable of identifying and compensating for the accumulated drift of localization algorithms to produce a consistent map, provided the loops are detected correctly. Deep convolutional neural networks (CNNs) have outperformed state-of-the-art solutions that use traditional hand-crafted features in many computer vision and pattern recognition applications. After the great success of CNNs, there has been much interest in applying CNN features to robotic fields such as visual LCD. Some researchers focus on using a pre-trained CNN model as a method of generating an image representation appropriate for visual loop closure detection in SLAM. However, there are many fundamental differences in character, and additional challenges, between simple computer vision applications and robotic applications. Firstly, the adjacent images in a loop closure detection dataset may resemble each other more than the images that actually form the loop closure. Secondly, real-time performance is one of the most critical demands for robots. In this paper, we focus on making use of the features generated by CNN layers to implement LCD in real environments. To address the above challenges, we explicitly provide a value to limit the matching range of images to solve the first problem; meanwhile, we obtain better results than state-of-the-art methods and improve real-time performance using an efficient feature compression method.
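The matching-range constraint can be sketched as follows: a query frame is compared, via cosine similarity of CNN feature vectors, only against frames at least a fixed number of frames older, so that temporally adjacent, near-identical images cannot be reported as loop closures. The function name, gap, and threshold below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def detect_loop(features, query_idx, min_gap=50, threshold=0.9):
    """Return the index of the best in-range match, or None.

    features: (N, D) array of per-frame CNN feature vectors.
    Only frames i with query_idx - i >= min_gap are candidates, which
    excludes recent, trivially similar frames from matching.
    """
    q = features[query_idx]
    q = q / np.linalg.norm(q)
    best_idx, best_sim = None, threshold
    for i in range(query_idx - min_gap + 1):  # in-range candidates only
        f = features[i]
        sim = float(q @ (f / np.linalg.norm(f)))
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx
```

A real system would replace the linear scan with an approximate nearest-neighbour index over compressed features to meet the real-time requirement discussed above.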

  20. MRI-guided robotic system for transperineal prostate interventions: proof of principle

    International Nuclear Information System (INIS)

    Van den Bosch, Michiel R; Moman, Maaike R; Van Vulpen, Marco; Battermann, Jan J; Lagendijk, Jan J W; Moerland, Marinus A; Duiveman, Ed; Van Schelven, Leonard J; De Leeuw, Hendrik

    2010-01-01

    In this study, we demonstrate the proof of principle of the University Medical Center Utrecht (UMCU) robot dedicated to magnetic resonance imaging (MRI)-guided interventions in patients. The UMCU robot consists of polymers and non-ferromagnetic materials. For transperineal prostate interventions, it can be placed between the patient's legs inside a closed bore 1.5T MR scanner. The robot can manually be translated and rotated resulting in five degrees of freedom. It contains a pneumatically driven tapping device to automatically insert a needle stepwise into the prostate using a controller unit outside the scanning room. To define the target positions and to verify the needle insertion point and the needle trajectory, a high-resolution 3D balanced steady state free precession (bSSFP) scan that provides a T2/T1-weighted contrast is acquired. During the needle insertion fast 2D bSSFP images are generated to track the needle on-line. When the target position is reached, the radiation oncologist manually places a fiducial gold marker (small seed) at this location. In total two needle trajectories are used to place all markers. Afterwards, a high-resolution 3D bSSFP scan is acquired to visualize the fiducial gold markers. Four fiducial gold markers were placed transperineally into the prostate of a patient with a clinical stage T3 prostate cancer. In the generated scans, it was possible to discriminate the patient's anatomy, the needle and the markers. All markers were delivered inside the prostate. The procedure time was 1.5 h. This study proves that MRI-guided needle placement and seed delivery in the prostate with the UMCU robot are feasible. (note)

  1. MRI-guided robotic system for transperineal prostate interventions: proof of principle

    Energy Technology Data Exchange (ETDEWEB)

    Van den Bosch, Michiel R; Moman, Maaike R; Van Vulpen, Marco; Battermann, Jan J; Lagendijk, Jan J W; Moerland, Marinus A [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands); Duiveman, Ed; Van Schelven, Leonard J [Medical Technology and Clinical Physics, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands); De Leeuw, Hendrik [Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands)], E-mail: M.R.vandenBosch@umcutrecht.nl

    2010-03-07

    In this study, we demonstrate the proof of principle of the University Medical Center Utrecht (UMCU) robot dedicated to magnetic resonance imaging (MRI)-guided interventions in patients. The UMCU robot consists of polymers and non-ferromagnetic materials. For transperineal prostate interventions, it can be placed between the patient's legs inside a closed bore 1.5T MR scanner. The robot can manually be translated and rotated resulting in five degrees of freedom. It contains a pneumatically driven tapping device to automatically insert a needle stepwise into the prostate using a controller unit outside the scanning room. To define the target positions and to verify the needle insertion point and the needle trajectory, a high-resolution 3D balanced steady state free precession (bSSFP) scan that provides a T2/T1-weighted contrast is acquired. During the needle insertion fast 2D bSSFP images are generated to track the needle on-line. When the target position is reached, the radiation oncologist manually places a fiducial gold marker (small seed) at this location. In total two needle trajectories are used to place all markers. Afterwards, a high-resolution 3D bSSFP scan is acquired to visualize the fiducial gold markers. Four fiducial gold markers were placed transperineally into the prostate of a patient with a clinical stage T3 prostate cancer. In the generated scans, it was possible to discriminate the patient's anatomy, the needle and the markers. All markers were delivered inside the prostate. The procedure time was 1.5 h. This study proves that MRI-guided needle placement and seed delivery in the prostate with the UMCU robot are feasible. (note)

  2. Design and validation of a CT-guided robotic system for lung cancer brachytherapy.

    Science.gov (United States)

    Dou, Huaisu; Jiang, Shan; Yang, Zhiyong; Sun, Luqing; Ma, Xiaodong; Huo, Bin

    2017-09-01

Currently, lung brachytherapy in the clinical setting is a complex procedure. Operation accuracy depends on accurate positioning of the template; however, it is difficult to guarantee positioning accuracy manually. Application of robotic-assisted systems can simplify the procedure and improve on manual positioning accuracy. Therefore, a novel CT-guided robotic system was developed to assist lung cancer brachytherapy. A four degree-of-freedom (DOF) robot, controlled by lung brachytherapy treatment planning system (TPS) software, was designed and manufactured to assist the template positioning. The target position of the template is obtained from the treatment plan, so the robot is driven to the target position automatically. The robotic system was validated in both the laboratory and the CT environment. In the laboratory environment, a 3D laser tracker and an inertial measurement unit (IMU) were used to measure the mechanical accuracy in air, which includes positioning accuracy and position repeatability. Working reliability was also validated in this procedure by observing the response reliability and calculating the position repeatability. Imaging artifacts and the accuracy of robot registration were validated in the CT environment by using an artificial phantom with fiducial markers. CT images were obtained and used to test the image artifacts and calculate the registration accuracy. Phantom experiments were conducted to test the accuracy of needle insertion by using a transparent hydrogel phantom with a high-imitation artificial phantom. The efficiency was also validated in this procedure by comparing the time cost of manual positioning with that of robotic positioning under the same experimental conditions. The robotic system achieved a positioning accuracy of 0.28 ± 0.25 mm and a position repeatability of 0.09 ± 0.11 mm. Experimental results showed that the robot was CT-compatible and responded reliably to the control commands. The mean registration accuracy
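Registration accuracy from fiducial markers is commonly computed by a rigid least-squares fit between the marker positions in the two coordinate frames, for example with the SVD-based Kabsch/Arun method sketched below. This is a generic formulation assumed for illustration, not necessarily the method used in the paper.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t with dst ≈ src @ R.T + t."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])  # guard against reflection solutions
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - src.mean(axis=0) @ R.T
    return R, t

def fiducial_registration_error(src, dst, R, t):
    """RMS distance between transformed source markers and destination markers."""
    residual = dst - (src @ R.T + t)
    return float(np.sqrt((residual ** 2).sum(axis=1).mean()))
```

With noiseless, non-degenerate marker sets the fit recovers the transform exactly; with real CT-detected fiducials the residual RMS is the reported registration accuracy.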

  3. Image-guided neurosurgery. Global concept of a surgical tele-assistance using obstacle detection robotics

    International Nuclear Information System (INIS)

    Desgeorges, M.; Bellegou, N.; Faillot, Th.; Cordoliani, Y.S.; Dutertre, G.; Blondet, E.; Soultrait, F. de; Boissy, J.M.

    2000-01-01

Surgical tele-assistance significantly increases the accuracy of surgical gestures, especially in brain tumor neurosurgery. The robotic device is tele-operated through a microscope, and the surgeon's gestures are guided by real-time overlaying of the X-ray imagery in the microscope. During the device's progression inside the brain, the focus is ensured by the microscope's auto-focus feature. The surgeon can thus constantly check his position on the field workstation. Obstacles to avoid or dangerous areas can be previewed in the operating field. This system has been used routinely for 5 years in the neurosurgery division of the Val de Grace hospital, where more than 400 brain surgery operations have been performed with it. An adaptation is used for rachis (spine) surgery. Other military hospitals are beginning to be equipped with similar systems, and it will be possible to link them for data transfer. Once operational, such a network will show what a future medical/surgical remote-assistance system could be: one designed to care for wounded or critically ill people, including assistance with surgical gestures. (authors)

  4. Non-real-time computed tomography-guided percutaneous ethanol injection therapy for hepatocellular carcinoma undetectable by ultrasonography

    International Nuclear Information System (INIS)

    Ueda, Kazushige; Ohkawara, Tohru; Minami, Masahito; Sawa, Yoshihiko; Morinaga, Osamu; Kohli, Yoshihiro; Ohkawara, Yasuo

    1998-01-01

The purpose of this study was to evaluate the feasibility of non-real-time CT-guided percutaneous ethanol injection therapy (PEIT) for hepatocellular carcinoma (HCC, 37 lesions) untreatable by ultrasonography-guided PEIT (US-PEIT). The HCC lesion was localized on the lipiodol CT image with a graduated grid system. We advanced a 21 G or 22 G needle in a stepwise fashion with intermittent localization scans using a tandem method to position the tip of the needle in the lesion. Ethanol containing contrast medium was injected, with monitoring scans obtained after incremental volumes of injection, until perfusion of the lesion was judged to be complete. A total of 44 CT-PEIT procedures were performed. The average number of needle passes from the skin to the liver in each CT-PEIT procedure was 2.3, the average amount of ethanol injected was 14.4 ml, and the average time required was 49.3 minutes. Complete perfusion of the lesion by ethanol on monitoring CT images was achieved in all lesions with only a single or double CT-PEIT procedure, without severe complication. Local recurrence was detected in only 5 lesions. At present, it is more time-consuming to perform CT-PEIT than US-PEIT because conventional CT guidance is not real-time imaging. However, it is expected that this limitation of CT-PEIT will be overcome in the near future with the introduction of CT fluoroscopy. In conclusion, CT-PEIT should prove to be a feasible, acceptable treatment for challenging cases of HCC undetectable by US. (author)

  5. Design and Performance Evaluation of Real-time Endovascular Interventional Surgical Robotic System with High Accuracy.

    Science.gov (United States)

    Wang, Kundong; Chen, Bing; Lu, Qingsheng; Li, Hongbing; Liu, Manhua; Shen, Yu; Xu, Zhuoyan

    2018-05-15

Endovascular interventional surgery (EIS) is performed in a high-radiation environment at the sacrifice of surgeons' health. This paper introduces a novel endovascular interventional surgical robot that aims to reduce the radiation exposure of surgeons and the physical stress imposed by lead aprons during fluoroscopic X-ray guided catheter intervention. The unique mechanical structure allows the surgeon to manipulate the axial and radial motion of the catheter and guide wire. Four catheter manipulators (to manipulate the catheter and guide wire) and a control console consisting of four joysticks, several buttons, and two twist switches (to control the catheter manipulators) are presented. The entire robotic system is built on a master-slave control structure with CAN (Controller Area Network) bus communication; the slave side of the robotic system showed highly accurate control over velocity and displacement with a PID control method. The robotic system was tested and passed both in vitro and animal experiments. Through functionality evaluation, the manipulators were able to complete interventional surgical motion both independently and cooperatively. The robotic surgery was performed successfully in an adult female pig and demonstrated the feasibility of superior mesenteric and common iliac artery stent implantation. The entire robotic system met the clinical requirements of EIS. The results show that the system has the ability to imitate the movements of surgeons and to accomplish the axial and radial motions with consistency and high accuracy. Copyright © 2018 John Wiley & Sons, Ltd.
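A discrete PID loop of the kind used for slave-side velocity and displacement control can be sketched as follows; the gains and the simple integrator plant in the test are illustrative assumptions, not the paper's tuning or dynamics.

```python
class PID:
    """Minimal discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        """Compute the control output for one sample period."""
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative)
```

A production controller for a catheter manipulator would add output saturation and integrator anti-windup; both are omitted here for brevity.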

  6. Cf-252 based neutron radiography using real-time image processing system

    International Nuclear Information System (INIS)

    Mochiki, Koh-ichi; Koiso, Manabu; Yamaji, Akihiro; Iwata, Hideki; Kihara, Yoshitaka; Sano, Shigeru; Murata, Yutaka

    2001-01-01

For compact Cf-252 based neutron radiography, a real-time image processing system based on a particle counting technique has been developed. The electronic imaging system consists of a supersensitive imaging camera, a real-time corrector, a real-time binary converter, a real-time centroid calculator, a display monitor, and a computer. Three types of accumulated NR image can be observed during a measurement: ordinary, binary, and centroid images. Accumulated NR images were taken in the centroid, binary, and ordinary modes using a Cf-252 neutron source, and the images were compared. The centroid mode presented the sharpest image, and its statistical characteristics followed the Poisson distribution, while the ordinary mode showed the smoothest image, as the averaging effect of particle bright spots with distributed brightness was most dominant. (author)
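The centroid mode reduces each particle bright spot to its intensity-weighted centre of gravity before accumulation, which is what sharpens the image relative to accumulating the full spots. A minimal sketch for a single spot above a threshold (the function name and the single-spot assumption are illustrative):

```python
import numpy as np

def spot_centroid(frame, threshold):
    """Intensity-weighted centroid (row, col) of pixels above threshold.

    Returns None when no pixel exceeds the threshold (no particle event).
    """
    mask = frame > threshold
    if not mask.any():
        return None
    w = np.where(mask, frame, 0.0).astype(float)
    rows, cols = np.indices(frame.shape)
    total = w.sum()
    return (float((rows * w).sum() / total),
            float((cols * w).sum() / total))
```

A multi-spot frame would first be segmented into connected components, with one centroid accumulated per component.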

  7. Real Time Localization for Mobile Robot

    Czech Academy of Sciences Publication Activity Database

    Věchet, S.; Krejsa, Jiří

    2005-01-01

    Roč. 12, A 1 (2005), s. 3-10 ISSN 1210-2717. [Mechatronics, Robotics and Biomechanics 2005. Třešť, 26.09.2005-29.09.2005] Institutional research plan: CEZ:AV0Z20760514 Keywords : localization * mobile robot Subject RIV: JD - Computer Applications, Robotics

  8. A cadaver study of mastoidectomy using an image-guided human-robot collaborative control system.

    Science.gov (United States)

    Yoo, Myung Hoon; Lee, Hwan Seo; Yang, Chan Joo; Lee, Seung Hwan; Lim, Hoon; Lee, Seongpung; Yi, Byung-Ju; Chung, Jong Woo

    2017-10-01

Surgical precision would be better achieved with the development of an anatomical monitoring and controlling robot system than by traditional surgery techniques alone. We evaluated the feasibility of robot-assisted mastoidectomy in terms of duration, precision, and safety. Human cadaveric study. We developed a multi-degree-of-freedom robot system for a surgical drill with a balancing arm. The drill system is manipulated by the surgeon, the motion of the drill burr is monitored by the image-guided system, and the brake is controlled by the robotic system. The system also includes an alarm as well as the brake to help avoid unexpected damage to vital structures. Experimental mastoidectomy was performed in 11 temporal bones of six cadavers. Parameters including duration and safety were assessed, as well as intraoperative damage, which was judged via pre- and post-operative computed tomography. The duration of mastoidectomy in our study was comparable with that required for chronic otitis media patients. Although minor damage, such as dura exposure without tearing, was noted, no critical damage to the facial nerve or other important structures was observed. When the brake system was set to 1 mm from the facial nerve, the average postoperative bone thicknesses over the facial nerve were 1.39, 1.41, 1.22, 1.41, and 1.55 mm in the lateral, posterior pyramidal and anterior, lateral, and posterior mastoid portions, respectively. Mastoidectomy can be successfully performed using our robot-assisted system while maintaining a pre-set limit of 1 mm in most cases. This system may thus be useful for more inexperienced surgeons. NA.

  9. Evolution of Collective Behaviors for a Real Swarm of Aquatic Surface Robots.

    Science.gov (United States)

    Duarte, Miguel; Costa, Vasco; Gomes, Jorge; Rodrigues, Tiago; Silva, Fernando; Oliveira, Sancho Moura; Christensen, Anders Lyhne

    2016-01-01

    Swarm robotics is a promising approach for the coordination of large numbers of robots. While previous studies have shown that evolutionary robotics techniques can be applied to obtain robust and efficient self-organized behaviors for robot swarms, most studies have been conducted in simulation, and the few that have been conducted on real robots have been confined to laboratory environments. In this paper, we demonstrate for the first time a swarm robotics system with evolved control successfully operating in a real and uncontrolled environment. We evolve neural network-based controllers in simulation for canonical swarm robotics tasks, namely homing, dispersion, clustering, and monitoring. We then assess the performance of the controllers on a real swarm of up to ten aquatic surface robots. Our results show that the evolved controllers transfer successfully to real robots and achieve a performance similar to the performance obtained in simulation. We validate that the evolved controllers display key properties of swarm intelligence-based control, namely scalability, flexibility, and robustness on the real swarm. We conclude with a proof-of-concept experiment in which the swarm performs a complete environmental monitoring task by combining multiple evolved controllers.

  10. Design and Implementation of a Real-Time Video Transfer System Using a Mobile Ad Hoc Network on Robot ITS-01

    Directory of Open Access Journals (Sweden)

    Yorisan Permana Baginda

    2013-03-01

    Full Text Available Monitoring and surveillance robots are widely developed to meet security and defense needs. The robots are connected using the wireless-LAN 802.11g standard as the data communication medium. An ad hoc network is implemented so that each robot can act as a router, enabling multihop communication from the robot nearest the control station to robots that have moved out of the station's range. The operating coverage of the monitoring robots is thereby extended. The real-time video transfer system in this research was built using a modified KinectTCP to receive RGB, depth, and skeleton data from the robot. Performance was evaluated on single-hop and multihop topologies by measuring throughput, FPS, packet loss, packet delay, and jitter. The results show that the multihop topology can support robot operation inside a building, despite an average decrease of 82.86% in throughput and FPS and an average delay increase of 149.73%.

  11. TH-AB-202-05: BEST IN PHYSICS (JOINT IMAGING-THERAPY): First Online Ultrasound-Guided MLC Tracking for Real-Time Motion Compensation in Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Ipsen, S; Bruder, R; Schweikard, A [University of Luebeck, Luebeck (Germany)]; O’Brien, R; Keall, P [University of Sydney, Sydney (Australia)]; Poulsen, P [Aarhus University Hospital, Aarhus (Denmark)

    2016-06-15

    Purpose: While MLC tracking has been successfully used for motion compensation of moving targets, current real-time target localization methods rely on correlation models with x-ray imaging or implanted electromagnetic transponders rather than direct target visualization. In contrast, ultrasound imaging yields volumetric data in real-time (4D) without ionizing radiation. We report the first results of online 4D ultrasound-guided MLC tracking in a phantom. Methods: A real-time tracking framework was installed on a 4D ultrasound station (Vivid7 dimension, GE) and used to detect a 2mm spherical lead marker inside a water tank. The volumetric frame rate was 21.3Hz (47ms). The marker was rigidly attached to a motion stage programmed to reproduce nine tumor trajectories (five prostate, four lung). The 3D marker position from ultrasound was used for real-time MLC aperture adaption. The tracking system latency was measured and compensated by prediction for lung trajectories. To measure geometric accuracy, anterior and lateral conformal fields with 10cm circular aperture were delivered for each trajectory. The tracking error was measured as the difference between marker position and MLC aperture in continuous portal imaging. For dosimetric evaluation, 358° VMAT fields were delivered to a biplanar diode array dosimeter using the same trajectories. Dose measurements with and without MLC tracking were compared to a static reference dose using a 3%/3 mm γ-test. Results: The tracking system latency was 170ms. The mean root-mean-square tracking error was 1.01mm (0.75mm prostate, 1.33mm lung). Tracking reduced the mean γ-failure rate from 13.9% to 4.6% for prostate and from 21.8% to 0.6% for lung with high-modulation VMAT plans and from 5% (prostate) and 18% (lung) to 0% with low modulation. Conclusion: Real-time ultrasound tracking was successfully integrated with MLC tracking for the first time and showed accuracy and latency similar to other methods.
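The dosimetric comparison above uses a 3%/3 mm γ-test. Its logic can be sketched with a minimal 1-D global gamma computation; this is an illustration of the criterion, not the clinical 2-D/3-D implementation, and the exhaustive search over reference points is for clarity rather than speed:

```python
import math

def gamma_pass_rate(reference, measured, positions, dose_tol=0.03, dist_tol_mm=3.0):
    """Global 1-D gamma analysis: for each measured point, find the minimum
    combined dose/distance discrepancy against all reference points, and
    report the fraction of points with gamma <= 1."""
    d_max = max(reference)  # global normalization dose
    passed = 0
    for xm, dm in zip(positions, measured):
        g = min(
            math.sqrt(((xr - xm) / dist_tol_mm) ** 2
                      + ((dr - dm) / (dose_tol * d_max)) ** 2)
            for xr, dr in zip(positions, reference)
        )
        passed += g <= 1.0
    return passed / len(measured)
```

Identical profiles pass everywhere; a point with a large local dose error fails unless a nearby reference point within 3 mm agrees in dose to within 3% of the maximum.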

  12. Finding NEMO (novel electromaterial muscle oscillator): a polypyrrole powered robotic fish with real-time wireless speed and directional control

    International Nuclear Information System (INIS)

    McGovern, Scott; Alici, Gursel; Spinks, Geoffrey; Truong, Van-Tan

    2009-01-01

    This paper presents the development of an autonomously powered and controlled robotic fish that incorporates an active flexural joint tail fin, activated through conducting polymer actuators based on polypyrrole (PPy). The novel electromaterial muscle oscillator (NEMO) tail fin assembly on the fish could be controlled wirelessly in real time by varying the frequency and duty cycle of the voltage signal supplied to the PPy bending-type actuators. Directional control was achieved by altering the duty cycle of the voltage input to the NEMO tail fin, which shifted the axis of oscillation and enabled turning of the robotic fish. At low speeds, the robotic fish had a turning circle as small as 15 cm (or 1.1 body lengths) in radius. The highest speed of the fish robot was estimated to be approximately 33 mm s-1 (or 0.25 body lengths s-1) and was achieved with a flapping frequency of 0.6-0.8 Hz which also corresponded with the most hydrodynamically efficient mode for tail fin operation. This speed is approximately ten times faster than those for any previously reported artificial muscle based device that also offers real-time speed and directional control. This study contributes to previously published studies on bio-inspired functional devices, demonstrating that electroactive polymer actuators can be real alternatives to conventional means of actuation such as electric motors.

  13. Finding NEMO (novel electromaterial muscle oscillator): a polypyrrole powered robotic fish with real-time wireless speed and directional control

    Science.gov (United States)

    McGovern, Scott; Alici, Gursel; Truong, Van-Tan; Spinks, Geoffrey

    2009-09-01

    This paper presents the development of an autonomously powered and controlled robotic fish that incorporates an active flexural joint tail fin, activated through conducting polymer actuators based on polypyrrole (PPy). The novel electromaterial muscle oscillator (NEMO) tail fin assembly on the fish could be controlled wirelessly in real time by varying the frequency and duty cycle of the voltage signal supplied to the PPy bending-type actuators. Directional control was achieved by altering the duty cycle of the voltage input to the NEMO tail fin, which shifted the axis of oscillation and enabled turning of the robotic fish. At low speeds, the robotic fish had a turning circle as small as 15 cm (or 1.1 body lengths) in radius. The highest speed of the fish robot was estimated to be approximately 33 mm s-1 (or 0.25 body lengths s-1) and was achieved with a flapping frequency of 0.6-0.8 Hz which also corresponded with the most hydrodynamically efficient mode for tail fin operation. This speed is approximately ten times faster than those for any previously reported artificial muscle based device that also offers real-time speed and directional control. This study contributes to previously published studies on bio-inspired functional devices, demonstrating that electroactive polymer actuators can be real alternatives to conventional means of actuation such as electric motors.
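The duty-cycle steering principle described above can be illustrated numerically: an asymmetric drive waveform shifts the mean axis of tail oscillation, turning the fish. A toy model, where the square-wave drive and 30° amplitude are assumptions rather than the paper's actuator parameters:

```python
def tail_angle(t, freq_hz, duty, amplitude_deg=30.0):
    """Asymmetric square-wave tail drive: +amplitude for the first `duty`
    fraction of each period, -amplitude for the rest."""
    phase = (t * freq_hz) % 1.0
    return amplitude_deg if phase < duty else -amplitude_deg

def mean_axis_deg(duty, amplitude_deg=30.0, n=1000):
    """Mean tail deflection over one period, sampled at n phases.
    duty = 0.5 oscillates about zero (straight swimming); duty != 0.5
    shifts the axis of oscillation and turns the fish."""
    return sum(tail_angle(i / n, 1.0, duty, amplitude_deg) for i in range(n)) / n
```

Analytically the mean axis is amplitude x (2 x duty - 1), so a 0.7 duty cycle biases a 30° flap by 12° to one side.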

  14. Distributed management system of a scanning robot programmed real time in APL language

    International Nuclear Information System (INIS)

    Liabot, M.-J.

    1980-08-01

    The aim of this work is to propose an original solution for implementing the control operating system of a robot designed to travel between the main tank and the safety tank of the SUPERPHENIX reactor, scanning the welds with ultrasound measurements. The system consists of: - a MITRA mini-computer programmed in APL that manages the driving unit and defines the scanning strategy (visual unit, checking board...). - a microprocessor that provides the connection between the MITRA and the robot, on which the motor commands and the safety functions are implemented. Such a solution limits the input/output volume in the MITRA and makes it possible to program the system in real time in the APL language [fr

  15. Real-time face and gesture analysis for human-robot interaction

    Science.gov (United States)

    Wallhoff, Frank; Rehrl, Tobias; Mayer, Christoph; Radig, Bernd

    2010-05-01

    Human communication relies on a large number of different communication mechanisms like spoken language, facial expressions, or gestures. Facial expressions and gestures are among the main nonverbal communication mechanisms and pass large amounts of information between human dialog partners. Therefore, to allow for intuitive human-machine interaction, real-time capable processing and recognition of facial expressions and of hand and head gestures is of great importance. We present a system that is tackling these challenges. The input features for the dynamic head gestures and facial expressions are obtained from a sophisticated three-dimensional model, which is fitted to the user in a real-time capable manner. Using this model, different kinds of information are extracted from the image data and afterwards handed over to a real-time capable data-transferring framework, the so-called Real-Time DataBase (RTDB). In addition to the head- and facial-related features, low-level image features regarding the human hand (optical flow, Hu moments) are also stored in the RTDB for the evaluation of hand gestures. In general, the input of a single camera is sufficient for the parallel evaluation of the different gestures and facial expressions. The real-time capable recognition of the dynamic hand and head gestures is performed via different Hidden Markov Models, which have proven to be a quick and real-time capable classification method. For the facial expressions, classical decision trees or more sophisticated support vector machines are used for the classification process. The results of the classification processes are again handed over to the RTDB, where other processes (like a Dialog Management Unit) can easily access them without any blocking effects. In addition, an adjustable amount of history can be stored by the RTDB buffer unit.
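Gesture classification with Hidden Markov Models, as used above for hand and head gestures, amounts to scoring the observation sequence under each gesture's model and picking the best. A minimal discrete-HMM sketch using the forward algorithm; the two toy models below are illustrative, not the paper's trained ones:

```python
import math

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [emit[s][o] * sum(alpha[p] * trans[p][s] for p in range(n))
                 for s in range(n)]
    return math.log(sum(alpha))

def classify(obs, models):
    """Pick the gesture whose HMM assigns the sequence the highest likelihood.
    models maps gesture name -> (start, trans, emit)."""
    return max(models, key=lambda name: forward_loglik(obs, *models[name]))
```

With one model biased toward emitting symbol 0 and another toward symbol 1, sequences of the matching symbol are assigned to the matching gesture.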

  16. Real-time, wide-area hyperspectral imaging sensors for standoff detection of explosives and chemical warfare agents

    Science.gov (United States)

    Gomer, Nathaniel R.; Tazik, Shawna; Gardner, Charles W.; Nelson, Matthew P.

    2017-05-01

    Hyperspectral imaging (HSI) is a valuable tool for the detection and analysis of targets located within complex backgrounds. HSI can detect threat materials on environmental surfaces, where the concentration of the target of interest is often very low and is typically found within complex scenery. Unfortunately, current generation HSI systems have size, weight, and power limitations that prohibit their use for field-portable and/or real-time applications. Current generation systems commonly provide an inefficient area search rate, require close proximity to the target for screening, and/or are not capable of making real-time measurements. ChemImage Sensor Systems (CISS) is developing a variety of real-time, wide-field hyperspectral imaging systems that utilize shortwave infrared (SWIR) absorption and Raman spectroscopy. SWIR HSI sensors provide wide-area imagery at or near real-time detection speeds. Raman HSI sensors are being developed to overcome two obstacles present in standard Raman detection systems: slow area search rate (due to small laser spot sizes) and lack of eye-safety. SWIR HSI sensors have been integrated into mobile, robot-based platforms and handheld variants for the detection of explosives and chemical warfare agents (CWAs). In addition, the fusion of these two technologies into a single system has shown the feasibility of using both techniques concurrently to provide higher probability of detection and lower false alarm rates. This paper will provide background on Raman and SWIR HSI, discuss the applications for these techniques, and provide an overview of novel CISS HSI sensors focusing on sensor design and detection results.

  17. A robotic C-arm cone beam CT system for image-guided proton therapy: design and performance.

    Science.gov (United States)

    Hua, Chiaho; Yao, Weiguang; Kidani, Takao; Tomida, Kazuo; Ozawa, Saori; Nishimura, Takenori; Fujisawa, Tatsuya; Shinagawa, Ryousuke; Merchant, Thomas E

    2017-11-01

    A ceiling-mounted robotic C-arm cone beam CT (CBCT) system was developed for use with a 190° proton gantry system and a 6-degree-of-freedom robotic patient positioner. We report on the mechanical design, system accuracy, image quality, image guidance accuracy, imaging dose, workflow, safety and collision-avoidance. The robotic CBCT system couples a rotating C-ring to the C-arm concentrically, with a kV X-ray tube and a flat-panel imager mounted to the C-ring. CBCT images are acquired with flex correction and maximally 360° rotation for a 53 cm field of view. The system was designed for clinical use with three imaging locations. Anthropomorphic phantoms were imaged to evaluate the image guidance accuracy. The position accuracy and repeatability of the robotic C-arm were high. The robotic CBCT system provides high-accuracy volumetric image guidance for proton therapy. Advances in knowledge: Ceiling-mounted robotic CBCT provides a viable alternative to CT on-rails for partial gantry and fixed-beam proton systems, with the added advantage of acquiring images at the treatment isocentre.

  18. A Multi-ASIC Real-Time Implementation of the Two Dimensional Affine Transform with a Bilinear Interpolation Scheme

    NARCIS (Netherlands)

    Bentum, Marinus Jan; Samsom, M.M.; Samsom, Martin M.; Slump, Cornelis H.

    1995-01-01

    Some image processing applications (e.g. computer graphics and robot vision) require the rotation, scaling and translation of digitized images in real-time (25–30 images per second). Today's standard image processors cannot meet this timing constraint, so other solutions have to be considered.
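The operation implemented in hardware above, a 2-D affine transform with bilinear interpolation, can be sketched in software. A pure-Python reference illustration (not the multi-ASIC design), using the usual inverse mapping so every output pixel samples the source image:

```python
import math

def affine_bilinear(img, matrix):
    """Apply an inverse-mapped 2-D affine transform with bilinear interpolation.
    img: list of rows of floats; matrix: 2x3 inverse transform (dst -> src).
    Output pixels whose source sample falls outside the image stay 0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    (a, b, tx), (c, d, ty) = matrix
    for y in range(h):
        for x in range(w):
            # map destination pixel back to (possibly fractional) source coords
            sx, sy = a * x + b * y + tx, c * x + d * y + ty
            x0, y0 = int(math.floor(sx)), int(math.floor(sy))
            if 0 <= x0 < w - 1 and 0 <= y0 < h - 1:
                fx, fy = sx - x0, sy - y0
                # weighted average of the 2x2 neighbourhood
                out[y][x] = ((1 - fx) * (1 - fy) * img[y0][x0]
                             + fx * (1 - fy) * img[y0][x0 + 1]
                             + (1 - fx) * fy * img[y0 + 1][x0]
                             + fx * fy * img[y0 + 1][x0 + 1])
    return out
```

A hardware pipeline performs exactly this per-pixel arithmetic in parallel; the interpolation weights are what make sub-pixel shifts smooth.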

  19. Real-time maneuver optimization of space-based robots in a dynamic environment: Theory and on-orbit experiments

    Science.gov (United States)

    Chamitoff, Gregory E.; Saenz-Otero, Alvar; Katz, Jacob G.; Ulrich, Steve; Morrell, Benjamin J.; Gibbens, Peter W.

    2018-01-01

    This paper presents the development of a real-time path-planning optimization approach to controlling the motion of space-based robots. The algorithm is capable of planning three dimensional trajectories for a robot to navigate within complex surroundings that include numerous static and dynamic obstacles, path constraints and performance limitations. The methodology employs a unique transformation that enables rapid generation of feasible solutions for complex geometries, making it suitable for application to real-time operations and dynamic environments. This strategy was implemented on the Synchronized Position Hold Engage Reorient Experimental Satellite (SPHERES) test-bed on the International Space Station (ISS), and experimental testing was conducted onboard the ISS during Expedition 17 by the first author. Lessons learned from the on-orbit tests were used to further refine the algorithm for future implementations.

  20. A Filtering Approach for Image-Guided Surgery With a Highly Articulated Surgical Snake Robot.

    Science.gov (United States)

    Tully, Stephen; Choset, Howie

    2016-02-01

    The objective of this paper is to introduce a probabilistic filtering approach to estimate the pose and internal shape of a highly flexible surgical snake robot during minimally invasive surgery. Our approach renders a depiction of the robot that is registered to preoperatively reconstructed organ models to produce a 3-D visualization that can be used for surgical feedback. Our filtering method estimates the robot shape using an extended Kalman filter that fuses magnetic tracker data with kinematic models that define the motion of the robot. Using Lie derivative analysis, we show that this estimation problem is observable, and thus, the shape and configuration of the robot can be successfully recovered with a sufficient number of magnetic tracker measurements. We validate this study with benchtop and in-vivo image-guidance experiments in which the surgical robot was driven along the epicardial surface of a porcine heart. This paper introduces a filtering approach for shape estimation that can be used for image guidance during minimally invasive surgery. The methods being introduced in this paper enable informative image guidance for highly articulated surgical robots, which benefits the advancement of robotic surgery.
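The fusion of kinematic motion predictions with magnetic-tracker measurements is the heart of the extended Kalman filter described above. Its linear, per-axis skeleton can be sketched as follows; the noise values are illustrative, and the actual filter estimates the full pose and shape state rather than a scalar:

```python
def kalman_step(x, P, u, z, q=0.01, r=0.25):
    """One predict/update cycle of a scalar Kalman filter (the linear core
    of an EKF). x, P: current estimate and variance; u: commanded advance
    from the kinematic model; z: magnetic-tracker reading; q, r: process
    and measurement noise variances (assumed values)."""
    # predict: propagate the estimate through the kinematic model
    x_pred, P_pred = x + u, P + q
    # update: blend the prediction with the tracker measurement
    K = P_pred / (P_pred + r)
    return x_pred + K * (z - x_pred), (1 - K) * P_pred
```

The Kalman gain K weights the tracker against the model: a noisy tracker (large r) pulls the estimate toward the kinematic prediction, and vice versa.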

  1. Image-guided macular laser therapy: design considerations and progress toward implementation

    Science.gov (United States)

    Berger, Jeffrey W.; Shin, David S.

    1999-06-01

    Laser therapy is currently the only treatment of proven benefit for exudative age related macular degeneration and diabetic retinopathy. To guide treatment for macular diseases, investigations were initiated to permit overlay of previously-stored angiographic images and image sequences superimposed onto the real-time biomicroscopic fundus image. Prior to treatment, a set of partially overlapping fundus images is acquired and montaged in order to provide a map for subsequent tracking operations. A binocular slit-lamp biomicroscope interfaced to a CCD camera, framegrabber board, and PC permits acquisition and rendering of retinal images. Computer-vision algorithms facilitate robust tracking, registration, and near-video-rate image overlay of previously-stored retinal photographic and angiographic images onto the real-time fundus image. Laser treatment is guided in this augmented reality environment where the borders of the treatment target--for example, the boundaries of a choroidal neovascularization complex--are easily identified through overlay of angiographic information superimposed on, and registered with, the real-time fundus image. During periods of misregistration as judged by the amplitude of the tracking similarity metric, laser function is disabled, affording additional safety. Image-guided macular laser therapy should facilitate accurate targeting of treatable lesions and less unintentional retinal injury when compared with standard techniques.

  2. Real-time particle image velocimetry based on FPGA technology

    International Nuclear Information System (INIS)

    Iriarte Munoz, Jose Miguel

    2008-01-01

    Particle image velocimetry (PIV), based on a laser sheet, is a method for image processing and calculation of distributed velocity fields. It is well established as a fluid dynamics measurement tool, being applied to liquids, gases and multiphase flows. Images of particles are processed by means of computationally demanding algorithms, which makes real-time implementation difficult. The most probable displacements are found by applying a two-dimensional cross-correlation function. In this work, we detail how real-time visualization of the PIV method can be achieved by designing an adaptive embedded architecture based on FPGA technology. We show first results of a physical velocity field calculated by this platform in a real-time approach. [es
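The core PIV computation, locating the peak of the 2-D cross-correlation between two interrogation windows, can be sketched in software. A direct, unoptimized Python illustration (the FPGA design pipelines this arithmetic; real implementations typically use FFT-based correlation):

```python
def piv_displacement(win_a, win_b, max_shift=3):
    """Most probable displacement (dx, dy) between two interrogation windows,
    found as the shift that maximizes the direct 2-D cross-correlation."""
    h, w = len(win_a), len(win_a[0])
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            s = 0.0
            for y in range(h):
                for x in range(w):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < h and 0 <= x2 < w:
                        s += win_a[y][x] * win_b[y2][x2]
            if best is None or s > best:
                best, best_shift = s, (dx, dy)
    return best_shift
```

Sliding a window of particle images from frame A over frame B, the correlation peak marks how far the particles moved between laser pulses; dividing by the inter-frame time gives the local velocity vector.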

  3. TU-FG-BRB-11: Design and Evaluation of a Robotic C-Arm CBCT System for Image-Guided Proton Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Hua, C; Yao, W; Farr, J; Merchant, T [St. Jude Children’s Research Hospital, Memphis, TN (United States); Kidani, T; Tomida, K; Ozawa, S; Nishimura, T; Fujusawa, T; Shinagawa, R [Hitachi, Ltd., Hitachi-shi, Ibaraki-ken (Japan)

    2016-06-15

    Purpose: To describe the design and performance of a ceiling-mounted robotic C-arm CBCT system for image-guided proton therapy. Methods: Uniquely different from traditional C-arm CBCT used in interventional radiology, the imaging system was designed to provide volumetric image guidance for patients treated on a 190-degree proton gantry system and a 6 degree-of-freedom (DOF) robotic patient positioner. The mounting of robotic arms to the ceiling rails, rather than gantry or nozzle, provides the flexibility in imaging locations (isocenter, iso+27cm in X, iso+100cm in Y) in the room and easier upgrade as technology advances. A kV X-ray tube and a 43×43cm flat panel imager were mounted to a rotating C-ring (87cm diameter), which is coupled to the C-arm concentrically. Both C-arm and the robotic arm remain stationary during imaging to maintain high position accuracy. Source-to-axis distance and source-to-imager distance are 100 and 150cm, respectively. A 14:1 focused anti-scatter grid and a bowtie filter are used for image acquisition. A unique automatic collimator device of 4 independent blades for adjusting field of view and reducing patient dose has also been developed. Results: Sub-millimeter position accuracy and repeatability of the robotic C-arm were measured with a laser tracker. High quality CBCT images for positioning can be acquired with a weighted CTDI of 3.6mGy (head in 200° full fan mode: 100kV, 20mA, 20ms, 10fps)-8.7 mGy (pelvis in 360° half fan mode: 125kV, 42mA, 20ms, 10fps). Image guidance accuracy achieved <1mm (3D vector) with automatic 3D-3D registration for anthropomorphic head and pelvis phantoms. Since November 2015, 22 proton therapy patients have undergone daily CBCT imaging for 6 DOF positioning. Conclusion: Decoupled from gantry and nozzle, this CBCT system provides a unique solution for volumetric image guidance with half/partial proton gantry systems. We demonstrated that daily CBCT can be integrated into proton therapy for pre

  4. TU-FG-BRB-11: Design and Evaluation of a Robotic C-Arm CBCT System for Image-Guided Proton Therapy

    International Nuclear Information System (INIS)

    Hua, C; Yao, W; Farr, J; Merchant, T; Kidani, T; Tomida, K; Ozawa, S; Nishimura, T; Fujusawa, T; Shinagawa, R

    2016-01-01

    Purpose: To describe the design and performance of a ceiling-mounted robotic C-arm CBCT system for image-guided proton therapy. Methods: Uniquely different from traditional C-arm CBCT used in interventional radiology, the imaging system was designed to provide volumetric image guidance for patients treated on a 190-degree proton gantry system and a 6 degree-of-freedom (DOF) robotic patient positioner. The mounting of robotic arms to the ceiling rails, rather than gantry or nozzle, provides the flexibility in imaging locations (isocenter, iso+27cm in X, iso+100cm in Y) in the room and easier upgrade as technology advances. A kV X-ray tube and a 43×43cm flat panel imager were mounted to a rotating C-ring (87cm diameter), which is coupled to the C-arm concentrically. Both C-arm and the robotic arm remain stationary during imaging to maintain high position accuracy. Source-to-axis distance and source-to-imager distance are 100 and 150cm, respectively. A 14:1 focused anti-scatter grid and a bowtie filter are used for image acquisition. A unique automatic collimator device of 4 independent blades for adjusting field of view and reducing patient dose has also been developed. Results: Sub-millimeter position accuracy and repeatability of the robotic C-arm were measured with a laser tracker. High quality CBCT images for positioning can be acquired with a weighted CTDI of 3.6mGy (head in 200° full fan mode: 100kV, 20mA, 20ms, 10fps)-8.7 mGy (pelvis in 360° half fan mode: 125kV, 42mA, 20ms, 10fps). Image guidance accuracy achieved <1mm (3D vector) with automatic 3D-3D registration for anthropomorphic head and pelvis phantoms. Since November 2015, 22 proton therapy patients have undergone daily CBCT imaging for 6 DOF positioning. Conclusion: Decoupled from gantry and nozzle, this CBCT system provides a unique solution for volumetric image guidance with half/partial proton gantry systems. We demonstrated that daily CBCT can be integrated into proton therapy for pre

  5. Keyframes Global Map Establishing Method for Robot Localization through Content-Based Image Matching

    Directory of Open Access Journals (Sweden)

    Tianyang Cao

    2017-01-01

    Full Text Available Self-localization and mapping are important for indoor mobile robots. We report a robust algorithm for map building and subsequent localization especially suited for indoor floor-cleaning robots. Common methods such as SLAM can easily be kidnapped by collisions or confused by similar-looking objects. Therefore, a keyframe-based global-map building method for robot localization in multiple rooms and corridors is needed. Content-based image matching is the core of this method. It is designed for this situation by establishing keyframes containing both floor and distorted wall images. Image distortion, caused by the robot's view angle and movement, is analyzed and derived. An image matching solution is presented, consisting of extraction of the overlapping regions of keyframes and rebuilding of the overlap region through subblock matching. To improve accuracy, ceiling-point detection and mismatched-subblock checking methods are incorporated. This matching method can process environment video effectively. In experiments, less than 5% of frames are extracted as keyframes to build the global map; they are widely spaced and overlap each other. Through this method, the robot can localize itself by matching its real-time vision frames with the keyframes map. Even with many similar objects in the environment, or when the robot is kidnapped, localization is achieved with position RMSE <0.5 m.
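Content-based keyframe selection of the kind described above can be sketched with normalized cross-correlation: a frame becomes a new keyframe only when it no longer matches the previous keyframe well. A simplified illustration on flattened image blocks; the 0.9 threshold is an assumption, not the paper's parameter:

```python
import math

def ncc(a, b):
    """Normalized cross-correlation between two equal-length flattened blocks,
    in [-1, 1]; 1.0 means identical up to brightness/contrast."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def select_keyframes(frames, threshold=0.9):
    """Keep the first frame, then add a frame as a keyframe whenever its
    match score against the last keyframe drops below the threshold."""
    keys = [frames[0]]
    for f in frames[1:]:
        if ncc(keys[-1], f) < threshold:
            keys.append(f)
    return keys
```

Because consecutive frames of a slow-moving robot correlate strongly, only a small fraction of frames survive as keyframes, consistent with the under-5% figure reported above.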

  6. Combined kV and MV imaging for real-time tracking of implanted fiducial markers

    International Nuclear Information System (INIS)

    Wiersma, R. D.; Mao Weihua; Xing, L.

    2008-01-01

    In the presence of intrafraction organ motion, target localization uncertainty can greatly hamper the advantage of highly conformal dose techniques such as intensity modulated radiation therapy (IMRT). To minimize the adverse dosimetric effect caused by tumor motion, a real-time knowledge of the tumor position is required throughout the beam delivery process. The recent integration of onboard kV diagnostic imaging together with MV electronic portal imaging devices on linear accelerators can allow for real-time three-dimensional (3D) tumor position monitoring during a treatment delivery. The aim of this study is to demonstrate a near real-time 3D internal fiducial tracking system based on the combined use of kV and MV imaging. A commercially available radiotherapy system equipped with both kV and MV imaging systems was used in this work. A hardware video frame grabber was used to capture both kV and MV video streams simultaneously through independent video channels at 30 frames per second. The fiducial locations were extracted from the kV and MV images using a software tool. The geometric tracking capabilities of the system were evaluated using a pelvic phantom with embedded fiducials placed on a moveable stage. The maximum tracking speed of the kV/MV system is approximately 9 Hz, which is primarily limited by the frame rate of the MV imager. The geometric accuracy of the system is found to be less than 1 mm in all three spatial dimensions. The technique requires minimal hardware modification and is potentially useful for image-guided radiation therapy systems.
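The fusion of simultaneous kV and MV projections into a 3-D marker position can be illustrated under an idealized orthogonal, parallel-beam geometry. This is a simplification of the actual divergent-beam geometry (which requires back-projecting rays from each source and intersecting them); the axis conventions and function name are illustrative:

```python
def triangulate(kv_xy, mv_xy):
    """Fuse two orthogonal 2-D marker projections into a 3-D position.
    Assumed geometry: the kV view projects onto the (x, z) plane and the
    MV view onto the (y, z) plane; the shared z coordinate is averaged
    to reduce detection noise."""
    (x, z_kv), (y, z_mv) = kv_xy, mv_xy
    return x, y, (z_kv + z_mv) / 2.0
```

Each 2-D view constrains two of the three coordinates; combining views taken along different beam axes recovers the full 3-D fiducial position at every synchronized frame pair.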

  7. An ultra-high field strength MR image-guided robotic needle delivery system for in-bore small animal interventions.

    Science.gov (United States)

    Gravett, Matthew; Cepek, Jeremy; Fenster, Aaron

    2017-11-01

    The purpose of this study was to develop and validate an image-guided robotic needle delivery system for accurate and repeatable needle targeting procedures in mouse brains inside the 12 cm inner diameter gradient coil insert of a 9.4 T MR scanner. Many preclinical research techniques require the use of accurate needle deliveries to soft tissues, including brain tissue. Soft tissues are optimally visualized in MR images, which offer high soft-tissue contrast as well as a range of unique imaging techniques, including functional, spectroscopy and thermal imaging; however, there are currently no solutions for delivering needles to small animal brains inside the bore of an ultra-high field MR scanner. This paper describes the mechatronic design, evaluation of MR compatibility, registration technique, mechanical calibration, and the quantitative validation of the in-bore image-guided needle targeting accuracy and repeatability, and demonstrates the system's ability to deliver needles in situ. Our six degree-of-freedom, MR compatible, mechatronic system was designed to fit inside the bore of a 9.4 T MR scanner and is actuated using a combination of piezoelectric and hydraulic mechanisms. The MR compatibility and targeting accuracy of the needle delivery system are evaluated to ensure that the system is precisely calibrated to perform the needle targeting procedures. A semi-automated image registration is performed to link the robot coordinates to the MR coordinate system. Soft tissue targets can be accurately localized in MR images, followed by automatic alignment of the needle trajectory to the target. Intra-procedure visualization of the needle target location and the needle was confirmed through MR images after needle insertion. The effects of geometric distortions and signal noise were found to be below the threshold that would have an impact on the accuracy of the system. The system was found to have a negligible effect on MR image signal noise and geometric distortion.

  8. Study on real-time force feedback for a master-slave interventional surgical robotic system.

    Science.gov (United States)

    Guo, Shuxiang; Wang, Yuan; Xiao, Nan; Li, Youxiang; Jiang, Yuhua

    2018-04-13

    In robot-assisted catheterization, haptic feedback is important, but is currently lacking. In addition, conventional interventional surgical robotic systems typically employ a master-slave architecture with an open-loop force feedback, which results in inaccurate control. We develop herein a novel real-time master-slave (RTMS) interventional surgical robotic system with a closed-loop force feedback that allows a surgeon to sense the true force during remote operation, provide adequate haptic feedback, and improve control accuracy in robot-assisted catheterization. As part of this system, we also design a unique master control handle that measures the true force felt by a surgeon, providing the basis for the closed-loop control of the entire system. We use theoretical and empirical methods to demonstrate that the proposed RTMS system provides a surgeon (using the master control handle) with a more accurate and realistic force sensation, which subsequently improves the precision of the master-slave manipulation. The experimental results show a substantial increase in the control accuracy of the force feedback and an increase in operational efficiency during surgery.

  9. A fuzzy logic based navigation for mobile robot

    International Nuclear Information System (INIS)

    Adel Ali S Al-Jumaily; Shamsudin M Amin; Mohamed Khalil

    1998-01-01

    The main issue for an intelligent robot is how to reach its goal safely in real time when it moves in an unknown environment. Navigational planning is becoming the central issue in the development of real-time autonomous mobile robots. Behaviour-based robots have been successful in reacting to dynamic environments, but some complex and challenging problems remain. Fuzzy-based behaviours offer a powerful method for solving real-time reactive navigation problems in unknown environments. We classify the navigation generation methods, give some characteristics of these methods, explain why fuzzy logic is suitable for the navigation of mobile robots and automated guided vehicles, and describe a reactive navigation scheme that is flexible enough to react, through its behaviours, to changes in the environment. Some simulation results are presented to show the navigation of the robot. (Author)
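Such a fuzzy reactive behaviour can be sketched in a few lines. The triangular membership functions, distance ranges, and rule consequents below are invented for illustration and are not taken from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steer(obstacle_dist, max_dist=2.0):
    """Fuzzy reactive steering: the nearer the obstacle, the sharper the
    avoidance turn (degrees). Ranges and consequents are illustrative."""
    d = min(obstacle_dist, max_dist)
    near = tri(d, -0.1, 0.0, 1.0)
    mid = tri(d, 0.0, 1.0, 2.0)
    far = tri(d, 1.0, 2.0, 2.1)
    # Rules with singleton consequents: near -> 60 deg, medium -> 25 deg, far -> 0 deg,
    # combined by centroid (weighted-average) defuzzification.
    num = near * 60.0 + mid * 25.0 + far * 0.0
    den = near + mid + far
    return num / den if den > 0.0 else 0.0
```

Because the rule firing strengths blend smoothly, the steering command varies continuously with the obstacle distance rather than switching abruptly between behaviours.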

  10. Real-time Avatar Animation from a Single Image.

    Science.gov (United States)

    Saragih, Jason M; Lucey, Simon; Cohn, Jeffrey F

    2011-01-01

    A real-time facial puppetry system is presented. Compared with existing systems, the proposed method requires no special hardware, runs in real time (23 frames per second), and requires only a single image of the avatar and user. The user's facial expression is captured through a real-time 3D non-rigid tracking system. Expression transfer is achieved by combining a generic expression model with synthetically generated examples that better capture person-specific characteristics. Performance of the system is evaluated on avatars of real people as well as masks and cartoon characters.

  11. SU-G-BRA-09: Estimation of Motion Tracking Uncertainty for Real-Time Adaptive Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Yan, H [Capital Medical University, Beijing, Beijing (China); Chen, Z [Yale New Haven Hospital, New Haven, CT (United States); Nath, R; Liu, W [Yale University School of Medicine, New Haven, CT (United States)

    2016-06-15

    Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real-time. Methods: We simulated motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point’s corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding 2.5mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the

  12. SU-G-BRA-09: Estimation of Motion Tracking Uncertainty for Real-Time Adaptive Imaging

    International Nuclear Information System (INIS)

    Yan, H; Chen, Z; Nath, R; Liu, W

    2016-01-01

    Purpose: kV fluoroscopic imaging combined with MV treatment beam imaging has been investigated for intrafractional motion monitoring and correction. It is, however, subject to additional kV imaging dose to normal tissue. To balance tracking accuracy and imaging dose, we previously proposed an adaptive imaging strategy to dynamically decide future imaging type and moments based on motion tracking uncertainty. kV imaging may be used continuously for maximal accuracy or only when the position uncertainty (probability of out of threshold) is high if a preset imaging dose limit is considered. In this work, we propose more accurate methods to estimate tracking uncertainty through analyzing acquired data in real-time. Methods: We simulated motion tracking process based on a previously developed imaging framework (MV + initial seconds of kV imaging) using real-time breathing data from 42 patients. Motion tracking errors for each time point were collected together with the time point’s corresponding features, such as tumor motion speed and 2D tracking error of previous time points, etc. We tested three methods for error uncertainty estimation based on the features: conditional probability distribution, logistic regression modeling, and support vector machine (SVM) classification to detect errors exceeding a threshold. Results: For conditional probability distribution, polynomial regressions on three features (previous tracking error, prediction quality, and cosine of the angle between the trajectory and the treatment beam) showed strong correlation with the variation (uncertainty) of the mean 3D tracking error and its standard deviation: R-square = 0.94 and 0.90, respectively. The logistic regression and SVM classification successfully identified about 95% of tracking errors exceeding 2.5mm threshold. Conclusion: The proposed methods can reliably estimate the motion tracking uncertainty in real-time, which can be used to guide adaptive additional imaging to confirm the
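The uncertainty-estimation idea in the two records above can be illustrated with a toy polynomial regression. The data are synthetic, and the single feature (tumor speed) and the 2.5 mm action threshold are stand-ins for the features and thresholds the authors actually used:

```python
import numpy as np

# Synthetic stand-in data: tracking error grows with tumor speed (hypothetical model).
rng = np.random.default_rng(0)
speed = rng.uniform(0.0, 10.0, 500)                        # feature: tumor speed (mm/s)
err = 0.2 + 0.05 * speed**2 + rng.normal(0.0, 0.1, 500)    # mean 3D tracking error (mm)

# Polynomial regression of error on the feature (here: quadratic).
coef = np.polyfit(speed, err, 2)
pred = np.polyval(coef, speed)

# Goodness of fit (R-square), analogous to the correlations reported in the abstract.
r2 = 1.0 - np.sum((err - pred) ** 2) / np.sum((err - np.mean(err)) ** 2)

# Flag time points whose predicted error exceeds an action threshold,
# which in the adaptive scheme would trigger additional kV imaging.
flag = pred > 2.5
```

The flagged time points are exactly the moments when position uncertainty is high, so extra imaging dose is spent only where the tracking model is least trustworthy.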

  13. Pseudo real-time imaging systems with nonredundant pinhole arrays

    International Nuclear Information System (INIS)

    Han, K.S.; Berzins, G.J.; Roach, W.H.

    1976-01-01

    Coded aperture techniques, because of their efficiency and three-dimensional information content, represent potentially powerful tools for LMFBR safety experiment diagnostics. These techniques should be even more powerful if the data can be interpreted in real time or in pseudo real time. For example, to satisfy the stated goals for LMFBR diagnostics (1-ms time resolution and 1-mm spatial resolution), it is conceivable that several hundred frames of coded data would be recorded. To unscramble all of this information into reconstructed images could be a laborious, time-consuming task. A way to circumvent the tedium is with the use of the described hybrid digital/analog real-time imaging system. Some intermediate results are described briefly

  14. Mobile instrumentation platform and robotic accessory for real-time screening of hazardous waste

    International Nuclear Information System (INIS)

    Anderson, M.S.; Jaselskis, E.J.

    1992-01-01

    An innovative mobile laboratory for real-time field screening of soils for inorganic hazardous waste, using a laser ablation-inductively coupled plasma-atomic emission spectrometry sampling and analysis technique, is being developed at Ames Laboratory. This sampling technique, as well as the concept for installing, monitoring, and controlling the instrumentation and utilities in the mobile laboratory, the robotic sampling accessory, and the manual sampling method, are discussed. Benefits of this mobile configuration and future development plans are also described

  15. SU-D-BRF-06: A Brachytherapy Simulator with Realistic Haptic Force Feedback and Real-Time Ultrasounds Image Simulation for Training and Teaching

    International Nuclear Information System (INIS)

    Beaulieu, L; Carette, A; Comtois, S; Lavigueur, M; Cardou, P; Laurendeau, D

    2014-01-01

    Purpose: Surgical procedures require dexterity, expertise and repetition to reach optimal patient outcomes. However, efficient training opportunities are usually limited. This work presents a simulator system with realistic haptic force feedback and full, real-time ultrasound image simulation. Methods: The simulator is composed of a custom-made Linear-DELTA force-feedback robotic platform. The needle tip is mounted on a force gauge at the end effector of the robot, which responds to needle insertion by providing reaction forces. The 3D geometry of the tissue is represented by a tetrahedral finite element mesh (FEM) mimicking tissue properties. As the needle is inserted/retracted, tissue deformation is computed using a mass-tensor nonlinear visco-elastic FEM. The real-time deformation is fed to the L-DELTA to take into account the force imparted to the needle, providing feedback to the end-user when crossing tissue boundaries or bending the needle. A real-time 2D US image is also generated synchronously, showing anatomy, needle insertion and tissue deformation. The simulator runs on an Intel i7 6-core CPU at 3.26 GHz. 3D tissue rendering and ultrasound display are performed on a Windows 7 computer; the FEM computation and L-DELTA control are executed on a similar PC using the Neutrino real-time OS. Both machines communicate through an Ethernet link. Results: The system runs at 500 Hz for an 8333-tetrahedron tissue mesh and a 100-node angular spring needle model. This frame rate ensures a relatively smooth displacement of the needle when pushed or retracted (±20 N in all directions at speeds of up to 2 m/s). Unlike commercially-available haptic platforms, the oblong workspace of the L-DELTA robot complies with that required for brachytherapy needle displacements of 0.1 m by 0.1 m by 0.25 m. Conclusion: We have demonstrated a realistic brachytherapy simulator developed for prostate implants (LDR/HDR). The platform could be adapted to other sites or training for other

  16. Real-time interpolation for true 3-dimensional ultrasound image volumes.

    Science.gov (United States)

    Ji, Songbai; Roberts, David W; Hartov, Alex; Paulsen, Keith D

    2011-02-01

    We compared trilinear interpolation to voxel nearest neighbor and distance-weighted algorithms for fast and accurate processing of true 3-dimensional ultrasound (3DUS) image volumes. In this study, the computational efficiency and interpolation accuracy of the 3 methods were compared on the basis of a simulated 3DUS image volume, 34 clinical 3DUS image volumes from 5 patients, and 2 experimental phantom image volumes. We show that trilinear interpolation improves interpolation accuracy over both the voxel nearest neighbor and distance-weighted algorithms yet achieves real-time computational performance that is comparable to the voxel nearest neighbor algorithm (1-2 orders of magnitude faster than the distance-weighted algorithm) as well as the fastest pixel-based algorithms for processing tracked 2-dimensional ultrasound images (0.035 seconds per 2-dimensional cross-sectional image [76,800 pixels interpolated, or 0.46 ms/1000 pixels] and 1.05 seconds per full volume with a 1-mm(3) voxel size [4.6 million voxels interpolated, or 0.23 ms/1000 voxels]). On the basis of these results, trilinear interpolation is recommended as a fast and accurate interpolation method for rectilinear sampling of 3DUS image acquisitions, which is required to facilitate subsequent processing and display during operating room procedures such as image-guided neurosurgery.
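A minimal sketch of the comparison, assuming a rectilinear volume indexed as vol[x, y, z] (the clinical pipeline details are not reproduced here): trilinear interpolation is exact for a linear intensity field, while a voxel nearest-neighbor lookup snaps to voxel centers.

```python
import numpy as np

def nearest(vol, pts):
    """Voxel nearest-neighbor lookup at fractional (x, y, z) coordinates."""
    idx = np.rint(pts).astype(int)
    return vol[idx[:, 0], idx[:, 1], idx[:, 2]]

def trilinear(vol, pts):
    """Trilinear interpolation: blend the 8 surrounding voxels by volume weights."""
    p0 = np.floor(pts).astype(int)
    f = pts - p0                      # fractional offset inside the cell
    out = np.zeros(len(pts))
    for corner in range(8):
        d = np.array([(corner >> k) & 1 for k in range(3)])
        w = np.prod(np.where(d, f, 1.0 - f), axis=1)
        out += w * vol[p0[:, 0] + d[0], p0[:, 1] + d[1], p0[:, 2] + d[2]]
    return out
```

For a linear ramp such as vol[x, y, z] = x + 2y + 3z, trilinear reproduces the exact intensity at any fractional point, which is why it improves accuracy while remaining a cheap 8-voxel blend.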

  17. A fast position estimation method for a control rod guide tube inspection robot with a single camera

    International Nuclear Information System (INIS)

    Lee, Jae C.; Seop, Jun H.; Choi, Yu R.; Kim, Jae H.

    2004-01-01

    One of the problems in the inspection of control rod guide tubes using a mobile robot is accurate estimation of the robot's position. The problem is usually explained by the question 'Where am I?'. We can solve this question by a method called dead reckoning using odometers, but it has some inherent drawbacks: the position error grows without bound unless an independent reference is used periodically to reduce the errors. In this paper, we present one method to overcome this drawback by using a vision sensor. Our method is based on the classical Lucas-Kanade algorithm for image tracking. In this algorithm, an optical flow must be calculated at every image frame, so it has an intensive computing load. In order to handle large motions, it is preferable to use a large integration window, but a small integration window is preferable for keeping the details contained in the images. We used the robot's movement information obtained from dead reckoning as an input parameter to the feature tracking algorithm in order to restrict the position of the integration window. By means of this method, we could reduce the size of the integration window without any loss of its ability to handle large motions and could avoid the trade-off in accuracy. We could thus estimate the position of our robot relatively quickly, without intensive computing time and without the inherent drawbacks mentioned above. We studied this algorithm for application to the control rod guide tube inspection robot and tried an inspection without an operator's intervention
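The core idea, estimating a small residual shift inside an integration window whose position is seeded from dead reckoning, can be sketched with a single Lucas-Kanade least-squares step. The synthetic Gaussian-blob scene and the window size are illustrative assumptions, not details from the paper:

```python
import numpy as np

def lk_shift(I, J, center, half=7):
    """One Lucas-Kanade step: estimate the translation of frame J relative to
    frame I inside a window placed at `center` (the seed that, in the paper's
    setting, would come from the robot's dead-reckoning estimate)."""
    cy, cx = center
    win = np.s_[cy - half:cy + half + 1, cx - half:cx + half + 1]
    gy, gx = np.gradient(I)                       # image gradients (axis 0 = y)
    A = np.stack([gx[win].ravel(), gy[win].ravel()], axis=1)
    b = (I[win] - J[win]).ravel()                 # temporal difference
    d, *_ = np.linalg.lstsq(A, b, rcond=None)
    return d                                      # estimated (dx, dy)

# Synthetic scene: a Gaussian blob shifted by (0.4, 0.2) pixels between frames.
yy, xx = np.mgrid[0:40, 0:40]
def blob(cx, cy):
    return np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 18.0)

I = blob(20.0, 20.0)
J = blob(20.4, 20.2)
dx, dy = lk_shift(I, J, (20, 20))
```

Because the window is already centered on the predicted feature location, only the small residual motion must be recovered, which is what lets the window stay small without losing large motions.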

  18. SU-G-JeP3-03: Effect of Robot Pose On Beam Blocking for Ultrasound Guided SBRT of the Prostate

    Energy Technology Data Exchange (ETDEWEB)

    Gerlach, S; Schlaefer, A [Hamburg University of Technology, Hamburg (Germany); Kuhlemann, I; Ernst, F [Universitaet zu Luebeck, Luebeck (Germany); Fuerweger, C [European Cyberknife Center Munich, Munich (Germany)

    2016-06-15

    Purpose: Ultrasound presents a fast, volumetric image modality for real-time tracking of abdominal organ motion. However, ultrasound transducer placement during radiation therapy is challenging. Recently, approaches using robotic arms for intra-treatment ultrasound imaging have been proposed. Good and reliable imaging requires placing the transducer close to the PTV. We studied the effect of a seven-degrees-of-freedom robot on the feasible beam directions. Methods: For five CyberKnife prostate treatment plans we established viewports for the transducer, i.e., points on the patient surface with a soft tissue view towards the PTV. Choosing a feasible transducer pose and using the kinematic redundancy of the KUKA LBR iiwa robot, we considered three robot poses. Poses 1 to 3 had the elbow point anterior, superior, and inferior, respectively. For each pose and each beam starting point, the projections of robot and PTV were computed. We added a 20 mm margin accounting for organ / beam motion. The number of nodes for which the PTV was partially or fully blocked was established. Moreover, the cumulative overlap for each of the poses and the minimum overlap over all poses were computed. Results: The fully and partially blocked nodes ranged from 12% to 20% and 13% to 27%, respectively. Typically, pose 3 caused the fewest blocked nodes. The cumulative overlap ranged from 19% to 29%. Taking the minimum overlap, i.e., considering moving the robot's elbow while maintaining the transducer pose, the cumulative overlap was reduced to 16% to 18% and was 3% to 6% lower than for the best individual pose. Conclusion: Our results indicate that it is possible to identify feasible ultrasound transducer poses and to use the kinematic redundancy of a 7 DOF robot to minimize the impact of the imaging subsystem on the feasible beam directions for ultrasound guided and motion compensated SBRT. Research partially funded by DFG grants ER 817/1-1 and SCHL 1844/3-1.

  19. SU-G-JeP3-03: Effect of Robot Pose On Beam Blocking for Ultrasound Guided SBRT of the Prostate

    International Nuclear Information System (INIS)

    Gerlach, S; Schlaefer, A; Kuhlemann, I; Ernst, F; Fuerweger, C

    2016-01-01

    Purpose: Ultrasound presents a fast, volumetric image modality for real-time tracking of abdominal organ motion. However, ultrasound transducer placement during radiation therapy is challenging. Recently, approaches using robotic arms for intra-treatment ultrasound imaging have been proposed. Good and reliable imaging requires placing the transducer close to the PTV. We studied the effect of a seven-degrees-of-freedom robot on the feasible beam directions. Methods: For five CyberKnife prostate treatment plans we established viewports for the transducer, i.e., points on the patient surface with a soft tissue view towards the PTV. Choosing a feasible transducer pose and using the kinematic redundancy of the KUKA LBR iiwa robot, we considered three robot poses. Poses 1 to 3 had the elbow point anterior, superior, and inferior, respectively. For each pose and each beam starting point, the projections of robot and PTV were computed. We added a 20 mm margin accounting for organ / beam motion. The number of nodes for which the PTV was partially or fully blocked was established. Moreover, the cumulative overlap for each of the poses and the minimum overlap over all poses were computed. Results: The fully and partially blocked nodes ranged from 12% to 20% and 13% to 27%, respectively. Typically, pose 3 caused the fewest blocked nodes. The cumulative overlap ranged from 19% to 29%. Taking the minimum overlap, i.e., considering moving the robot's elbow while maintaining the transducer pose, the cumulative overlap was reduced to 16% to 18% and was 3% to 6% lower than for the best individual pose. Conclusion: Our results indicate that it is possible to identify feasible ultrasound transducer poses and to use the kinematic redundancy of a 7 DOF robot to minimize the impact of the imaging subsystem on the feasible beam directions for ultrasound guided and motion compensated SBRT. Research partially funded by DFG grants ER 817/1-1 and SCHL 1844/3-1.

  20. Unix Philosophy and the Real World: Control Software for Humanoid Robots

    Directory of Open Access Journals (Sweden)

    Neil Thomas Dantam

    2016-03-01

    Robot software combines the challenges of general-purpose and real-time software, requiring complex logic and bounded resource use. Physical safety, particularly for dynamic systems such as humanoid robots, depends on correct software. General-purpose computation has converged on unix-like operating systems -- standardized as POSIX, the Portable Operating System Interface -- for devices from cellular phones to supercomputers. The modular, multi-process design typical of POSIX applications is effective for building complex and reliable software. Absent from POSIX, however, is an interprocess communication mechanism that prioritizes newer data, as typically desired for control of physical systems. We address this need in the Ach communication library, which provides suitable semantics and performance for real-time robot control. Although initially designed for humanoid robots, Ach has broader applicability to complex mechatronic devices -- humanoid and otherwise -- that require real-time coupling of sensors, control, planning, and actuation. The initial user-space implementation of Ach was limited in its ability to receive data from multiple sources. We remove this limitation by implementing Ach as a Linux kernel module, enabling Ach's high-performance and latest-message-favored semantics within conventional POSIX communication pipelines. We discuss how these POSIX interfaces and design principles apply to robot software, and we present a case study using the Ach kernel module for communication on the Baxter robot.
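The latest-message-favored semantics described above can be illustrated with a toy channel. This is a conceptual sketch only, not the real Ach API (Ach is a C library); it shows the behavior FIFO pipes lack: writers never block, and a slow reader skips straight to the newest frame instead of draining a backlog.

```python
import itertools
from collections import deque

class LatestChannel:
    """Toy latest-message-favored channel (illustrative, not the Ach API)."""

    def __init__(self, nslots=8):
        self.slots = deque(maxlen=nslots)   # oldest frames fall off the back
        self.seq = itertools.count(1)       # monotonically increasing sequence numbers

    def put(self, msg):
        """Publish a message; never blocks, may overwrite unread data."""
        self.slots.append((next(self.seq), msg))

    def get_latest(self, last_seen=0):
        """Return (seq, msg) for the newest message, or (last_seen, None)
        if nothing newer than `last_seen` has arrived."""
        if self.slots:
            seq, msg = self.slots[-1]
            if seq > last_seen:
                return seq, msg
        return last_seen, None
```

A control loop polling get_latest always acts on the freshest sensor sample, at the cost of deliberately dropping stale intermediate frames.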

  1. A real-time computational model for estimating kinematics of ankle ligaments.

    Science.gov (United States)

    Zhang, Mingming; Davies, T Claire; Zhang, Yanxin; Xie, Sheng Quan

    2016-01-01

    An accurate assessment of ankle ligament kinematics is crucial in understanding injury mechanisms and can help to improve the treatment of an injured ankle, especially when used in conjunction with robot-assisted therapy. A number of computational models have been developed and validated for assessing the kinematics of ankle ligaments. However, few of them can perform real-time assessment to allow for an input into robotic rehabilitation programs. An ankle computational model was proposed and validated to quantify the kinematics of ankle ligaments as the foot moves in real time. This model consists of three bone segments with three rotational degrees of freedom (DOFs) and 12 ankle ligaments. It takes as inputs three position variables that can be measured from sensors in many robotic ankle devices that detect postures within the foot-ankle environment, and it outputs the kinematics of the ankle ligaments. Validation of this model in terms of ligament length and strain was conducted by comparing it with published data on cadaver anatomy and magnetic resonance imaging. The ligament lengths and strains predicted by the model are in concurrence with those from the published studies but are sensitive to ligament attachment positions. This ankle computational model has the potential to be used in robot-assisted therapy for real-time assessment of ligament kinematics. The results provide information regarding the quantification of kinematics associated with ankle ligaments related to the disability level and can be used for optimizing the robotic training trajectory.
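The ligament-kinematics computation can be illustrated with a stripped-down one-DOF version: rotate an insertion point with the joint and report the straight-line length and engineering strain of the ligament. The attachment coordinates and rest length are invented for the example; the paper's model uses three DOFs and 12 ligaments:

```python
import numpy as np

def rot_x(a):
    """Rotation about the x-axis, standing in for one ankle DOF."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def ligament_state(origin, insertion, angle, rest_length):
    """Length and engineering strain of a ligament modeled as a straight
    segment from a fixed (tibial) origin to a (talar) insertion point
    that moves with the joint rotation."""
    moved = rot_x(angle) @ insertion
    length = float(np.linalg.norm(moved - origin))
    strain = (length - rest_length) / rest_length
    return length, strain
```

This is why the model only needs joint position variables as input: once the attachment points are fixed in each bone frame, every ligament length follows from the rotation alone, which also explains the reported sensitivity to attachment positions.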

  2. Development of a real time imaging-based guidance system of magnetic nanoparticles for targeted drug delivery

    International Nuclear Information System (INIS)

    Zhang, Xingming; Le, Tuan-Anh; Yoon, Jungwon

    2017-01-01

    Targeted drug delivery using magnetic nanoparticles is an efficient technique as molecules can be directed toward specific tissues inside a human body. For the first time, we implemented a real-time imaging-based guidance system of nanoparticles using untethered electromagnetic devices for simultaneous guiding and tracking. In this paper, a low-amplitude-excitation-field magnetic particle imaging (MPI) technique is introduced. Based on this imaging technology, a hybrid system comprised of an electromagnetic actuator and MPI was used to navigate nanoparticles in a non-invasive way. The real-time low-amplitude-excitation-field MPI and electromagnetic actuator of this navigation system are achieved by applying a time-division multiplexing scheme to the coil topology. A one dimensional nanoparticle navigation system was built to demonstrate the feasibility of the proposed approach and it could achieve a 2 Hz navigation update rate with a field gradient of 3.5 T/m during the imaging mode and 8.75 T/m during the actuation mode. Particles with both 90 nm and 5 nm diameters could be successfully manipulated and monitored in a tube through the proposed system, which can significantly enhance targeting efficiency and allow precise analysis in real drug delivery. - Highlights: • A real-time system comprised of an electromagnetic actuator and a low-amplitude-excitation-field MPI can navigate magnetic nanoparticles. • The imaging scheme is feasible to enlarge field of view size. • The proposed navigation system can be cost efficient, compact, and optimized for targeting of the nanoparticles.

  3. Development of a real time imaging-based guidance system of magnetic nanoparticles for targeted drug delivery

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Xingming [School of Naval Architecture and Ocean Engineering, Harbin Institute of Technology at Weihai, Weihai, Shandong (China); School of Mechanical and Aerospace Engineering & ReCAPT, Gyeongsang National University, Jinju 660-701 (Korea, Republic of); Le, Tuan-Anh [School of Mechanical and Aerospace Engineering & ReCAPT, Gyeongsang National University, Jinju 660-701 (Korea, Republic of); Yoon, Jungwon, E-mail: jwyoon@gnu.ac.kr [School of Mechanical and Aerospace Engineering & ReCAPT, Gyeongsang National University, Jinju 660-701 (Korea, Republic of)

    2017-04-01

    Targeted drug delivery using magnetic nanoparticles is an efficient technique as molecules can be directed toward specific tissues inside a human body. For the first time, we implemented a real-time imaging-based guidance system of nanoparticles using untethered electromagnetic devices for simultaneous guiding and tracking. In this paper, a low-amplitude-excitation-field magnetic particle imaging (MPI) technique is introduced. Based on this imaging technology, a hybrid system comprised of an electromagnetic actuator and MPI was used to navigate nanoparticles in a non-invasive way. The real-time low-amplitude-excitation-field MPI and electromagnetic actuator of this navigation system are achieved by applying a time-division multiplexing scheme to the coil topology. A one dimensional nanoparticle navigation system was built to demonstrate the feasibility of the proposed approach and it could achieve a 2 Hz navigation update rate with a field gradient of 3.5 T/m during the imaging mode and 8.75 T/m during the actuation mode. Particles with both 90 nm and 5 nm diameters could be successfully manipulated and monitored in a tube through the proposed system, which can significantly enhance targeting efficiency and allow precise analysis in real drug delivery. - Highlights: • A real-time system comprised of an electromagnetic actuator and a low-amplitude-excitation-field MPI can navigate magnetic nanoparticles. • The imaging scheme is feasible to enlarge field of view size. • The proposed navigation system can be cost efficient, compact, and optimized for targeting of the nanoparticles.

  4. Real-time 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy.

    Science.gov (United States)

    Furtado, Hugo; Steiner, Elisabeth; Stock, Markus; Georg, Dietmar; Birkfellner, Wolfgang

    2013-10-01

    Intra-fractional respiratory motion during radiotherapy leads to a larger planning target volume (PTV). Real-time tumor motion tracking by two-dimensional (2D)/3D registration using on-board kilo-voltage (kV) imaging can allow for a reduction of the PTV though motion along the imaging beam axis cannot be resolved using only one projection image. We present a retrospective patient study investigating the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. Material and methods. We used data from 10 patients suffering from non-small cell lung cancer (NSCLC) undergoing stereotactic body radiation therapy (SBRT) lung treatment. For each patient we acquired a planning computed tomography (CT) and sequences of kV and MV images during treatment. We compared the accuracy of motion tracking in six degrees-of-freedom (DOF) using the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. Results. Motion along cranial-caudal direction could accurately be extracted when using only the kV sequence but in AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 2.9 mm to 1.5 mm and the motion along AP was successfully extracted. Mean registration time was 188 ms. Conclusion. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in six DOF and is suitable for real-time tumor motion tracking with a conventional LINAC.
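Why a kV-MV pair resolves motion along the imaging axis can be seen from simple ray geometry: with two non-parallel projections, the 3D target position is the least-squares intersection of the two back-projected rays. The sketch below assumes idealized point sources and known ray directions, which glosses over the registration machinery the paper actually uses:

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Least-squares 3D point closest to two rays (source p, direction d),
    e.g. the back-projections of a target seen on paired kV and MV images."""
    A, b = np.zeros((3, 3)), np.zeros(3)
    for p, d in ((p1, d1), (p2, d2)):
        u = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(u, u)   # projector orthogonal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)
```

With a single AP kV image, displacement along the beam axis lies in the null space of that ray's projector, so A is rank-deficient and the AP component is unobservable; the second, non-parallel MV ray restores full rank, matching the reported error reduction in the AP direction.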

  5. Interactive animated displays of man-controlled and autonomous robots

    International Nuclear Information System (INIS)

    Crane, C.D. III; Duffy, J.

    1986-01-01

    An interactive computer graphics program has been developed which allows an operator to more readily control robot motions in two distinct modes; viz., man-controlled and autonomous. In man-controlled mode, the robot is guided by a joystick or similar device. As the robot moves, actual joint angle information is measured and supplied to a graphics system which accurately duplicates the robot motion. Obstacles are placed in the actual and animated workspace and the operator is warned of imminent collisions by sight and sound via the graphics system. Operation of the system in man-controlled mode is shown. In autonomous mode, a collision-free path between specified points is obtained by previewing robot motions on the graphics system. Once a satisfactory path is selected, the path characteristics are transmitted to the actual robot and the motion is executed. The telepresence system developed at the University of Florida has been successful in demonstrating that the concept of controlling a robot manipulator with the aid of an interactive computer graphics system is feasible and practical. The clarity of images coupled with real-time interaction and real-time determination of imminent collision with obstacles has resulted in improved operator performance. Furthermore, the ability for an operator to preview and supervise autonomous operations is a significant attribute when operating in a hazardous environment

  6. Automated Real-Time Needle-Guide Tracking for Fast 3-T MR-guided Transrectal Prostate Biopsy: A Feasibility Study

    NARCIS (Netherlands)

    Zamecnik, P.; Schouten, M.G.; Krafft, A.J.; Maier, F.; Schlemmer, H.-P.; Barentsz, J.O.; Bock, M. de; Futterer, J.J.

    2014-01-01

    Purpose To assess the feasibility of automatic needle-guide tracking by using a real-time phase-only cross correlation (POCC) algorithm-based sequence for transrectal 3-T in-bore magnetic resonance (MR)-guided prostate biopsies. Materials and Methods This study was approved by the ethics review

  7. Modeling of the bony pelvis from MRI using a multi-atlas AE-SDM for registration and tracking in image-guided robotic prostatectomy.

    Science.gov (United States)

    Gao, Qinquan; Chang, Ping-Lin; Rueckert, Daniel; Ali, S Mohammed; Cohen, Daniel; Pratt, Philip; Mayer, Erik; Yang, Guang-Zhong; Darzi, Ara; Edwards, Philip Eddie

    2013-03-01

    A fundamental challenge in the development of image-guided surgical systems is alignment of the preoperative model to the operative view of the patient. This is achieved by finding corresponding structures in the preoperative scans and on the live surgical scene. In robot-assisted laparoscopic prostatectomy (RALP), the most readily visible structure is the bone of the pelvic rim. Magnetic resonance imaging (MRI) is the modality of choice for prostate cancer detection and staging, but extraction of bone from MRI is difficult and very time consuming to achieve manually. We present a robust and fully automated multi-atlas pipeline for bony pelvis segmentation from MRI, using an MRI appearance embedding statistical deformation model (AE-SDM). The statistical deformation model is built using the node positions of deformations obtained from hierarchical registrations of full pelvis CT images. For datasets with corresponding CT and MRI images, we can transform the MRI into CT SDM space. MRI appearance can then be used to improve the combined MRI/CT atlas to MRI registration using SDM constraints. We can use this model to segment the bony pelvis in a new MRI image where there is no CT available. A multi-atlas segmentation algorithm is introduced which incorporates MRI AE-SDM guidance. We evaluated the method on 19 subjects with corresponding MRI and manually segmented CT datasets by performing a leave-one-out study. Several metrics are used to quantify the overlap between the automatic and manual segmentations. Compared to the manual gold standard segmentations, our robust segmentation method produced an average surface distance of 1.24 ± 0.27 mm, which outperforms state-of-the-art algorithms for MRI bony pelvis segmentation. We also show that the resulting surface can be tracked in the endoscopic view in near real time using dense visual tracking methods. Results are presented on a simulation and a real clinical RALP case. Tracking is accurate to 0.13 mm over 700 frames.

  8. Development of a Pneumatic Robot for MRI-guided Transperineal Prostate Biopsy and Brachytherapy: New Approaches

    OpenAIRE

    Song, Sang-Eun; Cho, Nathan B.; Fischer, Gregory; Hata, Nobuhito; Tempany, Clare; Fichtinger, Gabor; Iordachita, Iulian

    2010-01-01

    Magnetic Resonance Imaging (MRI) guided prostate biopsy and brachytherapy have been introduced to enhance cancer detection and treatment. For accurate needle positioning, a number of robotic assistants have been developed. However, problems exist due to the strong magnetic field and limited workspace. Pneumatically actuated robots have shown minimal interference with the MRI environment, but the confined workspace limits optimal robot design and thus controllability is often poor....

  9. Adaptive control of two-wheeled mobile balance robot capable to adapt different surfaces using a novel artificial neural network–based real-time switching dynamic controller

    Directory of Open Access Journals (Sweden)

    Ali Unluturk

    2017-03-01

    Full Text Available In this article, a novel real-time artificial neural network–based adaptive switching dynamic controller is developed and practically implemented. It is used for real-time control of a two-wheeled balance robot that can balance itself in an upright position on different surfaces. To examine the efficiency of the proposed controller, a two-wheeled mobile balance robot was designed and an experimental test platform was built for the balance problem on different surfaces. In the adaptive controller algorithm, which is capable of adapting to different surfaces, the mean absolute target angle deviation error, the mean absolute target displacement deviation error, and the mean absolute controller output are employed for surface estimation by an artificial neural network. In the two-wheeled mobile balance robot system, the robot tilt angle is estimated via a Kalman filter from accelerometer and gyroscope sensor signals. Furthermore, a visual robot control interface was developed in a C++ software development environment so that the controller parameters can be changed as desired; the robot balance angle, linear displacement, and controller output can be observed online on a personal computer. According to the real-time experimental results, the proposed controller gives more effective results than classic ones.
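    The tilt-angle estimation step described above, fusing a gyroscope rate with an accelerometer-derived angle via a Kalman filter, can be sketched as a minimal one-dimensional filter. The sample time and noise variances below are illustrative assumptions, not values from the article:

```python
# Minimal 1-D Kalman filter fusing gyro rate (prediction) with
# accelerometer tilt angle (measurement), as commonly used on
# two-wheeled balance robots. All noise parameters are assumed.

class TiltKalman:
    def __init__(self, dt=0.01, q=0.001, r=0.03):
        self.dt = dt      # sample period [s]
        self.q = q        # process noise variance (gyro drift)
        self.r = r        # measurement noise variance (accel)
        self.angle = 0.0  # state estimate [rad]
        self.p = 1.0      # estimate variance

    def update(self, gyro_rate, accel_angle):
        # Predict: integrate the gyro rate.
        self.angle += gyro_rate * self.dt
        self.p += self.q
        # Correct: blend in the accelerometer angle.
        k = self.p / (self.p + self.r)          # Kalman gain
        self.angle += k * (accel_angle - self.angle)
        self.p *= (1.0 - k)
        return self.angle

kf = TiltKalman()
# Constant true tilt of 0.1 rad, zero gyro rate, noiseless accel:
for _ in range(200):
    est = kf.update(gyro_rate=0.0, accel_angle=0.1)
print(round(est, 3))  # converges toward 0.1
```

    In practice the accelerometer angle is itself computed from atan2 of the gravity components and is noisy during acceleration, which is why the gyro prediction step matters.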

  10. Low-level processing for real-time image analysis

    Science.gov (United States)

    Eskenazi, R.; Wilf, J. M.

    1979-01-01

    A system that detects object outlines in television images in real time is described. A high-speed pipeline processor transforms the raw image into an edge map, and a microprocessor integrated into the system clusters the edges and represents them as chain codes. Image statistics, useful for higher-level tasks such as pattern recognition, are computed by the microprocessor. Peak intensity and peak gradient values are extracted within a programmable window and are used for iris and focus control. The algorithms implemented in hardware and the pipeline processor architecture are described. The strategy for partitioning functions in the pipeline was chosen to make the implementation modular. The microprocessor interface allows flexible and adaptive control of the feature extraction process. The software algorithms for clustering edge segments, creating chain codes, and computing image statistics are also discussed. A strategy for real-time image analysis that uses this system is given.
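    The chain-code representation mentioned above encodes a contour as a sequence of directions between successive edge pixels. A minimal sketch of 8-connected Freeman chain coding (the abstract does not specify the exact encoding the system used):

```python
# 8-connected Freeman chain code: map each step between successive
# contour pixels to a direction index 0..7 (0 = east, counted
# counter-clockwise).
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(contour):
    """contour: ordered list of (x, y) edge pixels, 8-connected."""
    code = []
    for (x0, y0), (x1, y1) in zip(contour, contour[1:]):
        code.append(DIRS[(x1 - x0, y1 - y0)])
    return code

# Closed unit-square contour, traversed counter-clockwise:
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]
print(chain_code(square))  # [0, 2, 4, 6]
```

    Chain codes are compact and make perimeter and shape statistics cheap to compute, which suits a microprocessor working downstream of a hardware edge detector.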

  11. Improved operative efficiency using a real-time MRI-guided stereotactic platform for laser amygdalohippocampotomy.

    Science.gov (United States)

    Ho, Allen L; Sussman, Eric S; Pendharkar, Arjun V; Le, Scheherazade; Mantovani, Alessandra; Keebaugh, Alaine C; Drover, David R; Grant, Gerald A; Wintermark, Max; Halpern, Casey H

    2018-04-01

    OBJECTIVE MR-guided laser interstitial thermal therapy (MRgLITT) is a minimally invasive method for thermal destruction of benign or malignant tissue that has been used for selective amygdalohippocampal ablation for the treatment of temporal lobe epilepsy. The authors report their initial experience adopting a real-time MRI-guided stereotactic platform that allows for completion of the entire procedure in the MRI suite. METHODS Between October 2014 and May 2016, 17 patients with mesial temporal sclerosis were selected by a multidisciplinary epilepsy board to undergo a selective amygdalohippocampal ablation for temporal lobe epilepsy using MRgLITT. The first 9 patients underwent standard laser ablation in 2 phases (operating room [OR] and MRI suite), whereas the next 8 patients underwent laser ablation entirely in the MRI suite with the ClearPoint platform. A checklist specific to the real-time MRI-guided laser amygdalohippocampal ablation was developed and used for each case. For both cohorts, clinical and operative information, including average case times and accuracy data, was collected and analyzed. RESULTS There was a learning curve associated with using this real-time MRI-guided system. However, operative times decreased in a linear fashion, as did total anesthesia time. In fact, the total mean patient procedure time was less in the MRI cohort (362.8 ± 86.6 minutes) than in the OR cohort (456.9 ± 80.7 minutes). The mean anesthesia time was significantly shorter in the MRI cohort (327.2 ± 79.9 minutes) than in the OR cohort (435.8 ± 78.4 minutes, p = 0.02). CONCLUSIONS The real-time MRI platform for MRgLITT can be adopted in an expedient manner. Completion of MRgLITT entirely in the MRI suite may lead to significant advantages in procedural times.

  12. Recent trends in robot-assisted therapy environments to improve real-life functional performance after stroke.

    Science.gov (United States)

    Johnson, Michelle J

    2006-12-18

    Upper and lower limb robotic tools for neuro-rehabilitation are effective in reducing motor impairment, but they are limited in their ability to improve real-world function. There is a need to improve functional outcomes after robot-assisted therapy. Improvements in the effectiveness of these environments may be achieved by incorporating into their design and control strategies important elements key to inducing motor learning and cerebral plasticity, such as mass practice, feedback, task engagement, and complex problem solving. This special issue presents nine articles. Novel strategies covered in this issue encourage more natural movements through the use of virtual reality and real objects, and faster motor learning through the use of error feedback to guide acquisition of natural movements that are salient to real activities. In addition, several articles describe novel systems and techniques that use custom and commercial games combined with new low-cost robot systems and a humanoid robot to embody the "supervisory presence" of the therapy, as possible solutions to exercise compliance in under-supervised environments such as the home.

  13. Recent trends in robot-assisted therapy environments to improve real-life functional performance after stroke

    Directory of Open Access Journals (Sweden)

    Johnson Michelle J

    2006-12-01

    Full Text Available Abstract Upper and lower limb robotic tools for neuro-rehabilitation are effective in reducing motor impairment, but they are limited in their ability to improve real-world function. There is a need to improve functional outcomes after robot-assisted therapy. Improvements in the effectiveness of these environments may be achieved by incorporating into their design and control strategies important elements key to inducing motor learning and cerebral plasticity, such as mass practice, feedback, task engagement, and complex problem solving. This special issue presents nine articles. Novel strategies covered in this issue encourage more natural movements through the use of virtual reality and real objects, and faster motor learning through the use of error feedback to guide acquisition of natural movements that are salient to real activities. In addition, several articles describe novel systems and techniques that use custom and commercial games combined with new low-cost robot systems and a humanoid robot to embody the "supervisory presence" of the therapy, as possible solutions to exercise compliance in under-supervised environments such as the home.

  14. Real-time intensity based 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy

    Science.gov (United States)

    Furtado, H.; Steiner, E.; Stock, M.; Georg, D.; Birkfellner, W.

    2014-03-01

    Intra-fractional respiratory motion during radiotherapy is one of the main sources of uncertainty in dose application, creating the need to extend the margins of the planning target volume (PTV). Real-time tumor motion tracking by 2D/3D registration using on-board kilo-voltage (kV) imaging can lead to a reduction of the PTV. One limitation of this technique when using one projection image is the inability to resolve motion along the imaging beam axis. We present a retrospective patient study investigating the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. We used data from eighteen patients suffering from non-small cell lung cancer undergoing regular treatment at our center. For each patient we acquired a planning CT and sequences of kV and MV images during treatment. Our evaluation consisted of comparing the accuracy of motion tracking in 6 degrees of freedom (DOF) using the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. We use graphics processing unit rendering for real-time performance. Motion along the cranial-caudal direction could be accurately extracted when using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 3.3 mm to 1.8 mm and the motion along AP was successfully extracted. The mean registration time was 190 ± 35 ms. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in 6 DOF. Therefore, this approach is suitable for accurate, real-time tumor motion tracking with a conventional LINAC.
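    The geometric point above, that a single projection cannot resolve motion along the imaging beam axis while a second view restores it, can be illustrated with a toy parallel-beam model (the real system uses divergent beams and full 2D/3D registration; this sketch only shows why the second view helps):

```python
import numpy as np

# An AP view projects (x, y, z) onto the detector as (x, z), discarding
# the anterior-posterior coordinate y. An orthogonal lateral view keeps
# y instead, so the pair determines the full 3-D position.

def project_ap(p):       # AP view: discards anterior-posterior y
    return np.array([p[0], p[2]])

def project_lateral(p):  # lateral view: discards left-right x
    return np.array([p[1], p[2]])

def recover_3d(ap, lat):
    """Combine the two orthogonal projections into a 3-D position."""
    return np.array([ap[0], lat[0], ap[1]])

tumor = np.array([1.0, -3.0, 2.5])
moved = tumor + np.array([0.0, 5.0, 0.0])  # pure AP-axis motion
# Invisible in the AP view alone:
print(np.allclose(project_ap(tumor), project_ap(moved)))  # True
# But recoverable from the kV/MV-style view pair:
print(recover_3d(project_ap(moved), project_lateral(moved)))
```

    The same principle underlies the paper's result: errors along the AP axis collapse once the orthogonal MV view is added to the registration.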

  15. Experimental ultrasound system for real-time synthetic imaging

    DEFF Research Database (Denmark)

    Jensen, Jørgen Arendt; Holm, Ole; Jensen, Lars Joost

    1999-01-01

    Digital signal processing is being employed more and more in modern ultrasound scanners. This has made it possible to do dynamic receive focusing for each sample and implement other advanced imaging methods. The processing, however, has to be very fast and cost-effective at the same time. Dedicated chips are used in order to do real-time processing. This often makes it difficult to implement radically different imaging strategies on one platform and makes the scanners less accessible for research purposes. Here flexibility is the prime concern, and the storage of data from all transducer elements...... The system can be used for synthetic aperture imaging, 2D and 3D B-mode and velocity imaging. The system can be used with 128 element transducers and can excite 128 channels and receive and sample data from 64 channels simultaneously at 40 MHz with 12 bits precision. Data can be processed in real time using the system's 80 signal......

  16. REAL-TIME CAMERA GUIDANCE FOR 3D SCENE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    F. Schindler

    2012-07-01

    Full Text Available We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and the 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after a few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots, as well as online flight planning of unmanned aerial vehicles.

  17. Imaging gene expression in real-time using aptamers

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Il Chung [Iowa State Univ., Ames, IA (United States)

    2011-01-01

    Signal transduction pathways are usually activated by external stimuli and are transient. The downstream changes, such as transcription of the activated genes, are also transient. Real-time detection of promoter activity is useful for understanding changes in gene expression, especially during cell differentiation and in development. A simple and reliable method for viewing gene expression in real time is not yet available. Reporter proteins such as fluorescent proteins and luciferase allow for non-invasive detection of the products of gene expression in living cells. However, current reporter systems do not provide for real-time imaging of promoter activity in living cells, because of the long time period after transcription required for fluorescent protein synthesis and maturation. We have developed an RNA reporter system for imaging in real time to detect changes in promoter activity as they occur. The RNA reporter uses strings of RNA aptamers that constitute IMAGEtags (Intracellular MultiAptamer GEnetic tags), which can be expressed from a promoter of choice. The tobramycin, neomycin and PDC RNA aptamers have been utilized for this system and expressed in yeast from the GAL1 promoter. The IMAGEtag RNA kinetics were quantified by RT-qPCR. In yeast precultured in raffinose-containing media, the GAL1 promoter responded faster than in yeast precultured in glucose-containing media. IMAGEtag RNA has a relatively short half-life (5.5 min) in yeast. For imaging, the yeast cells are incubated with their ligands, which are labeled with fluorescent dyes. To increase the signal-to-noise ratio, ligands have been separately conjugated with the FRET (Förster resonance energy transfer) pair Cy3 and Cy5. With these constructs, the transcribed aptamers can be imaged after activation of the promoter by galactose. FRET was confirmed with three different approaches: sensitized emission, acceptor photobleaching and donor lifetime by FLIM (fluorescence lifetime imaging).

  18. Imaging gene expression in real-time using aptamers

    Energy Technology Data Exchange (ETDEWEB)

    Shin, Ilchung [Iowa State Univ., Ames, IA (United States)

    2012-01-01

    Signal transduction pathways are usually activated by external stimuli and are transient. The downstream changes, such as transcription of the activated genes, are also transient. Real-time detection of promoter activity is useful for understanding changes in gene expression, especially during cell differentiation and in development. A simple and reliable method for viewing gene expression in real time is not yet available. Reporter proteins such as fluorescent proteins and luciferase allow for non-invasive detection of the products of gene expression in living cells. However, current reporter systems do not provide for real-time imaging of promoter activity in living cells, because of the long time period after transcription required for fluorescent protein synthesis and maturation. We have developed an RNA reporter system for imaging in real time to detect changes in promoter activity as they occur. The RNA reporter uses strings of RNA aptamers that constitute IMAGEtags (Intracellular MultiAptamer GEnetic tags), which can be expressed from a promoter of choice. The tobramycin, neomycin and PDC RNA aptamers have been utilized for this system and expressed in yeast from the GAL1 promoter. The IMAGEtag RNA kinetics were quantified by RT-qPCR. In yeast precultured in raffinose-containing media, the GAL1 promoter responded faster than in yeast precultured in glucose-containing media. IMAGEtag RNA has a relatively short half-life (5.5 min) in yeast. For imaging, the yeast cells are incubated with their ligands, which are labeled with fluorescent dyes. To increase the signal-to-noise ratio, ligands have been separately conjugated with the FRET (Förster resonance energy transfer) pair Cy3 and Cy5. With these constructs, the transcribed aptamers can be imaged after activation of the promoter by galactose. FRET was confirmed with three different approaches: sensitized emission, acceptor photobleaching and donor lifetime by FLIM (fluorescence lifetime imaging).

  19. Visual Control of Robots Using Range Images

    Directory of Open Access Journals (Sweden)

    Fernando Torres

    2010-08-01

    Full Text Available In recent years, 3D-vision systems based on the time-of-flight (ToF) principle have gained importance as a means of obtaining 3D information from the workspace. In this paper, an analysis of the use of 3D ToF cameras to guide a robot arm is performed. To do so, an adaptive method for simultaneous visual servo control and camera calibration is presented. Using this method, a robot arm is guided by range information obtained from a ToF camera. Furthermore, the self-calibration method obtains the adequate integration time to be used by the range camera in order to precisely determine the depth information.

  20. Strategies of statistical windows in PET image reconstruction to improve the user’s real time experience

    Science.gov (United States)

    Moliner, L.; Correcher, C.; Gimenez-Alventosa, V.; Ilisie, V.; Alvarez, J.; Sanchez, S.; Rodríguez-Alvarez, M. J.

    2017-11-01

    Nowadays, with the increased computational power of modern computers together with state-of-the-art reconstruction algorithms, it is possible to obtain Positron Emission Tomography (PET) images in practically real time. These facts open the door to new applications such as tracking radiopharmaceuticals inside the body or using PET for image-guided procedures, such as biopsy interventions, among others. This work is a proof of concept that aims to improve the user experience with real-time PET images. Fixed, incremental, overlapping, sliding and hybrid windows are the different statistical combinations of data blocks used to generate intermediate images in order to follow the path of the activity in the Field Of View (FOV). To evaluate these different combinations, a point source was placed in a dedicated breast PET device and moved along the FOV. These acquisitions were reconstructed according to the different statistical windows; the sliding and hybrid windows resulted in the smoothest transition of positions across the image reconstructions.
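    The windowing strategies above select which acquired data blocks feed each intermediate reconstruction. A minimal sketch of the block-selection logic for three of the schemes (block counts and window width are illustrative assumptions; the overlapping and hybrid variants are omitted):

```python
# Sketch of grouping list-mode data blocks into statistical windows
# for intermediate image reconstructions. Window width is an assumed
# illustrative parameter, not a value from the paper.

def window_blocks(n_blocks, mode, width=4):
    """Return, for each reconstruction step, the block indices it uses."""
    steps = []
    for t in range(n_blocks):
        if mode == "fixed":          # disjoint groups of `width` blocks
            start = (t // width) * width
            steps.append(list(range(start, min(start + width, n_blocks))))
        elif mode == "incremental":  # all data acquired so far
            steps.append(list(range(0, t + 1)))
        elif mode == "sliding":      # only the most recent `width` blocks
            steps.append(list(range(max(0, t - width + 1), t + 1)))
        else:
            raise ValueError(mode)
    return steps

print(window_blocks(6, "sliding", width=3)[-1])  # [3, 4, 5]
```

    A sliding window trades statistical quality (fewer counts per image) for temporal responsiveness, which is why it follows a moving source more smoothly than a fixed window.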

  1. In vivo reproducibility of robotic probe placement for an integrated US-CT image-guided radiation therapy system

    Science.gov (United States)

    Lediju Bell, Muyinatu A.; Sen, H. Tutkun; Iordachita, Iulian; Kazanzides, Peter; Wong, John

    2014-03-01

    Radiation therapy is used to treat cancer by delivering high-dose radiation to a pre-defined target volume. Ultrasound (US) has the potential to provide real-time, image-guidance of radiation therapy to identify when a target moves outside of the treatment volume (e.g. due to breathing), but the associated probe-induced tissue deformation causes local anatomical deviations from the treatment plan. If the US probe is placed to achieve similar tissue deformations in the CT images required for treatment planning, its presence causes streak artifacts that will interfere with treatment planning calculations. To overcome these challenges, we propose robot-assisted placement of a real ultrasound probe, followed by probe removal and replacement with a geometrically-identical, CT-compatible model probe. This work is the first to investigate in vivo deformation reproducibility with the proposed approach. A dog's prostate, liver, and pancreas were each implanted with three 2.38-mm spherical metallic markers, and the US probe was placed to visualize the implanted markers in each organ. The real and model probes were automatically removed and returned to the same position (i.e. position control), and CT images were acquired with each probe placement. The model probe was also removed and returned with the same normal force measured with the real US probe (i.e. force control). Marker positions in CT images were analyzed to determine reproducibility, and a corollary reproducibility study was performed on ex vivo tissue. In vivo results indicate that tissue deformations with the real probe were repeatable under position control for the prostate, liver, and pancreas, with median 3D reproducibility of 0.3 mm, 0.3 mm, and 1.6 mm, respectively, compared to 0.6 mm for the ex vivo tissue. For the prostate, the mean 3D tissue displacement errors between the real and model probes were 0.2 mm under position control and 0.6 mm under force control, which are both within acceptable

  2. Sensor based real-time control of robots

    DEFF Research Database (Denmark)

    Andersen, Thomas Timm

    As robots are becoming more and more widespread in manufacturing, the desire and need for more advanced robotic solutions are increasingly expressed. This is especially the case in Denmark, where products with natural variances, like agricultural products, take up a large share of the produced goods...... For such production lines, it is often not possible to use primitive preprogrammed industrial robots to handle the otherwise repetitive tasks due to the uniqueness of each product. To handle such products it is necessary to use sensors to determine the size, shape, and position of the product before a proper...... in the sensor-to-actuation delays in the robot. To that end, a method for measuring the actuation and response delay of an industrial robot manipulator, relative to the joint configuration of the robot, is presented. It is also shown how modern machine learning algorithms can be trained to build model based......

  3. Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jung Uk [Samsung Electroics, Suwon (Korea, Republic of); Sun, Ju Young; Won, Mooncheol [Chungnam Nat' l Univ., Daejeon (Korea, Republic of)

    2013-12-15

    In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.
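    The abstract's distance and angle computation from the size and location of the detected head-shoulder region can be sketched with a simple pinhole-camera model. The focal length, principal point, and assumed physical shoulder width below are illustrative, not values from the paper:

```python
import math

# Pinhole model: an object of known physical width W appearing w pixels
# wide at focal length f (in pixels) lies at distance Z = f * W / w.
# Assumed calibration values (not from the paper):
FOCAL_PX = 800.0    # focal length in pixels
SHOULDER_W = 0.45   # assumed head-shoulder width in meters
IMG_CX = 320.0      # principal point x (image center)

def person_position(box_x, box_w):
    """box_x: left edge of the detection, box_w: width, both in pixels."""
    distance = FOCAL_PX * SHOULDER_W / box_w
    cx = box_x + box_w / 2.0                   # detection center
    angle = math.atan2(cx - IMG_CX, FOCAL_PX)  # bearing off the optical axis
    return distance, math.degrees(angle)

d, a = person_position(box_x=280, box_w=90)
print(round(d, 2), round(a, 1))  # 4.0 0.4
```

    The HOG/SVM detector itself (e.g. OpenCV's default people detector) supplies the bounding box; the geometry above turns it into the relative range and bearing compared against the laser scanner in the paper.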

  4. Real-Time Algorithm for Relative Position Estimation Between Person and Robot Using a Monocular Camera

    International Nuclear Information System (INIS)

    Lee, Jung Uk; Sun, Ju Young; Won, Mooncheol

    2013-01-01

    In this paper, we propose a real-time algorithm for estimating the relative position of a person with respect to a robot (camera) using a monocular camera. The algorithm detects the head and shoulder regions of a person using HOG (Histogram of Oriented Gradient) feature vectors and an SVM (Support Vector Machine) classifier. The size and location of the detected area are used for calculating the relative distance and angle between the person and the camera on a robot. To increase the speed of the algorithm, we use a GPU and NVIDIA's CUDA library; the resulting algorithm speed is ∼ 15 Hz. The accuracy of the algorithm is compared with the output of a SICK laser scanner.

  5. Contribution to automatic image recognition applied to robot technology

    International Nuclear Information System (INIS)

    Juvin, Didier

    1983-01-01

    This paper describes a method for the analysis and interpretation of images of objects located in a planar scene that forms the environment of a robot. The first part covers the recovery of the contours of objects present in the image, and discusses a novel contour-following technique based on the line arborescence concept in combination with a 'cost function' giving a quantitative assessment of contour quality. We present heuristics for moderate-cost, minimum-time arborescence coverage, which is equivalent to following probable contour lines in the image. A contour segmentation technique, invariant under translation and rotation, is presented next. The second part describes a recognition method based on the above invariant encoding: the algorithm performs a preliminary screening based on coarse data derived from segmentation, followed by a comparison of forms with probable identity through application of a distance specified in terms of the invariant encoding. The last part covers the outcome of the above investigations, which have found an industrial application in the vision system of a range of robots. The system is implemented on a 16-bit microprocessor and operates in real time. (author) [fr

  6. An active robot vision system for real-time 3-D structure recovery

    Energy Technology Data Exchange (ETDEWEB)

    Juvin, D. [CEA Centre d`Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. d`Electronique et d`Instrumentation Nucleaire; Boukir, S.; Chaumette, F.; Bouthemy, P. [Rennes-1 Univ., 35 (France)

    1993-10-01

    This paper presents an active approach to the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is realized by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders. Therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. The method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board, and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.
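    The image-based control laws mentioned above are, in classical visual servoing, velocity commands of the form v = -λ L⁺ e, where e is the image feature error and L the interaction matrix. A minimal sketch for point features (generic image-based visual servoing, not the authors' exact cylinder-limb formulation):

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(points, targets, depths, lam=0.5):
    """Camera velocity (vx, vy, vz, wx, wy, wz) driving points to targets."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(targets)).ravel()
    return -lam * np.linalg.pinv(L) @ e

# One point at (0.1, 0.0), target at the image center, depth 1 m:
v = ibvs_velocity([(0.1, 0.0)], [(0.0, 0.0)], [1.0])
print(v.shape)  # (6,)
```

    Iterating this command makes the feature error decay exponentially with rate λ, which is what lets the camera motion itself be chosen to aid structure recovery.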

  7. An active robot vision system for real-time 3-D structure recovery

    International Nuclear Information System (INIS)

    Juvin, D.

    1993-01-01

    This paper presents an active approach to the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is realized by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders. Therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. The method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board, and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.

  8. Vision-based real-time position control of a semi-automated system for robot-assisted joint fracture surgery.

    Science.gov (United States)

    Dagnino, Giulio; Georgilas, Ioannis; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja

    2016-03-01

    Joint fracture surgery quality can be improved by a robotic system with high-accuracy and high-repeatability fracture fragment manipulation. A new real-time vision-based system for fragment manipulation during robot-assisted fracture surgery was developed and tested. The control strategy merges fast open-loop control with vision-based control. This two-phase process is designed to eliminate the open-loop positioning errors by closing the control loop using visual feedback provided by an optical tracking system. The accuracy of the control system was evaluated in robot positioning trials, and fracture reduction accuracy was tested in trials on an ex vivo porcine model. The system achieved high fracture reduction reliability with a reduction accuracy of 0.09 mm (translations) and of [Formula: see text] (rotations), maximum observed errors on the order of 0.12 mm (translations) and of [Formula: see text] (rotations), and a reduction repeatability of 0.02 mm and [Formula: see text]. The proposed vision-based system was shown to be effective and suitable for real joint fracture surgical procedures, potentially improving their quality.
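    The two-phase strategy described above, a fast open-loop move followed by closing the loop on optical-tracker feedback, can be sketched as a simple iterative correction loop. The gain, tolerance, and the simulated 2% actuation error below are illustrative assumptions, not values from the paper:

```python
# Sketch of two-phase positioning: an open-loop move gets close to the
# target, then (simulated) visual feedback iteratively removes the
# residual error. One scalar axis is used for simplicity.

def move_robot(position, command):
    """Simulated actuation with a small systematic scale error."""
    return position + 0.98 * command   # robot undershoots by 2%

def reduce_fragment(start, target, tol=0.01, gain=1.0, max_iter=50):
    # Phase 1: fast open-loop move toward the planned target.
    pos = move_robot(start, target - start)
    # Phase 2: close the loop on tracker-measured residual error.
    for _ in range(max_iter):
        error = target - pos           # optical-tracker feedback
        if abs(error) < tol:
            break
        pos = move_robot(pos, gain * error)
    return pos

final = reduce_fragment(start=0.0, target=10.0)
print(abs(10.0 - final) < 0.01)  # True
```

    The point of the structure is that systematic open-loop errors (calibration, compliance) never accumulate: each vision-based correction shrinks the residual regardless of their source.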

  9. MO-AB-BRA-02: A Novel Scatter Imaging Modality for Real-Time Image Guidance During Lung SBRT

    International Nuclear Information System (INIS)

    Redler, G; Bernard, D; Templeton, A; Chu, J; Nair, C Kumaran; Turian, J

    2015-01-01

    Purpose: A novel scatter imaging modality is developed and its feasibility for image-guided radiation therapy (IGRT) during stereotactic body radiation therapy (SBRT) for lung cancer patients is assessed using analytic and Monte Carlo models as well as experimental testing. Methods: During treatment, incident radiation interacts and scatters from within the patient. The presented methodology forms an image of patient anatomy from the scattered radiation for real-time localization of the treatment target. A radiographic flat panel-based pinhole camera provides spatial information regarding the origin of detected scattered radiation. An analytical model is developed, which provides a mathematical formalism for describing the scatter imaging system. Experimental scatter images are acquired by irradiating an object using a Varian TrueBeam accelerator. The differentiation between tissue types is investigated by imaging simple objects of known compositions (water, lung, and cortical bone equivalent). A lung tumor phantom, simulating materials and geometry encountered during lung SBRT treatments, is fabricated and imaged to investigate image quality for various quantities of delivered radiation. Monte Carlo N-Particle (MCNP) code is used for validation and testing by simulating scatter image formation using the experimental pinhole camera setup. Results: Analytical calculations, MCNP simulations, and experimental results when imaging the water, lung, and cortical bone equivalent objects show close agreement, thus validating the proposed models and demonstrating that scatter imaging differentiates these materials well. Lung tumor phantom images have sufficient contrast-to-noise ratio (CNR) to clearly distinguish tumor from surrounding lung tissue. CNR=4.1 and CNR=29.1 for 10MU and 5000MU images (equivalent to 0.5 and 250 second images), respectively. Conclusion: Lung SBRT provides favorable treatment outcomes, but depends on accurate target localization. A comprehensive
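    The contrast-to-noise ratio quoted above is conventionally computed as the mean-signal difference between the tumor and background regions divided by the background noise. A minimal sketch with synthetic data (the exact ROI definitions used by the authors are not given in the abstract):

```python
import numpy as np

def cnr(tumor_roi, background_roi):
    """Contrast-to-noise ratio between two regions of interest."""
    tumor = np.asarray(tumor_roi, dtype=float)
    bg = np.asarray(background_roi, dtype=float)
    return abs(tumor.mean() - bg.mean()) / bg.std()

rng = np.random.default_rng(0)
bg = rng.normal(100.0, 5.0, size=1000)      # background: mean 100, sd 5
tumor = rng.normal(140.0, 5.0, size=1000)   # tumor: mean 140, sd 5
print(cnr(tumor, bg))  # close to the theoretical (140 - 100) / 5 = 8
```

    The dose dependence reported in the abstract (CNR = 4.1 at 10 MU vs 29.1 at 5000 MU) follows from counting statistics: more monitor units mean more detected scatter and lower relative background noise.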

  10. MO-AB-BRA-02: A Novel Scatter Imaging Modality for Real-Time Image Guidance During Lung SBRT

    Energy Technology Data Exchange (ETDEWEB)

    Redler, G; Bernard, D; Templeton, A; Chu, J [Rush University Medical Center, Chicago, IL (United States); Nair, C Kumaran [University of Chicago, Chicago, IL (United States); Turian, J [Rush University Medical Center, Chicago, IL (United States); Rush Radiosurgery LLC, Chicago, IL (United States)

    2015-06-15

    Purpose: A novel scatter imaging modality is developed and its feasibility for image-guided radiation therapy (IGRT) during stereotactic body radiation therapy (SBRT) for lung cancer patients is assessed using analytic and Monte Carlo models as well as experimental testing. Methods: During treatment, incident radiation interacts and scatters from within the patient. The presented methodology forms an image of patient anatomy from the scattered radiation for real-time localization of the treatment target. A radiographic flat-panel-based pinhole camera provides spatial information regarding the origin of detected scattered radiation. An analytical model is developed, which provides a mathematical formalism for describing the scatter imaging system. Experimental scatter images are acquired by irradiating an object using a Varian TrueBeam accelerator. The differentiation between tissue types is investigated by imaging simple objects of known compositions (water, lung, and cortical bone equivalent). A lung tumor phantom, simulating materials and geometry encountered during lung SBRT treatments, is fabricated and imaged to investigate image quality for various quantities of delivered radiation. Monte Carlo N-Particle (MCNP) code is used for validation and testing by simulating scatter image formation using the experimental pinhole camera setup. Results: Analytical calculations, MCNP simulations, and experimental results when imaging the water, lung, and cortical bone equivalent objects show close agreement, thus validating the proposed models and demonstrating that scatter imaging differentiates these materials well. Lung tumor phantom images have sufficient contrast-to-noise ratio (CNR) to clearly distinguish tumor from surrounding lung tissue: CNR = 4.1 and CNR = 29.1 for the 10 MU and 5000 MU images (equivalent to 0.5 s and 250 s acquisitions), respectively. Conclusion: Lung SBRT provides favorable treatment outcomes, but depends on accurate target localization. A comprehensive
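The CNR figures quoted above are simple statistics; a minimal sketch of how such a contrast-to-noise ratio is typically computed over a tumor and a background region. The mask shapes and pixel values below are illustrative, not the paper's phantom data:

```python
import numpy as np

def cnr(image, tumor_mask, background_mask):
    """Contrast-to-noise ratio: |mean(tumor) - mean(background)| / std(background)."""
    tumor = image[tumor_mask]
    background = image[background_mask]
    return abs(tumor.mean() - background.mean()) / background.std()

# Synthetic example: a bright "tumor" disc on a noisy "lung" background.
rng = np.random.default_rng(0)
img = rng.normal(100.0, 5.0, size=(64, 64))           # background: mean 100, sigma 5
yy, xx = np.mgrid[:64, :64]
tumor_mask = (yy - 32) ** 2 + (xx - 32) ** 2 < 8 ** 2
img[tumor_mask] += 25.0                               # tumor contrast of ~25
value = cnr(img, tumor_mask, ~tumor_mask)             # expected near 25 / 5 = 5
```
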

  11. Volumetric real-time imaging using a CMUT ring array.

    Science.gov (United States)

    Choe, Jung Woo; Oralkan, Ömer; Nikoozadeh, Amin; Gencel, Mustafa; Stephens, Douglas N; O'Donnell, Matthew; Sahn, David J; Khuri-Yakub, Butrus T

    2012-06-01

    A ring array provides a very suitable geometry for forward-looking volumetric intracardiac and intravascular ultrasound imaging. We fabricated an annular 64-element capacitive micromachined ultrasonic transducer (CMUT) array featuring a 10-MHz operating frequency and a 1.27-mm outer radius. A custom software suite was developed to run on a PC-based imaging system for real-time imaging using this device. This paper presents simulated and experimental imaging results for the described CMUT ring array. Three different imaging methods--flash, classic phased array (CPA), and synthetic phased array (SPA)--were used in the study. For SPA imaging, two techniques to improve the image quality--Hadamard coding and aperture weighting--were also applied. The results show that SPA with Hadamard coding and aperture weighting is a good option for ring-array imaging. Compared with CPA, it achieves better image resolution and comparable signal-to-noise ratio at a much faster image acquisition rate. Using this method, a fast frame rate of up to 463 volumes per second is achievable if limited only by the ultrasound time of flight; with the described system we reconstructed three cross-sectional images in real-time at 10 frames per second, which was limited by the computation time in synthetic beamforming.
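The Hadamard coding mentioned above rests on the orthogonality of the Hadamard matrix: each transmit event fires all elements with ±1 weights (one matrix row), and decoding with the transpose recovers the per-element responses while each recording carries the full array's transmit energy. A sketch under a linear, noise-free model; the 64-element count matches the array, everything else is illustrative:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n_elements = 64
H = hadamard(n_elements)

# Unknown per-element echo responses (what classic SPA measures one element at a time).
rng = np.random.default_rng(1)
x = rng.normal(size=n_elements)

# Coded firings: each transmit uses one row of +/-1 weights across all elements.
y = H @ x

# Decoding: H @ H.T = n * I, so the element responses are recovered exactly.
x_hat = H.T @ y / n_elements
```
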

  12. Real-time object tracking system based on field-programmable gate array and convolution neural network

    Directory of Open Access Journals (Sweden)

    Congyi Lyu

    2016-12-01

    Full Text Available Vision-based object tracking has many applications in robotics, such as surveillance, navigation, and motion capture. However, existing object tracking systems still suffer from the high computational cost of their image processing algorithms. This prevents current systems from being used in many robotic applications with payload and power limitations, for example, micro air vehicles, where central processing unit- or graphics processing unit-based computers are poor choices due to their high weight and power consumption. To address this problem, this article proposes a real-time object tracking system based on a field-programmable gate array (FPGA), a convolution neural network, and visual servo technology. The time-consuming image processing algorithms, such as distortion correction, color space conversion, Sobel edge and Harris corner feature detection, and the convolution neural network, were redesigned using the programmable gates of the FPGA. Based on the FPGA-based image processing, an image-based visual servo controller was designed to drive a two-degree-of-freedom manipulator to track the target in real time. Finally, experiments on the proposed system were performed to illustrate the effectiveness of the real-time object tracking system.
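Of the listed kernels, the Sobel edge detector is the simplest to show in software form. Below is a reference implementation of the per-pixel computation that such an FPGA pipeline realizes in hardware (a pure-Python sketch, not the hardware design):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude with the 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

# A vertical step edge: the response concentrates on the boundary columns.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag = sobel_magnitude(img)
```

The nested loops mirror the sliding 3x3 window an FPGA line buffer would feed; in hardware the two multiply-accumulate trees run in parallel per clock cycle.
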

  13. Real-time soft x-ray imaging on composite materials

    International Nuclear Information System (INIS)

    Polichar, R.

    1985-01-01

    The increased use of composite materials in aircraft structures has emphasized many of the unique and difficult aspects of the inspection of such components. Ultrasound has been extensively applied to certain configurations since it is relatively sensitive to laminar discontinuities in structure. Conversely, the use of conventional x-ray examination has been severely hampered by the fact that these composite materials are virtually transparent to the x-ray energies commonly encountered in industrial radiography (25 kV and above). To produce images with contrast approaching conventional radiography, one must use x-ray beams with average energies below 10 keV, where the absorption coefficients begin to rise rapidly for these low-atomic-number materials. This new regime of soft x-rays presents a major challenge to real-time imaging components. Special screen and window technology is required if these lower energy x-rays are to be effectively detected. Moreover, conventional x-ray tubes become very inefficient at generating the required x-ray flux at potentials much below 29 kV, and the increased operating currents put significant limitations on conventional power sources. The purpose of this paper is to explore these special problems related to soft x-ray real-time imaging and to define the optimal technologies. Practical results obtained with the latest commercial and developmental instruments for real-time imaging will be shown. These instruments include recently developed imaging systems, new x-ray tubes, and various approaches to generator design. The measured results convincingly demonstrate the effectiveness and practicality of real-time soft x-ray imaging. They also indicate the major changes in technology and approach that must be taken for practical systems to be truly effective.
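The claim that contrast improves at lower beam energy follows directly from Beer-Lambert attenuation: when the attenuation coefficient is larger, a thin defect changes the transmitted intensity by a larger fraction. A numerical sketch with illustrative (not measured) attenuation coefficients for a laminate containing an air-filled delamination:

```python
import math

def transmission(mu, thickness):
    """Beer-Lambert law: fraction of incident x-ray intensity transmitted."""
    return math.exp(-mu * thickness)

def defect_contrast(mu_matrix, thickness, defect_depth):
    """Relative intensity difference between sound material and a region
    containing an air-filled delamination (attenuation ~ 0) of given depth."""
    i_sound = transmission(mu_matrix, thickness)
    i_defect = transmission(mu_matrix, thickness - defect_depth)
    return (i_defect - i_sound) / i_sound

# Illustrative linear attenuation coefficients (1/cm) for a 3 mm laminate
# with a 0.5 mm delamination: mu rises steeply as photon energy falls.
c_soft = defect_contrast(mu_matrix=5.0, thickness=0.3, defect_depth=0.05)  # soft-beam regime
c_hard = defect_contrast(mu_matrix=0.5, thickness=0.3, defect_depth=0.05)  # conventional regime
```

With these assumed coefficients the soft beam yields roughly an order of magnitude more contrast from the same defect, which is the motivation for the sub-10 keV regime.
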

  14. Human Exploration Using Real-Time Robotic Operations (HERRO)- Crew Telerobotic Control Vehicle (CTCV) Design

    Science.gov (United States)

    Oleson, Steven R.; McGuire, Melissa L.; Burke, Laura; Chato, David; Fincannon, James; Landis, Geoff; Sandifer, Carl; Warner, Joe; Williams, Glenn; Colozza, Tony; hide

    2010-01-01

    The HERRO concept allows real-time investigation of planets and small bodies by sending astronauts to orbit these targets and explore them telerobotically. Several targets have been put forward by past studies, including Mars, Venus, and near-Earth asteroids. A conceptual design study was funded by the NASA Innovation Fund to explore what the HERRO concept and its vehicles would look like and what technological challenges must be met. This design study chose Mars as the target destination. In this way the HERRO studies can define the endpoint design concepts for an all-up telerobotic exploration of the number one target of interest, Mars. This endpoint design will help planners define combined precursor telerobotic science missions and technology development flights. A suggested set of these technologies and demonstrator missions is shown in Appendix B. The HERRO concept includes a crewed telerobotics orbit vehicle as well as three Truck rovers, each supporting two teleoperated geologist robots (Rockhounds); each Truck/Rockhound set is landed using a commercially launched aeroshell landing system. Options include a sample ascent system teamed with an orbital telerobotic sample rendezvous and return spacecraft (S/C) (yet to be designed). Each Truck rover would be landed in a science location with the ability to traverse a 100 km diameter area, carrying the Rockhounds to 100 m diameter science areas for several-week science activities. The Truck is responsible not only for transporting the Rockhounds to science areas, but also for relaying telecontrol and high-resolution communications to/from the Rockhounds and for powering/heating the Rockhounds during non-science times (including night-time). The Rockhounds take the place of human geologists by providing an agile robotic platform with real-time telerobotic control from the crew telerobotics orbiter. The designs of the Truck rovers and Rockhounds will be described in other

  15. A New Navigation System of Renal Puncture for Endoscopic Combined Intrarenal Surgery: Real-time Virtual Sonography-guided Renal Access.

    Science.gov (United States)

    Hamamoto, Shuzo; Unno, Rei; Taguchi, Kazumi; Ando, Ryosuke; Hamakawa, Takashi; Naiki, Taku; Okada, Shinsuke; Inoue, Takaaki; Okada, Atsushi; Kohri, Kenjiro; Yasui, Takahiro

    2017-11-01

    To evaluate the clinical utility of a new navigation technique for percutaneous renal puncture using real-time virtual sonography (RVS) during endoscopic combined intrarenal surgery. Thirty consecutive patients who underwent endoscopic combined intrarenal surgery for renal calculi, between April 2014 and July 2015, were divided into the RVS-guided puncture (RVS; n = 15) group and the ultrasonography-guided puncture (US; n = 15) group. In the RVS group, renal puncture was repeated until precise piercing of a papilla was achieved under direct endoscopic vision, using the RVS system to synchronize the real-time US image with the preoperative computed tomography image. In the US group, renal puncture was performed under US guidance only. In both groups, 2 urologists worked simultaneously to fragment the renal calculi after inserting the miniature percutaneous tract. The mean sizes of the renal calculi in the RVS and the US group were 33.5 and 30.5 mm, respectively. A lower mean number of puncture attempts until renal access through the calyx was needed for the RVS compared with the US group (1.6 vs 3.4 times, respectively; P = .001). The RVS group had a lower mean postoperative hemoglobin decrease (0.93 vs 1.39 g/dL, respectively; P = .04), but with no between-group differences with regard to operative time, tubeless rate, and stone-free rate. None of the patients in the RVS group experienced postoperative complications of a Clavien score ≥2, with 3 patients experiencing such complications in the US group. RVS-guided renal puncture was effective, with a lower incidence of bleeding-related complications compared with US-guided puncture. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Light Robotics: an all-optical nano- and micro-toolbox

    DEFF Research Database (Denmark)

    Glückstad, Jesper; Villangca, Mark Jayson; Palima, Darwin

    2017-01-01

    Recently we proposed the concept of so-called Light Robotics, including the new and disruptive 3D-fabricated micro-tools coined Wave-guided Optical Waveguides that can be real-time optically manipulated and remote-controlled with a joystick in a volume with six degrees of freedom. Exploring the full potential of this new ‘drone-like’ light-driven micro-robotics in challenging microscopic geometries requires a versatile and real-time reconfigurable light addressing that can dynamically track a plurality of tiny micro-robots in 3D to ensure continuous optimal light coupling on the fly. Our latest...

  17. Digital image processing for real-time neutron radiography and its applications

    International Nuclear Information System (INIS)

    Fujine, Shigenori

    1989-01-01

    The present paper describes several digital image processing approaches for real-time neutron radiography (neutron television, NTV), such as image integration, adaptive smoothing, and image enhancement, which have beneficial effects on image quality, and also describes how to use these techniques in applications. Details invisible in direct NTV images can be revealed by digital image processing techniques such as image reversal, gray-level correction, gray-scale transformation, contoured display, subtraction, and pseudo-color display. For real-time applications, a contouring operation and an averaging approach can also be utilized effectively. (author)
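Image integration, the first technique listed, is frame averaging: uncorrelated noise shrinks roughly with the square root of the number of integrated frames. A sketch on synthetic data; the running-mean recursion below is one common real-time-friendly implementation, not necessarily the paper's:

```python
import numpy as np

def integrate_frames(frames):
    """Recursive (running-mean) integration of noisy video frames."""
    acc = np.zeros_like(frames[0], dtype=float)
    for k, frame in enumerate(frames, start=1):
        acc += (frame - acc) / k  # running mean after k frames
    return acc

rng = np.random.default_rng(2)
truth = np.tile(np.linspace(0.0, 1.0, 32), (32, 1))  # static scene
frames = [truth + rng.normal(0.0, 0.5, truth.shape) for _ in range(64)]

noisy_err = np.abs(frames[0] - truth).mean()
integrated_err = np.abs(integrate_frames(frames) - truth).mean()
```

With 64 frames the expected noise reduction is a factor of eight, so the integrated error falls well below the single-frame error.
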

  18. Magneto-optical system for high speed real time imaging

    Science.gov (United States)

    Baziljevich, M.; Barness, D.; Sinvani, M.; Perel, E.; Shaulov, A.; Yeshurun, Y.

    2012-08-01

    A new magneto-optical system has been developed to expand the range of high-speed real-time magneto-optical imaging. A special source for the external magnetic field has also been designed, using a pump solenoid to rapidly excite the field coil. Together with careful modifications of the cryostat to reduce eddy currents, ramping rates reaching 3000 T/s have been achieved. Using a powerful laser as the light source, a custom-designed optical assembly, and a high-speed digital camera, real-time imaging rates up to 30,000 frames per second have been demonstrated.

  19. UWGSP7: a real-time optical imaging workstation

    Science.gov (United States)

    Bush, John E.; Kim, Yongmin; Pennington, Stan D.; Alleman, Andrew P.

    1995-04-01

    With the development of UWGSP7, the University of Washington Image Computing Systems Laboratory has a real-time workstation for continuous-wave (cw) optical reflectance imaging. Recent discoveries in optical science and imaging research have suggested potential practical use of the technology as a medical imaging modality and identified the need for a machine to support these applications in real time. The UWGSP7 system was developed to provide researchers with a high-performance, versatile tool for use in optical imaging experiments, with the eventual goal of bringing the technology into clinical use. One of several major applications of cw optical reflectance imaging is tumor imaging, which uses a light-absorbing dye that preferentially sequesters in tumor tissue. This property could be used to locate tumors and to identify tumor margins intraoperatively. Cw optical reflectance imaging consists of illuminating a target with a band-limited light source and monitoring the light transmitted by or reflected from the target. While continuously illuminating the target, a control image is acquired and stored. A dye is injected into the subject and a sequence of data images is acquired and processed. The data images are aligned with the control image and then subtracted to obtain a signal representing the change in optical reflectance over time. This signal can be enhanced by digital image processing and displayed in pseudo-color. This type of emerging imaging technique requires a computer system that is versatile and adaptable. The UWGSP7 utilizes a VESA local bus PC as a host computer running the Windows NT operating system and includes ICSL-developed add-on boards for image acquisition and processing. The image acquisition board is used to digitize and format the analog signal from the input device into digital frames and to average frames into images. To accommodate different input devices, the camera interface circuitry is designed on a small mezzanine board
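The control/data subtraction step described above amounts to computing a relative change in reflectance against the pre-injection control frame. A minimal sketch with illustrative values (the real pipeline also aligns the frames first):

```python
import numpy as np

def reflectance_change(control, data, eps=1e-6):
    """Relative change in reflectance vs. the pre-injection control image."""
    return (data - control) / (control + eps)

# Control frame: uniform illumination; data frame: dye uptake darkens a patch by 20%.
control = np.full((32, 32), 200.0)
data = control.copy()
data[10:20, 10:20] *= 0.8
delta = reflectance_change(control, data)  # ~-0.2 in the dyed patch, 0 elsewhere
```
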

  20. An improved optical flow tracking technique for real-time MR-guided beam therapies in moving organs

    Science.gov (United States)

    Zachiu, C.; Papadakis, N.; Ries, M.; Moonen, C.; de Senneville, B. Denis

    2015-12-01

    Magnetic resonance (MR) guided high intensity focused ultrasound and external beam radiotherapy interventions, which we shall refer to as beam therapies/interventions, are promising techniques for the non-invasive ablation of tumours in abdominal organs. However, therapeutic energy delivery in these areas becomes challenging due to the continuous displacement of the organs with respiration. Previous studies have addressed this problem by coupling high-framerate MR-imaging with a tracking technique based on the algorithm proposed by Horn and Schunck (H and S), which was chosen due to its fast convergence rate and highly parallelisable numerical scheme. Such characteristics were shown to be indispensable for the real-time guidance of beam therapies. In its original form, however, the algorithm is sensitive to local grey-level intensity variations not attributed to motion, such as those that occur, for example, in the proximity of pulsating arteries. In this study, an improved motion estimation strategy which reduces the impact of such effects is proposed. Displacements are estimated through the minimisation of a variation of the H and S functional in which the quadratic data fidelity term is replaced with a term based on the linear L1 norm, resulting in what we have called an L2-L1 functional. The proposed method was tested in the livers and kidneys of two healthy volunteers under free-breathing conditions, on a data set comprising 3000 images equally divided between the volunteers. The results show that, compared to existing approaches, our method demonstrates greater robustness to local grey-level intensity variations introduced by arterial pulsations. Additionally, the computational time required by our implementation makes it compatible with the workflow of real-time MR-guided beam interventions. To the best of our knowledge this study was the first to analyse the behaviour of an L1-based optical flow functional in an applicative context: real-time MR
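For reference, the classic quadratic-data-term Horn and Schunck scheme that the paper's L2-L1 functional modifies can be sketched as a dense fixed-point iteration. This is a minimal didactic version on synthetic data; the paper's L1 data term and real-time implementation are not reproduced here:

```python
import numpy as np

def horn_schunck(im1, im2, alpha=0.1, n_iter=200):
    """Classic Horn-Schunck optical flow (quadratic data term)."""
    Ix = np.gradient(im1, axis=1)
    Iy = np.gradient(im1, axis=0)
    It = im2 - im1
    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        # 4-neighbour flow averages with edge replication.
        pu = np.pad(u, 1, mode="edge")
        pv = np.pad(v, 1, mode="edge")
        u_bar = (pu[:-2, 1:-1] + pu[2:, 1:-1] + pu[1:-1, :-2] + pu[1:-1, 2:]) / 4.0
        v_bar = (pv[:-2, 1:-1] + pv[2:, 1:-1] + pv[1:-1, :-2] + pv[1:-1, 2:]) / 4.0
        t = (Ix * u_bar + Iy * v_bar + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_bar - Ix * t
        v = v_bar - Iy * t
    return u, v

# Synthetic test: a smooth horizontal ramp shifted one pixel to the right.
ramp = np.tile(np.linspace(0.0, 10.0, 64), (64, 1))
frame2 = np.empty_like(ramp)
frame2[:, 1:] = ramp[:, :-1]
frame2[:, 0] = ramp[:, 0]
u, v = horn_schunck(ramp, frame2)  # horizontal flow should converge near +1 pixel
```
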

  1. Multi-GPU based acceleration of a list-mode DRAMA toward real-time OpenPET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kinouchi, Shoko [Chiba Univ. (Japan); National Institute of Radiological Sciences, Chiba (Japan); Yamaya, Taiga; Yoshida, Eiji; Tashima, Hideaki [National Institute of Radiological Sciences, Chiba (Japan); Kudo, Hiroyuki [Tsukuba Univ., Ibaraki (Japan); Suga, Mikio [Chiba Univ. (Japan)

    2011-07-01

    OpenPET, which has a physical gap between two detector rings, is our new PET geometry. In order to realize future radiation therapy guided by OpenPET, real-time imaging is required. We therefore developed a list-mode image reconstruction method using general-purpose graphics processing units (GPUs). For GPU implementation, the efficiency of acceleration depends on an implementation method that avoids conditional statements. Therefore, in our previous study, we developed a new system model suited to GPU implementation. In this paper, we implemented our image reconstruction method using 4 GPUs to obtain further acceleration. We applied the developed reconstruction method to a small OpenPET prototype. The total iteration time using 4 GPUs was 3.4 times faster than with a single GPU. Compared to a single CPU, we achieved a reconstruction-time speed-up of 142 times using 4 GPUs. (orig.)

  2. Needle-tissue interactive mechanism and steering control in image-guided robot-assisted minimally invasive surgery: a review.

    Science.gov (United States)

    Li, Pan; Yang, Zhiyong; Jiang, Shan

    2018-06-01

    Image-guided robot-assisted minimally invasive surgery is an important medical procedure used for biopsy or local targeted therapy. In order to reach target regions not accessible using traditional techniques, long, thin flexible needles are inserted into soft tissue, which exhibits large deformation and nonlinear characteristics. However, the detection results and therapeutic effect are directly influenced by the targeting accuracy of needle steering. For this reason, the needle-tissue interaction mechanism, path planning, and steering control are investigated in this review by searching the literature of the last 10 years, resulting in a comprehensive overview of the existing techniques with their main accomplishments, limitations, and recommendations. Through comprehensive analyses, surgical simulation of insertion into multi-layer inhomogeneous tissue, which accurately predicts nonlinear needle deflection and tissue deformation, is identified as a primary aspect to be explored. Investigation of the path planning of flexible needles is recommended to move toward anatomical and deformable environments that exhibit tissue deformation. Nonholonomic modeling combined with duty-cycled spinning for needle steering, which tracks the tip position in real time and compensates for the deviation error, is recommended as a future research focus for steering control in anatomical and deformable environments. Graphical abstract a Insertion force when the needle is inserted into soft tissue. b Needle deflection model when the needle is inserted into soft tissue [68]. c Path planning in anatomical environments [92]. d Duty-cycled spinning incorporated in nonholonomic needle steering [64].

  3. Real-time underwater image enhancement: An improved approach ...

    Indian Academy of Sciences (India)

    School of Mechatronics, CSIR-Central Mechanical Engineering Research Institute, Durgapur 713209, India; Robotics and ...

  4. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    International Nuclear Information System (INIS)

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon; Ra, Jong Beom; Lee, Jae Young

    2015-01-01

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have some limits, either in the registration speed or the performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As feature(s) to use for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with a diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real-time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a
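The candidate selection in the intraoperative real-time stage amounts to scoring the 2D similarity between the live US frame and each candidate preoperative slice. A sketch using normalized cross-correlation as the similarity measure (an assumed, illustrative metric; the paper's own registration criterion is not reproduced here):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two same-size images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def best_matching_slice(live_frame, candidate_slices):
    """Index and score of the candidate slice most similar to the live frame."""
    scores = [ncc(live_frame, s) for s in candidate_slices]
    best = int(np.argmax(scores))
    return best, scores[best]

# Five candidate slices (stand-ins for different respiratory phases);
# the live frame is a noisy copy of slice 3.
rng = np.random.default_rng(3)
candidates = [rng.normal(size=(32, 32)) for _ in range(5)]
live = candidates[3] + rng.normal(0.0, 0.1, (32, 32))
idx, score = best_matching_slice(live, candidates)
```
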

  5. Contrast-guided image interpolation.

    Science.gov (United States)

    Wei, Zhe; Ma, Kai-Kuang

    2013-11-01

    In this paper a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating the nearby non-edge pixels of each detected edge for possible re-classification as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields, using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded, yielding the binary CDMs; decision bands with variable widths are thereby created on each CDM. The two CDMs generated in each stage are exploited as guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering is applied to estimate its associated to-be-interpolated pixel along the direction indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering is used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low compared with existing methods; hence, it is fairly attractive for real-time image applications.
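The essence of the directional filtering can be illustrated on a single missing diagonal pixel: measure the directional variation (DV) along each diagonal and average along the lower-variation one, so filtering follows an edge rather than smearing across it. A toy sketch, not the paper's full CDM pipeline:

```python
import numpy as np

def interpolate_diagonal(img, i, j):
    """Estimate a missing pixel from its four diagonal neighbours, filtering
    along the diagonal with the smaller directional variation (DV)."""
    nw, ne = img[i - 1, j - 1], img[i - 1, j + 1]
    sw, se = img[i + 1, j - 1], img[i + 1, j + 1]
    dv_45 = abs(ne - sw)    # variation along the 45-degree diagonal
    dv_135 = abs(nw - se)   # variation along the 135-degree diagonal
    if dv_45 <= dv_135:
        return (ne + sw) / 2.0
    return (nw + se) / 2.0

# A 45-degree edge: the NE/SW pair is uniform, so the filter follows the edge.
patch = np.array([[0.0, 0.0, 5.0],
                  [0.0, 0.0, 0.0],
                  [5.0, 0.0, 10.0]])
value = interpolate_diagonal(patch, 1, 1)  # averages NE and SW, giving 5.0
```
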

  6. Real-time ultrasound-guided spinal anaesthesia: a prospective observational study of a new approach.

    LENUS (Irish Health Repository)

    Conroy, P H

    2013-01-01

    Identification of the subarachnoid space has traditionally been achieved by either a blind landmark-guided approach or with pre-puncture ultrasound assistance. To assess the feasibility of performing spinal anaesthesia under real-time ultrasound guidance in routine clinical practice, we conducted a single-centre prospective observational study among patients undergoing lower limb orthopaedic surgery. A spinal needle was inserted unassisted within the ultrasound transducer imaging plane using a paramedian approach (i.e., the operator held the transducer in one hand and the spinal needle in the other). The primary outcome measure was the success rate of CSF acquisition under real-time ultrasound guidance; CSF was located in 97 of 100 consecutive patients within a median of three needle passes (IQR 1-6). CSF was not acquired in three patients. Subsequent attempts combining landmark palpation and pre-puncture ultrasound scanning resulted in successful spinal anaesthesia in two of these patients, with the third patient requiring general anaesthesia. The median time from spinal needle insertion to completion of intrathecal injection was 1.2 minutes (IQR 0.83-4.1), demonstrating the feasibility of this technique in routine clinical practice.

  7. Real-time virtual sonography for navigation during targeted prostate biopsy using magnetic resonance imaging data

    International Nuclear Information System (INIS)

    Miyagawa, Tomoaki; Ishikawa, Satoru; Kimura, Tomokazu; Suetomi, Takahiro; Tsutsumi, Masakazu; Irie, Toshiyuki; Kondoh, Masanao; Mitake, Tsuyoshi

    2010-01-01

    The objective of this study was to evaluate the effectiveness of a medical navigation technique, Real-time Virtual Sonography (RVS), for targeted prostate biopsy. Eighty-five patients with prostate cancer lesions suspected on magnetic resonance imaging (MRI) were included in this study. All selected patients had at least one negative result on previous transrectal biopsies. The acquired MRI volume data were loaded onto a personal computer with RVS software installed, which registers the MRI volume to real-time ultrasound data for real-time display. The registered MRI images were displayed adjacent to the ultrasonographic sagittal image on the same computer monitor. Suspected lesions on T2-weighted images were marked with a red circle. Suspected lesions were first biopsied transperineally under real-time RVS navigation, followed by conventional transrectal and transperineal biopsy under spinal anesthesia. The median age of the patients was 69 years (56-84 years), and the prostate-specific antigen level and prostate volume were 9.9 ng/mL (4.0-34.2) and 37.2 mL (18-141), respectively. Prostate cancer was detected in 52 patients (61%). The biopsy specimens obtained using RVS were positive for prostate cancer in 45/52 patients (87%). A total of 192 biopsy cores were obtained using RVS; 62 of these (32%) were positive for prostate cancer, whereas conventional random biopsy revealed cancer in only 75/833 (9%) cores (P<0.01). Targeted prostate biopsy with RVS is very effective for diagnosing lesions detected with MRI. The technique only requires an additional computer and RVS software and is thus cost-effective. Therefore, RVS-guided prostate biopsy has great potential for better management of prostate cancer patients. (author)

  8. Handheld real-time volumetric 3-D gamma-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Haefner, Andrew, E-mail: ahaefner@lbl.gov [Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Barnowski, Ross [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720 (United States); Luke, Paul; Amman, Mark [Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Vetter, Kai [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720 (United States); Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States)

    2017-06-11

    This paper presents the concept of real-time fusion of gamma-ray imaging and visual scene data for a hand-held mobile Compton imaging system in 3-D. The ability to obtain and integrate both gamma-ray and scene data from a mobile platform enables improved capabilities in the localization and mapping of radioactive materials. This not only enhances the ability to localize these materials, but it also provides important contextual information of the scene which once acquired can be reviewed and further analyzed subsequently. To demonstrate these concepts, the high-efficiency multimode imager (HEMI) is used in a hand-portable implementation in combination with a Microsoft Kinect sensor. This sensor, in conjunction with open-source software, provides the ability to create a 3-D model of the scene and to track the position and orientation of HEMI in real-time. By combining the gamma-ray data and visual data, accurate 3-D maps of gamma-ray sources are produced in real-time. This approach is extended to map the location of radioactive materials within objects with unknown geometry.
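Fusing gamma-ray and scene data requires mapping source positions from the imager's local frame into the tracked world frame through the estimated pose. A sketch using a hypothetical 4x4 homogeneous pose (illustrative values, not real Kinect/HEMI output):

```python
import numpy as np

def to_world(pose, points_local):
    """Map 3-D points from the imager frame to the world frame.

    pose: 4x4 homogeneous transform of the imager in the world frame
    (in the paper this comes from the visual tracking; here it is assumed).
    """
    pts = np.hstack([points_local, np.ones((len(points_local), 1))])
    return (pose @ pts.T).T[:, :3]

# Imager rotated 90 degrees about z and translated 1 m along x.
pose = np.array([[0.0, -1.0, 0.0, 1.0],
                 [1.0,  0.0, 0.0, 0.0],
                 [0.0,  0.0, 1.0, 0.0],
                 [0.0,  0.0, 0.0, 1.0]])
local_source = np.array([[2.0, 0.0, 0.0]])  # source 2 m ahead of the imager
world_source = to_world(pose, local_source)
```

As the imager moves, re-applying its current pose keeps successive source estimates in one consistent scene frame, which is what allows the 3-D source map to accumulate.
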

  9. CAD2RL: Real Single-Image Flight without a Single Real Image

    OpenAIRE

    Sadeghi, Fereshteh; Levine, Sergey

    2016-01-01

    Deep reinforcement learning has emerged as a promising and powerful technique for automatically acquiring control policies that can process raw sensory inputs, such as images, and perform complex behaviors. However, extending deep RL to real-world robotic tasks has proven challenging, particularly in safety-critical domains such as autonomous flight, where a trial-and-error learning process is often impractical. In this paper, we explore the following question: can we train vision-based navig...

  10. Real-time image fusion involving diagnostic ultrasound

    DEFF Research Database (Denmark)

    Ewertsen, Caroline; Săftoiu, Adrian; Gruionu, Lucian G

    2013-01-01

    The aim of our article is to give an overview of the current and future possibilities of real-time image fusion involving ultrasound. We present a review of the existing English-language peer-reviewed literature assessing this technique, which covers technical solutions (for ultrasound...

  11. In vivo quantification of fluorescent molecular markers in real-time by ratio Imaging for diagnostic screening and image-guided surgery

    NARCIS (Netherlands)

    Bogaards, A.; Sterenborg, H. J. C. M.; Trachtenberg, J.; Wilson, B. C.; Lilge, L.

    2007-01-01

    Future applications of "molecular diagnostic screening" and "molecular image-guided surgery" will demand images of molecular markers with high resolution and high throughput (≥30 frames/second). MRI, SPECT, PET, optical fluorescence tomography, hyper-spectral fluorescence imaging, and

  12. Real-time biscuit tile image segmentation method based on edge detection.

    Science.gov (United States)

    Matić, Tomislav; Aleksi, Ivan; Hocenski, Željko; Kraus, Dieter

    2018-05-01

    In this paper we propose a novel real-time Biscuit Tile Segmentation (BTS) method for images from a ceramic tile production line. The BTS method is based on signal-change detection and contour tracing, with the main goal of separating tile pixels from the background in images captured on the production line. Usually, human operators visually inspect and classify produced ceramic tiles. Computer vision and image processing techniques can automate the visual inspection process if they fulfill real-time requirements. An important step in this process is real-time segmentation of tile pixels. The BTS method is implemented for parallel execution on a GPU device to satisfy the real-time constraints of the tile production line. The BTS method outperforms 2D threshold-based methods, 1D edge detection methods and contour-based methods. The proposed BTS method is in use on a biscuit tile production line. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
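    The abstract names signal-change detection as the core of BTS but gives no algorithmic detail. A minimal, hypothetical per-scanline version of the idea (threshold the intensity gradient, then mark the span between the first and last strong edge in each row as tile) might look like this; the threshold value and function name are invented for illustration:

```python
import numpy as np

def segment_rows(image, grad_thresh=40):
    """Per-scanline tile/background split: find the first and last strong
    intensity change in each row; pixels between them are labeled 'tile'."""
    mask = np.zeros(image.shape, dtype=bool)
    grad = np.abs(np.diff(image.astype(np.int32), axis=1))  # signal change
    for r, row in enumerate(grad):
        edges = np.flatnonzero(row >= grad_thresh)
        if edges.size >= 2:
            mask[r, edges[0] + 1 : edges[-1] + 1] = True
    return mask

# Dark background (20) with a bright tile (200) spanning columns 3..6
img = np.full((2, 10), 20, dtype=np.uint8)
img[:, 3:7] = 200
print(segment_rows(img)[0])  # tile columns marked True
```

    Because each scanline is processed independently, this kind of scheme maps naturally onto one-GPU-thread-per-row parallelism, which is presumably why the authors target a GPU for real-time throughput.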

  13. Real-time single image dehazing based on dark channel prior theory and guided filtering

    Science.gov (United States)

    Zhang, Zan

    2017-10-01

    Images and videos captured outdoors on foggy days are seriously degraded. To restore images degraded by fog and to overcome the residual fog at edges left by traditional dark channel prior algorithms, we propose a new dehazing method. We first find the fog region in the dark channel map, using a quadtree search to obtain an estimate of the transmittance. Then we treat the gray-scale image after guided filtering as the atmospheric light map and remove haze based on it. Box filtering and image downsampling are also used to improve processing speed. Finally, the atmospheric light scattering model is used to restore the image. Extensive experiments show that the algorithm is effective and efficient and has a wide range of application.
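    The paper's specific pipeline (quadtree search, guided filtering, box filtering) is not reproduced here; the sketch below implements only the underlying dark-channel-prior baseline that such methods build on, with a numpy-only minimum filter. The patch size and the `omega`/`t0` constants are conventional choices, not the paper's values:

```python
import numpy as np

def min_filter(a, k):
    """Numpy-only 2-D minimum filter with a k-by-k window (edge padding)."""
    r = k // 2
    p = np.pad(a, r, mode="edge")
    h, w = a.shape
    return np.min([p[i:i + h, j:j + w] for i in range(k) for j in range(k)], axis=0)

def dehaze(img, patch=15, omega=0.95, t0=0.1):
    """Dark-channel-prior dehazing baseline: estimate atmospheric light A from
    the brightest dark-channel pixels, then invert I = J*t + A*(1 - t)."""
    dark = min_filter(img.min(axis=2), patch)            # dark channel
    top = dark.ravel().argsort()[-max(1, dark.size // 1000):]
    A = img.reshape(-1, 3)[top].max(axis=0)              # atmospheric light
    t = 1.0 - omega * min_filter((img / A).min(axis=2), patch)
    t = np.clip(t, t0, 1.0)                              # transmission map
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)

# Synthetic haze over a half-black, half-gray scene (airlight 1.0, t = 0.5)
scene = np.zeros((32, 32, 3))
scene[:, 16:] = 0.6
hazy = scene * 0.5 + 1.0 * (1 - 0.5)
out = dehaze(hazy)
print(hazy[:, :8].mean(), out[:, :8].mean())  # haze veil over the dark half is removed
```

    The hard-minimum patch filter is exactly what causes the "remnant fog at edges" the paper targets: the transmission map inherits blocky patch boundaries, which guided filtering is typically used to smooth.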

  14. A customizable system for real-time image processing using the Blackfin DSProcessor and the MicroC/OS-II real-time kernel

    Science.gov (United States)

    Coffey, Stephen; Connell, Joseph

    2005-06-01

    This paper presents a development platform for real-time image processing based on the ADSP-BF533 Blackfin processor and the MicroC/OS-II real-time operating system (RTOS). MicroC/OS-II is a completely portable, ROMable, pre-emptive, real-time kernel. The Blackfin Digital Signal Processors (DSPs), incorporating the Analog Devices/Intel Micro Signal Architecture (MSA), are a broad family of 16-bit fixed-point products with a dual Multiply Accumulate (MAC) core. In addition, they have a rich instruction set with variable instruction length and both DSP and MCU functionality thus making them ideal for media based applications. Using the MicroC/OS-II for task scheduling and management, the proposed system can capture and process raw RGB data from any standard 8-bit greyscale image sensor in soft real-time and then display the processed result using a simple PC graphical user interface (GUI). Additionally, the GUI allows configuration of the image capture rate and the system and core DSP clock rates thereby allowing connectivity to a selection of image sensors and memory devices. The GUI also allows selection from a set of image processing algorithms based in the embedded operating system.

  15. Real-time automatic fiducial marker tracking in low contrast cine-MV images

    International Nuclear Information System (INIS)

    Lin, Wei-Yang; Lin, Shu-Fang; Yang, Sheng-Chang; Liou, Shu-Cheng; Nath, Ravinder; Liu Wu

    2013-01-01

    Purpose: To develop a real-time automatic method for tracking implanted radiographic markers in low-contrast cine-MV patient images used in image-guided radiation therapy (IGRT). Methods: Intrafraction motion tracking using radiotherapy beam-line MV images has gained some attention recently in IGRT because no additional imaging dose is introduced. However, MV images have much lower contrast than kV images; therefore, a robust and automatic algorithm for marker detection in MV images is a prerequisite. Previous marker detection methods are all based on template matching or its derivatives. Template matching must match an object shape that changes significantly with implantation and projection angle. While these methods require a large number of templates to cover various situations, they are often forced to use a smaller number of templates to reduce the computational load, because they all require an exhaustive search over the region of interest. The authors solve this problem by synergistic use of modern but well-tested computer vision and artificial intelligence techniques; specifically, the authors detect implanted markers using discriminant analysis for initialization and mean-shift feature space analysis for sequential tracking. This novel approach avoids exhaustive search by exploiting the temporal correlation between consecutive frames and makes it possible to perform more sophisticated detection at the beginning to improve accuracy, followed by ultrafast sequential tracking after initialization. The method was evaluated and validated using 1149 cine-MV images from two prostate IGRT patients and compared with manual marker detection results from six researchers. The average of the manual detection results is considered the ground truth for comparisons. Results: The average root-mean-square errors of our real-time automatic tracking method from the ground truth are 1.9 and 2.1 pixels for the two patients (0.26 mm/pixel). The
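    The mean-shift sequential tracking named in the abstract can be illustrated on a synthetic marker-likelihood map. This is a generic mean-shift iteration, not the authors' implementation; the window size and the Gaussian test map are illustrative:

```python
import numpy as np

def mean_shift(likelihood, start, win=5, n_iter=20):
    """Sequential mean-shift: move a (2*win+1)^2 window to the weighted
    centroid of a per-pixel marker-likelihood map until it stops moving."""
    y, x = start
    h, w = likelihood.shape
    for _ in range(n_iter):
        y0, y1 = max(0, y - win), min(h, y + win + 1)
        x0, x1 = max(0, x - win), min(w, x + win + 1)
        patch = likelihood[y0:y1, x0:x1]
        total = patch.sum()
        if total == 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        ny = int(round((ys * patch).sum() / total))
        nx = int(round((xs * patch).sum() / total))
        if (ny, nx) == (y, x):          # converged
            break
        y, x = ny, nx
    return y, x

# Gaussian "marker response" centered at (30, 40); start the search nearby
Y, X = np.mgrid[0:64, 0:64]
response = np.exp(-((Y - 30) ** 2 + (X - 40) ** 2) / 18.0)
print(mean_shift(response, (25, 35)))  # → (30, 40)
```

    The key property exploited by the paper is visible here: each frame's search starts from the previous frame's position, so only a small window is ever examined instead of the whole region of interest.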

  16. Real-time automatic fiducial marker tracking in low contrast cine-MV images

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Wei-Yang; Lin, Shu-Fang; Yang, Sheng-Chang; Liou, Shu-Cheng; Nath, Ravinder; Liu Wu [Department of Computer Science and Information Engineering, National Chung Cheng University, Taiwan, 62102 (China); Department of Therapeutic Radiology, Yale University School of Medicine, New Haven, Connecticut 06510-3220 (United States)

    2013-01-15

    Purpose: To develop a real-time automatic method for tracking implanted radiographic markers in low-contrast cine-MV patient images used in image-guided radiation therapy (IGRT). Methods: Intrafraction motion tracking using radiotherapy beam-line MV images has gained some attention recently in IGRT because no additional imaging dose is introduced. However, MV images have much lower contrast than kV images; therefore, a robust and automatic algorithm for marker detection in MV images is a prerequisite. Previous marker detection methods are all based on template matching or its derivatives. Template matching must match an object shape that changes significantly with implantation and projection angle. While these methods require a large number of templates to cover various situations, they are often forced to use a smaller number of templates to reduce the computational load, because they all require an exhaustive search over the region of interest. The authors solve this problem by synergistic use of modern but well-tested computer vision and artificial intelligence techniques; specifically, the authors detect implanted markers using discriminant analysis for initialization and mean-shift feature space analysis for sequential tracking. This novel approach avoids exhaustive search by exploiting the temporal correlation between consecutive frames and makes it possible to perform more sophisticated detection at the beginning to improve accuracy, followed by ultrafast sequential tracking after initialization. The method was evaluated and validated using 1149 cine-MV images from two prostate IGRT patients and compared with manual marker detection results from six researchers. The average of the manual detection results is considered the ground truth for comparisons. Results: The average root-mean-square errors of our real-time automatic tracking method from the ground truth are 1.9 and 2.1 pixels for the two patients (0.26 mm/pixel). The

  17. Two-dimensional random arrays for real time volumetric imaging

    DEFF Research Database (Denmark)

    Davidsen, Richard E.; Jensen, Jørgen Arendt; Smith, Stephen W.

    1994-01-01

    Two-dimensional arrays are necessary for a variety of ultrasonic imaging techniques, including elevation focusing, 2-D phase aberration correction, and real time volumetric imaging. In order to reduce system cost and complexity, sparse 2-D arrays have been considered with element geometries selected ad hoc, by algorithm, or by random process. Two random sparse array geometries and a sparse array with a Mills cross receive pattern were simulated and compared to a fully sampled aperture with the same overall dimensions. The sparse arrays were designed to the constraints of the Duke University real time volumetric imaging system, which employs a wide transmit beam and receive mode parallel processing to increase image frame rate. Depth-of-field comparisons were made from simulated on-axis and off-axis beamplots at ranges from 30 to 160 mm for both coaxial and offset transmit and receive...

  18. HYBRID COMMUNICATION NETWORK OF MOBILE ROBOT AND QUAD-COPTER

    Directory of Open Access Journals (Sweden)

    Moustafa M. Kurdi

    2017-01-01

    Full Text Available This paper introduces the design and development of QMRS (Quadcopter Mobile Robotic System). QMRS adds real-time obstacle avoidance capability to a Belarus-132N mobile robot in cooperation with a Phantom-4 quadcopter. QMRS combines GPS used by the mobile robot, vision and image-processing systems on both the robot and the quadcopter, and an effective search algorithm embedded in the robot. The capacity to navigate accurately is one of the major abilities a mobile robot needs to effectively execute a variety of jobs, including manipulation, docking, and transportation. To achieve the desired navigation accuracy, mobile robots are typically equipped with on-board sensors to observe persistent features in the environment, to estimate their pose from these observations, and to adjust their motion accordingly. The quadcopter takes off from the mobile robot, surveys the terrain and transmits the processed images to the terrestrial robot. The main objective of this research paper is the full coordination between robot and quadcopter, achieved by designing an efficient wireless communication link using WiFi. In addition, it describes the use of the vision and image-processing systems on both robot and quadcopter, analyzing the path in real time and avoiding obstacles based on the computational algorithm embedded in the robot. QMRS increases the efficiency and reliability of the whole system, especially in robot navigation, image processing and obstacle avoidance, thanks to the cooperation and connections among the different parts of the system.

  19. A novel magnetic resonance imaging-compatible motor control method for image-guided robotic surgery

    International Nuclear Information System (INIS)

    Suzuki, Takashi; Liao, Hongen; Kobayashi, Etsuko; Sakuma, Ichiro

    2006-01-01

    For robotic surgery assistance systems that use magnetic resonance imaging (MRI) for guidance, electromagnetic interference is a common problem. Image quality is particularly degraded if motors are running during scanning. We propose a novel MRI-compatible method that takes the imaging pulse sequence into account. Motors are driven for a short time when the MRI system stops signal acquisition (i.e., while awaiting relaxation of the protons), so the image does not contain noise from the actuators. The MRI system and motor are synchronized using a radio-frequency pulse signal (8.5 MHz) as the trigger, which is acquired via a special antenna mounted near the scanner. This method can be widely applied because it only receives part of the scanning signal, and neither the hardware nor the software of the MRI system needs to be changed. As a feasibility evaluation, we compared images and signal-to-noise ratios with and without this method, with a piezoelectric motor (commonly used as an MRI-compatible actuator) driven during scanning as a noise source. The results showed no deterioration in image quality and demonstrated the benefit of the new method, even though the choice of available scanning sequences is limited. (author)

  20. Real-Time Control of an Exoskeleton Hand Robot with Myoelectric Pattern Recognition.

    Science.gov (United States)

    Lu, Zhiyuan; Chen, Xiang; Zhang, Xu; Tong, Kay-Yu; Zhou, Ping

    2017-08-01

    Robot-assisted training provides an effective approach to neurological injury rehabilitation. To meet the challenge of hand rehabilitation after neurological injuries, this study presents an advanced myoelectric pattern recognition scheme for real-time intention-driven control of a hand exoskeleton. The developed scheme detects and recognizes the user's intention to perform six different hand motions using four channels of surface electromyography (EMG) signals acquired from the forearm and hand muscles, and then drives the exoskeleton to assist the user in accomplishing the intended motion. The system was tested with eight neurologically intact subjects and two individuals with spinal cord injury (SCI). The overall control accuracy was [Formula: see text] for the neurologically intact subjects and [Formula: see text] for the SCI subjects. The total lag of the system was approximately 250 ms, including data acquisition, transmission and processing. One SCI subject also participated in training sessions during his second and third visits. Both control accuracy and efficiency tended to improve. These results show great potential for applying advanced myoelectric pattern recognition control of the wearable robotic hand system toward improving hand function after neurological injuries.
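    The classifier details are not given in the abstract; a toy sketch of a generic myoelectric pattern recognition pipeline (Hudgins-style time-domain features plus a nearest-centroid classifier on synthetic four-channel "EMG") might look like the following. The `synth` generator and every parameter are invented for illustration, not taken from the paper:

```python
import numpy as np

def td_features(window):
    """Hudgins-style time-domain features per EMG channel: mean absolute
    value, waveform length, and zero-crossing count."""
    mav = np.abs(window).mean(axis=0)
    wl = np.abs(np.diff(window, axis=0)).sum(axis=0)
    zc = (np.diff(np.signbit(window).astype(np.int8), axis=0) != 0).sum(axis=0)
    return np.concatenate([mav, wl, zc])

rng = np.random.default_rng(1)

def synth(motion, n=200, ch=4):
    """Invented 4-channel 'EMG': each motion boosts one channel's amplitude."""
    gains = np.eye(ch)[motion % ch] * 2.0 + 0.3
    return rng.normal(size=(n, ch)) * gains

# Nearest-centroid classification between two motions
centroids = {m: np.mean([td_features(synth(m)) for _ in range(20)], axis=0)
             for m in (0, 1)}
feat = td_features(synth(1))
pred = min(centroids, key=lambda m: np.linalg.norm(centroids[m] - feat))
print(pred)  # → 1
```

    The ~250 ms lag reported above is typical of such pipelines: one analysis window must be buffered, featurized and classified before the exoskeleton command can be issued.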

  1. An automated robot arm system for small animal tissue biopsy under dual-image modality

    International Nuclear Information System (INIS)

    Huang, Y.H.; Wu, T.H.; Lin, M.H.; Yang, C.C.; Guo, W.Y.; Wang, Z.J.; Chen, C.L.; Lee, J.S.

    2006-01-01

    The ability to non-invasively monitor cell biology in vivo is one of the most important goals of molecular imaging. Imaging procedures can be performed repeatedly on the same subject at different stages of an investigation, so small animals need not be sacrificed during the study period. The ultimate goal of this study was thus to design a stereotactic image-guided system for small animals and integrate it with an automated robot arm for in vivo tissue biopsy analysis. The system was composed of three main parts: a small-animal stereotactic frame, image-fusion software and an automated robot arm system. The system was thoroughly evaluated on its three components: the robot positioning accuracy was 0.05±0.02 mm, the image registration accuracy was 0.37±0.18 mm and the integrated system error was a satisfactory 1.20±0.39 mm. From these results, the system demonstrated sufficient accuracy to guide the micro-injector along the planned delivery routes in practice. The overall system accuracy was limited by the image fusion and orientation procedures, owing to the blurred nature of PET images obtained from small objects. The primary improvement will be to acquire fused images at the highest possible resolution for localizing targets in the future

  2. Ultrasonic image analysis and image-guided interventions.

    Science.gov (United States)

    Noble, J Alison; Navab, Nassir; Becher, H

    2011-08-06

    The fields of medical image analysis and computer-aided interventions deal with reducing the large volume of digital images (X-ray, computed tomography, magnetic resonance imaging (MRI), positron emission tomography and ultrasound (US)) to more meaningful clinical information using software algorithms. US is a core imaging modality employed in these areas, both in its own right and in conjunction with the other imaging modalities. It is receiving increased interest owing to the recent introduction of three-dimensional US, significant improvements in US image quality, and better understanding of how to design algorithms that exploit the unique strengths and properties of this real-time imaging modality. This article reviews the current state of the art in US image analysis and its application in image-guided interventions. The article concludes by giving a perspective from clinical cardiology, which is one of the most advanced areas of clinical application of US image analysis, and by describing some probable future trends in this important area of ultrasonic imaging research.

  3. Validation of a novel robot-assisted 3DUS system for real-time planning and guidance of breast interstitial HDR brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Poulin, Eric; Beaulieu, Luc, E-mail: Luc.Beaulieu@phy.ulaval.ca [Département de Physique, de Génie Physique et d'Optique et Centre de Recherche sur le Cancer de l'Université Laval, Université Laval, Québec, Québec G1V 0A6, Canada and Département de Radio-oncologie et Axe Oncologie du Centre de Recherche du CHU de Québec, CHU de Québec, 11 Côte du Palais, Québec, Québec G1R 2J6 (Canada); Gardi, Lori; Barker, Kevin; Montreuil, Jacques; Fenster, Aaron [Imaging Research Laboratories, Robarts Research Institute, 100 Perth Drive, London, Ontario N6A 5K8 (Canada)

    2015-12-15

    Purpose: In current clinical practice, there is no integrated 3D ultrasound (3DUS) guidance system clinically available for breast brachytherapy. In this study, the authors present a novel robot-assisted 3DUS system for real-time planning and guidance of breast interstitial high dose rate (HDR) brachytherapy treatment. Methods: For this work, a new computer controlled robotic 3DUS system was built to perform a hybrid motion scan, which is a combination of a 6 cm linear translation with a 30° rotation at both ends. The new 3DUS scanner was designed to fit on a modified Kuske assembly, keeping the current template grid configuration but modifying the frame to allow the mounting of the 3DUS system at several positions. A finer grid was also tested. A user interface was developed to perform image reconstruction, semiautomatic segmentation of the surgical bed as well as catheter reconstruction and tracking. A 3D string phantom was used to validate the geometric accuracy of the reconstruction. The volumetric accuracy of the system was validated with phantoms using magnetic resonance imaging (MRI) and computed tomography (CT) images. In order to accurately determine whether 3DUS can effectively replace CT for treatment planning, the authors have compared the 3DUS catheter reconstruction to the one obtained from CT images. In addition, in agarose-based phantoms, an end-to-end procedure was performed by executing six independent complete procedures with both 14 and 16 catheters, and for both standard and finer Kuske grids. Finally, in phantoms, five end-to-end procedures were performed with the final CT planning for the validation of 3DUS preplanning. Results: The 3DUS acquisition time is approximately 10 s. A paired Student t-test showed that there was no statistical significant difference between known and measured values of string separations in each direction. Both MRI and CT volume measurements were not statistically different from 3DUS volume (Student t-test: p > 0

  4. TU-A-304-01: Introduction and Workflow of Image-Guided SBRT

    International Nuclear Information System (INIS)

    Salter, B.

    2015-01-01

    Increased use of SBRT and hypofractionation in radiation oncology practice has posed a number of challenges to medical physicists, ranging from planning, image-guided patient setup and on-treatment monitoring, to quality assurance (QA) and dose delivery. This symposium is designed to provide the updated knowledge necessary for the safe and efficient implementation of SBRT on various linac platforms, including the emerging digital linacs equipped with high-dose-rate FFF beams. Issues related to 4D CT, PET and MRI simulations, 3D/4D CBCT-guided patient setup, real-time image guidance during SBRT dose delivery using gated/un-gated VMAT or IMRT, and technical advancements in QA of SBRT (in particular, strategies dealing with high-dose-rate FFF beams) will be addressed. The symposium will help attendees gain a comprehensive understanding of the SBRT workflow and facilitate their clinical implementation of state-of-the-art imaging and planning techniques. Learning Objectives: Present background knowledge of SBRT, describe essential requirements for safe implementation of SBRT, and discuss issues specific to SBRT treatment planning and QA. Update on the use of multi-dimensional (3D and 4D) and multi-modality (CT, beam-level X-ray imaging, pre- and on-treatment 3D/4D MRI, PET, robotic ultrasound, etc.) imaging for reliable guidance of SBRT. Provide a comprehensive overview of emerging digital linacs and summarize the key geometric and dosimetric features of the new generation of linacs for substantially improved SBRT. Discuss treatment planning and quality assurance issues specific to SBRT. Research grant from Varian Medical Systems

  5. TU-A-304-01: Introduction and Workflow of Image-Guided SBRT

    Energy Technology Data Exchange (ETDEWEB)

    Salter, B. [University of Utah Huntsman Cancer Institute (United States)

    2015-06-15

    Increased use of SBRT and hypofractionation in radiation oncology practice has posed a number of challenges to medical physicists, ranging from planning, image-guided patient setup and on-treatment monitoring, to quality assurance (QA) and dose delivery. This symposium is designed to provide the updated knowledge necessary for the safe and efficient implementation of SBRT on various linac platforms, including the emerging digital linacs equipped with high-dose-rate FFF beams. Issues related to 4D CT, PET and MRI simulations, 3D/4D CBCT-guided patient setup, real-time image guidance during SBRT dose delivery using gated/un-gated VMAT or IMRT, and technical advancements in QA of SBRT (in particular, strategies dealing with high-dose-rate FFF beams) will be addressed. The symposium will help attendees gain a comprehensive understanding of the SBRT workflow and facilitate their clinical implementation of state-of-the-art imaging and planning techniques. Learning Objectives: Present background knowledge of SBRT, describe essential requirements for safe implementation of SBRT, and discuss issues specific to SBRT treatment planning and QA. Update on the use of multi-dimensional (3D and 4D) and multi-modality (CT, beam-level X-ray imaging, pre- and on-treatment 3D/4D MRI, PET, robotic ultrasound, etc.) imaging for reliable guidance of SBRT. Provide a comprehensive overview of emerging digital linacs and summarize the key geometric and dosimetric features of the new generation of linacs for substantially improved SBRT. Discuss treatment planning and quality assurance issues specific to SBRT. Research grant from Varian Medical Systems.

  6. Just-in-time tomography (JiTT): a new concept for image-guided radiation therapy

    International Nuclear Information System (INIS)

    Pang, G; Rowlands, J A

    2005-01-01

    Soft-tissue target motion is one of the main concerns in high-precision radiation therapy. Cone beam computed tomography (CBCT) has been developed recently to image soft-tissue targets in the treatment room and guide the radiation therapy treatment. However, due to its relatively long image acquisition time, the CBCT approach cannot provide images of the target at the instant of treatment and thus is not adequate for imaging targets with intrafraction motion. In this note, a new approach for image-guided radiation therapy, just-in-time tomography (JiTT), is proposed. Unlike CBCT, JiTT takes much less time to generate the needed tomographic, beam's-eye-view images of the treatment target at the right moment to guide the radiation therapy treatment. (note)

  7. Helicopter Flight Test of a Compact, Real-Time 3-D Flash Lidar for Imaging Hazardous Terrain During Planetary Landing

    Science.gov (United States)

    Roback, VIncent E.; Amzajerdian, Farzin; Brewster, Paul F.; Barnes, Bruce W.; Kempton, Kevin S.; Reisse, Robert A.; Bulyshev, Alexander E.

    2013-01-01

    -scanning mode in which successive, gimbaled images of the hazard field are mosaicked together as well as in a wider, 4.85deg FOV staring mode in which digital magnification, via a novel 3-D superresolution technique, is used to effectively achieve the same spatial precision attained with the more narrow FOV optics. The lidar generates calibrated and corrected 3-D range images of the hazard field in real-time and passes them to the ALHAT Hazard Detection System (HDS) which stitches the images together to generate on-the-fly Digital Elevation Maps (DEM's) and identifies hazards and safe-landing sites which the ALHAT GN&C system can then use to guide the host vehicle to a safe landing on the selected site. Results indicate that, for the KSC hazard field, the lidar operational range extends from 100m to 1.35 km for a 30 degree line-of-sight angle and a range precision as low as 8 cm which permits hazards as small as 25 cm to be identified. Based on the Flash Lidar images, the HDS correctly found and reported safe sites in near-real-time during several of the flights. A follow-on field test, planned for 2013, seeks to complete the closing of the GN&C loop for fully-autonomous operations on-board the Morpheus robotic, rocket-powered, free-flyer test bed in which the ALHAT system would scan the KSC hazard field (which was vetted during the present testing) and command the vehicle to landing on one of the selected safe sites.

  8. Modelling and Scheduling Autonomous Mobile Robot for a Real-World Industrial Application

    DEFF Research Database (Denmark)

    Dang, Vinh Quang; Nielsen, Izabela Ewa; Bøgh, Simon

    2013-01-01

    proposes an approach composed of: a mobile robot system design (“Little Helper”), an appropriate and comprehensive industrial application (multiple-part feeding tasks), an implementation concept for industrial environments (the bartender concept), and a real-time heuristic integrated into Mission...... from the real-time heuristic. The results also demonstrated that the proposed real-time heuristic is capable of finding the best schedule in online production mode....

  9. Using Opaque Image Blur for Real-Time Depth-of-Field Rendering and Image-Based Motion Blur

    DEFF Research Database (Denmark)

    Kraus, Martin

    2013-01-01

    While depth of field is an important cinematographic means, its use in real-time computer graphics is still limited by the computational costs that are necessary to achieve a sufficient image quality. Specifically, color bleeding artifacts between objects at different depths are most effectively...... that the opaque image blur can also be used to add motion blur effects to images in real time....

  10. Real-time Accurate Surface Reconstruction Pipeline for Vision Guided Planetary Exploration Using Unmanned Ground and Aerial Vehicles

    Science.gov (United States)

    Almeida, Eduardo DeBrito

    2012-01-01

    This report discusses work completed over the summer at the Jet Propulsion Laboratory (JPL), California Institute of Technology. A system is presented to guide ground or aerial unmanned robots using computer vision. The system performs accurate camera calibration, camera pose refinement and surface extraction from images collected by a camera mounted on the vehicle. The application motivating the research is planetary exploration and the vehicles are typically rovers or unmanned aerial vehicles. The information extracted from imagery is used primarily for navigation, as robot location is the same as the camera location and the surfaces represent the terrain that rovers traverse. The processed information must be very accurate and acquired very fast in order to be useful in practice. The main challenge being addressed by this project is to achieve high estimation accuracy and high computation speed simultaneously, a difficult task due to many technical reasons.

  11. Robot bicolor system

    Science.gov (United States)

    Yamaba, Kazuo

    1999-03-01

    In robot vision, the most important problem is that the speed of acquiring and analyzing images is lower than the execution speed of the robot. In a practical robot color vision system, processing should occur in real time. We surmised that this problem might be solved by using the bicolor analysis technique. We have been testing a system which we hope will give us insight into the properties of bicolor vision. The experiment uses the red channel of a color CCD camera and an image from a monochromatic camera to duplicate McCann's theory. To mix the two signals together, the mono image is copied into each of the red, green and blue memory banks of the image processing board, and then the red image is added to the red bank. Conversely, pure red, green and blue color components are obtained from the original bicolor images in the novel color system after the scaling factor is applied to each RGB image. Our search for a bicolor robot vision system was entirely successful.
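    The bicolor mixing step described above (copy the mono image into the red, green and blue banks, then add the red-channel image into the red bank) can be sketched directly; the `scale` factor is a placeholder, since the abstract does not specify the scaling actually used:

```python
import numpy as np

def bicolor(mono, red, scale=0.5):
    """Bicolor composite: the monochrome image fills all three RGB banks,
    then the red-channel image is added into the red bank."""
    out = np.stack([mono, mono, mono], axis=-1).astype(np.float64)
    out[..., 0] += scale * red
    return np.clip(out, 0, 255).astype(np.uint8)

# A pixel seen strongly by the red camera turns reddish; others stay gray
mono = np.full((2, 2), 100, dtype=np.uint8)
red = np.array([[200, 0], [0, 0]], dtype=np.uint8)
out = bicolor(mono, red)
print(out[0, 0], out[1, 1])  # → [200 100 100] [100 100 100]
```

    The appeal for real-time robot vision is that only two image streams (mono plus one color channel) need to be captured and fused, rather than a full three-channel color pipeline.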

  12. [Experimental study of angiography using vascular interventional robot-2(VIR-2)].

    Science.gov (United States)

    Tian, Zeng-min; Lu, Wang-sheng; Liu, Da; Wang, Da-ming; Guo, Shu-xiang; Xu, Wu-yi; Jia, Bo; Zhao, De-peng; Liu, Bo; Gao, Bao-feng

    2012-06-01

    To verify the feasibility and safety of a new vascular interventional robot system for vascular interventional procedures. The vascular interventional robot type 2 (VIR-2) comprises a master-slave body propulsion system, an image navigation system and a force feedback system; catheter movement is achieved under automatic control and navigation, with real-time force feedback integrated. An in vitro pre-test in a vascular model was followed by cerebral angiography in a dog. The surgeon controlled the vascular interventional robot remotely, the catheter was inserted into the intended target, and the catheter positioning error and operation time were evaluated. The in vitro pre-test and animal experiment went smoothly; the catheter could enter any vascular branch. Catheter positioning error was less than 1 mm. The angiography procedure in the animal was carried out without complication; the success rate of the operation was 100%, the experiments took 26 and 30 minutes in total, efficiency was slightly improved compared with VIR-1, and the time staff were exposed to the DSA machine was 0 minutes. The resistance measured by the force sensor can be displayed to the operator, providing a safety guarantee for the operation. There were no surgical complications. VIR-2 is safe and feasible and enables remote catheter operation and angiography; the master-slave system retains the characteristics of the traditional procedure. The three-dimensional images guide the operation more smoothly, and the force feedback device provides remote real-time haptic information to ensure the safety of the operation.

  13. Just-in-time tomography (JiTT): a new concept for image-guided radiation therapy

    Energy Technology Data Exchange (ETDEWEB)

    Pang, G; Rowlands, J A [Toronto-Sunnybrook Regional Cancer Centre, 2075 Bayview Avenue, Toronto M4N 3M5 (Canada); Imaging Research, Sunnybrook and Women' s College Health Sciences Centre, Departments of Radiation Oncology and Medical Biophysics, University of Toronto, Toronto (Canada)

    2005-11-07

    Soft-tissue target motion is one of the main concerns in high-precision radiation therapy. Cone beam computed tomography (CBCT) has been developed recently to image soft-tissue targets in the treatment room and guide the radiation therapy treatment. However, due to its relatively long image acquisition time the CBCT approach cannot provide images of the target at the instant of the treatment and thus it is not adequate for imaging targets with intrafraction motion. In this note, a new approach for image-guided radiation therapy-just-in-time tomography (JiTT)-is proposed. Differing from CBCT, JiTT takes much less time to generate the needed tomographical, beam's-eye-view images of the treatment target at the right moment to guide the radiation therapy treatment. (note)

  14. Volumetric Real-Time Imaging Using a CMUT Ring Array

    OpenAIRE

    Choe, Jung Woo; Oralkan, Ömer; Nikoozadeh, Amin; Gencel, Mustafa; Stephens, Douglas N.; O’Donnell, Matthew; Sahn, David J.; Khuri-Yakub, Butrus T.

    2012-01-01

    A ring array provides a very suitable geometry for forward-looking volumetric intracardiac and intravascular ultrasound imaging. We fabricated an annular 64-element capacitive micromachined ultrasonic transducer (CMUT) array featuring a 10-MHz operating frequency and a 1.27-mm outer radius. A custom software suite was developed to run on a PC-based imaging system for real-time imaging using this device.

  15. Medical Robots: Current Systems and Research Directions

    Directory of Open Access Journals (Sweden)

    Ryan A. Beasley

    2012-01-01

    Full Text Available First used medically in 1985, robots now make an impact in laparoscopy, neurosurgery, orthopedic surgery, emergency response, and various other medical disciplines. This paper provides a review of medical robot history and surveys the capabilities of current medical robot systems, primarily focusing on commercially available systems while covering a few prominent research projects. By examining robotic systems across time and disciplines, trends are discernible that imply future capabilities of medical robots, for example, increased usage of intraoperative images, improved robot arm design, and haptic feedback to guide the surgeon.

  16. Borehole images while drilling : real-time dip picking in the foothills

    Energy Technology Data Exchange (ETDEWEB)

    Dexter, D. [Schlumberger Canada Ltd., Calgary, AB (Canada); Brezsnyak, F. [Talisman Energy Inc., Calgary, AB (Canada); Roth, J. [Talisman Energy Inc., Calgary, AB (Canada)

    2008-07-01

    The Alberta Foothills drilling environment is a structurally complex thrust belt with slow, costly drilling and frequent plan changes after logging. Cross sections are not always accurate due to poor resolution, so the placement of the wellbore is crucial to success. This presentation showed borehole images from drilling in the Foothills. Topics addressed included the Foothills drilling environment; target selection; current well placement methods; and current well performance. Borehole images included resistivity images and density images. The presentation addressed why real-time images should be run: they make it possible to pick dips in real time; structural information in real time allows better well placement; it becomes easier to find and stay in producing areas; non-productive time and the probability of sidetracks are reduced; and pipe-conveyed logs are eliminated. Applications in the Alberta Foothills, such as the commercial run of the GVR4, were also offered. Among the operational issues and lessons learned, it was determined that the ratio of reservoir thickness to measurement-point distance is too great to avoid exiting the sweet spot, and that survey calculation errors cause image offset. It was concluded that the GVR is a driller's tool for well placement. figs.

  17. SU-G-JeP3-08: Robotic System for Ultrasound Tracking in Radiation Therapy

    Energy Technology Data Exchange (ETDEWEB)

    Kuhlemann, I [University of Luebeck, Luebeck (Germany); Graduate School for Computing in Medicine and Life Sciences, University of Luebeck (Germany); Jauer, P; Schweikard, A; Ernst, F [University of Luebeck, Luebeck (Germany)

    2016-06-15

    Purpose: For safe and accurate real-time tracking of tumors for IGRT using 4D ultrasound, it is necessary to make use of novel, high-end force-sensitive lightweight robots designed for human-machine interaction. Such a robot will be integrated into an existing robotized ultrasound system for non-invasive 4D live tracking, using a newly developed real-time control and communication framework. Methods: The new KUKA LWR iiwa robot is used for robotized ultrasound real-time tumor tracking. Besides more precise probe contact pressure detection, this robot provides an additional 7th link, enhancing the dexterity of the kinematic and the mounted transducer. Several integrated, certified safety features create a safe environment for the patients during treatment. However, to remotely control the robot for the ultrasound application, a real-time control and communication framework has to be developed. Based on a client/server concept, client-side control commands are received and processed by a central server unit and are implemented by a client module running directly on the robot’s controller. Several special functionalities for robotized ultrasound applications are integrated and the robot can now be used for real-time control of the image quality by adjusting the transducer position, and contact pressure. The framework was evaluated looking at overall real-time capability for communication and processing of three different standard commands. Results: Due to inherent, certified safety modules, the new robot ensures a safe environment for patients during tumor tracking. Furthermore, the developed framework shows overall real-time capability with a maximum average latency of 3.6 ms (Minimum 2.5 ms; 5000 trials). Conclusion: The novel KUKA LBR iiwa robot will advance the current robotized ultrasound tracking system with important features. With the developed framework, it is now possible to remotely control this robot and use it for robotized ultrasound tracking
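The framework's latency evaluation can be illustrated with a minimal loopback sketch (purely hypothetical: a local echo server stands in for the robot-side client module, and the command name is invented):

```python
import socket
import statistics
import threading
import time

def echo_server(listener):
    # Stand-in for the robot-side module: echo each command back.
    conn, _ = listener.accept()
    with conn:
        while data := conn.recv(64):
            conn.sendall(data)

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

client = socket.create_connection(listener.getsockname())
latencies_ms = []
for _ in range(1000):  # the abstract reports 5000 trials per command
    t0 = time.perf_counter()
    client.sendall(b"GET_POSE")  # invented command name
    client.recv(64)
    latencies_ms.append((time.perf_counter() - t0) * 1000.0)
client.close()
print(f"mean {statistics.mean(latencies_ms):.3f} ms, "
      f"max {max(latencies_ms):.3f} ms")
```

A real deployment would replace the echo server with the module running on the robot controller; the timing loop itself is unchanged.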

  18. SU-G-JeP3-08: Robotic System for Ultrasound Tracking in Radiation Therapy

    International Nuclear Information System (INIS)

    Kuhlemann, I; Jauer, P; Schweikard, A; Ernst, F

    2016-01-01

    Purpose: For safe and accurate real-time tracking of tumors for IGRT using 4D ultrasound, it is necessary to make use of novel, high-end force-sensitive lightweight robots designed for human-machine interaction. Such a robot will be integrated into an existing robotized ultrasound system for non-invasive 4D live tracking, using a newly developed real-time control and communication framework. Methods: The new KUKA LWR iiwa robot is used for robotized ultrasound real-time tumor tracking. Besides more precise probe contact pressure detection, this robot provides an additional 7th link, enhancing the dexterity of the kinematic and the mounted transducer. Several integrated, certified safety features create a safe environment for the patients during treatment. However, to remotely control the robot for the ultrasound application, a real-time control and communication framework has to be developed. Based on a client/server concept, client-side control commands are received and processed by a central server unit and are implemented by a client module running directly on the robot’s controller. Several special functionalities for robotized ultrasound applications are integrated and the robot can now be used for real-time control of the image quality by adjusting the transducer position, and contact pressure. The framework was evaluated looking at overall real-time capability for communication and processing of three different standard commands. Results: Due to inherent, certified safety modules, the new robot ensures a safe environment for patients during tumor tracking. Furthermore, the developed framework shows overall real-time capability with a maximum average latency of 3.6 ms (Minimum 2.5 ms; 5000 trials). Conclusion: The novel KUKA LBR iiwa robot will advance the current robotized ultrasound tracking system with important features. With the developed framework, it is now possible to remotely control this robot and use it for robotized ultrasound tracking

  19. Synchronized 2D/3D optical mapping for interactive exploration and real-time visualization of multi-function neurological images.

    Science.gov (United States)

    Zhang, Qi; Alexander, Murray; Ryner, Lawrence

    2013-01-01

    Efficient software with the ability to display multiple neurological image datasets simultaneously with full real-time interactivity is critical for brain disease diagnosis and image-guided planning. In this paper, we describe the creation and function of a new comprehensive software platform that integrates novel algorithms and functions for multiple medical image visualization, processing, and manipulation. We implement an opacity-adjustment algorithm to build 2D lookup tables for multiple slice image display and fusion, which achieves a better visual result than VTK-based methods. We also develop a new real-time 2D and 3D data synchronization scheme for multi-function MR volume and slice image optical mapping and rendering simultaneously through the same adjustment operation. All these methodologies are integrated into our software framework to provide users with an efficient tool for flexibly, intuitively, and rapidly exploring and analyzing the functional and anatomical MR neurological data. Finally, we validate our new techniques and software platform with visual analysis and task-specific user studies. Copyright © 2013 Elsevier Ltd. All rights reserved.
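A minimal sketch of the opacity-lookup-table idea for slice fusion (assumed form; the paper's actual LUT construction is not specified here):

```python
import numpy as np

def build_opacity_lut(n=256, gamma=1.0):
    # Map each of the n intensity levels to an opacity in [0, 1];
    # gamma acts as the user's single adjustment knob.
    return np.linspace(0.0, 1.0, n) ** gamma

def fuse_slices(anatomical, functional, lut):
    # Per-pixel alpha blend: the functional overlay's own intensity,
    # pushed through the LUT, decides how much it covers the anatomy.
    alpha = lut[functional]
    return (1.0 - alpha) * anatomical + alpha * functional

anat = np.full((4, 4), 100, dtype=np.uint8)
func = np.zeros((4, 4), dtype=np.uint8)
func[1:3, 1:3] = 255              # a bright functional "activation"
fused = fuse_slices(anat, func, build_opacity_lut())
```

Because the same `gamma` adjustment can drive both this 2D LUT and a 3D volume-rendering transfer function, one user operation can update slice and volume views together, which is the synchronization idea the abstract describes.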

  20. A real-time expert system for nuclear power plant failure diagnosis and operational guide

    International Nuclear Information System (INIS)

    Naito, N.; Sakuma, A.; Shigeno, K.; Mori, N.

    1987-01-01

    A real-time expert system (DIAREX) has been developed to diagnose plant failure and to offer a corrective operational guide for boiling water reactor (BWR) power plants. The failure diagnosis model used in DIAREX was systematically developed, based mainly on deep knowledge, to cover heuristics. Complex paradigms for knowledge representation were adopted, i.e., the process representation language and the failure propagation tree. The system is composed of a knowledge base, knowledge base editor, preprocessor, diagnosis processor, and display processor. The DIAREX simulation test has been carried out for many transient scenarios, including multiple failures, using a real-time full-scope simulator modeled after the 1100-MW(electric) BWR power plant. Test results showed that DIAREX was capable of diagnosing a plant failure quickly and of providing a corrective operational guide with a response time fast enough to offer valuable information to plant operators

  1. [Real time 3D echocardiography

    Science.gov (United States)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is a long-standing concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, whose spatial localization and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matrix emitter-receiver probe with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (planes B and C) which can be displaced in all spatial directions at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.

  2. Wetland Plant Guide for Assessing Habitat Impacts of Real-Time Salinity Management

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Nigel W.T.; Feldmann, Sara A.

    2004-10-15

    This wetland plant guide was developed to aid moist soil plant identification and to assist in the mapping of waterfowl and shorebird habitat in the Grassland Water District and surrounding wetland areas. The motivation for this habitat mapping project was a concern that real-time salinity management of wetland drainage might have long-term consequences for wildfowl habitat health--changes in wetland drawdown schedules might, over the long term, lead to increased soil salinity and other conditions unfavorable to propagation of the most desirable moist soil plants. Hence, the implementation of a program to monitor annual changes in the most common moist soil plants might serve as an index of habitat health and sustainability. Our review of the current scientific and popular literature failed to identify a good, comprehensive field guide that could be used to calibrate and verify high resolution remote sensing imagery, that we had started to use to develop maps of wetland moist soil plants in the Grassland Water District. Since completing the guide it has been used to conduct ground truthing field surveys using the California Native Plant Society methodology in 2004. Results of this survey and a previous wetland plant survey in 2003 are published in a companion LBNL publication summarizing 4 years of fieldwork to advance the science of real-time wetland salinity management.

  3. Acquisition performance of LAPAN-A3/IPB multispectral imager in real-time mode of operation

    Science.gov (United States)

    Hakim, P. R.; Permala, R.; Jayani, A. P. S.

    2018-05-01

    The LAPAN-A3/IPB satellite was launched in June 2016 and its multispectral imager has been producing images covering Indonesia. To improve its support for remote sensing applications, the imager should produce images of high quality and in high quantity. The quantity of captured LAPAN-A3/IPB multispectral imagery can be increased by executing image acquisition in real-time mode from the LAPAN ground station in Bogor whenever the satellite passes over western Indonesia. This research analyses the performance of LAPAN-A3/IPB multispectral image acquisition in real-time mode, in terms of image quality and quantity, under several assumed on-board and ground-segment limitations. Results show that in real-time operation mode the LAPAN-A3/IPB multispectral imager can produce twice the image coverage of recorded mode. However, images produced in real-time mode have slightly degraded quality due to the image compression involved. Based on the analyses carried out in this research, it is recommended to use real-time acquisition mode whenever possible, except in circumstances that do not permit any quality degradation of the produced images.

  4. Coalescence measurements for evolving foams monitored by real-time projection imaging

    International Nuclear Information System (INIS)

    Myagotin, A; Helfen, L; Baumbach, T

    2009-01-01

    Real-time radiographic projection imaging together with novel spatio-temporal image analysis is presented to be a powerful technique for the quantitative analysis of coalescence processes accompanying the generation and temporal evolution of foams and emulsions. Coalescence events can be identified as discontinuities in a spatio-temporal image representing a sequence of projection images. Detection, identification of intensity and localization of the discontinuities exploit a violation criterion of the Fourier shift theorem and are based on recursive spatio-temporal image partitioning. The proposed method is suited for automated measurements of discontinuity rates (i.e., discontinuity intensity per unit time), so that large series of radiographs can be analyzed without user intervention. The application potential is demonstrated by the quantification of coalescence during the formation and decay of metal foams monitored by real-time x-ray radiography
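The shift-theorem criterion can be illustrated with a toy 1-D sketch (an interpretation, not the authors' implementation): under pure drift the normalized cross-power spectrum of two frames has linear phase, so its inverse FFT is a sharp peak, whereas a coalescence-type discontinuity destroys that peak.

```python
import numpy as np

def shift_peak(a, b):
    # Height of the phase-correlation peak: ~1 when b is a pure
    # (circular) shift of a, much lower when the content changed.
    cross = np.fft.fft(a) * np.conj(np.fft.fft(b))
    cross /= np.abs(cross) + 1e-12     # normalized cross-power spectrum
    return np.abs(np.fft.ifft(cross)).max()

rng = np.random.default_rng(0)
frame = rng.standard_normal(256)
drifted = np.roll(frame, 17)            # shift theorem holds
replaced = rng.standard_normal(256)     # "discontinuity": theorem violated
print(shift_peak(frame, drifted), shift_peak(frame, replaced))
```

Applying such a test recursively to partitions of the spatio-temporal image, as the abstract describes, localizes where and when the discontinuities occur.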

  5. Human Exploration using Real-Time Robotic Operations (HERRO): A space exploration strategy for the 21st century

    Science.gov (United States)

    Schmidt, George R.; Landis, Geoffrey A.; Oleson, Steven R.

    2012-11-01

    This paper presents an exploration strategy for human missions beyond Low Earth Orbit (LEO) and the Moon that combines the best features of human and robotic spaceflight. This "Human Exploration using Real-time Robotic Operations" (HERRO) strategy refrains from placing humans on the surfaces of the Moon and Mars in the near-term. Rather, it focuses on sending piloted spacecraft and crews into orbit around Mars and other exploration targets of interest, and conducting astronaut exploration of the surfaces using telerobots and remotely-controlled systems. By eliminating the significant communications delay or "latency" with Earth due to the speed of light limit, teleoperation provides scientists real-time control of rovers and other sophisticated instruments. This in effect gives them a "virtual presence" on planetary surfaces, and thus expands the scientific return at these destinations. HERRO mitigates several of the major issues that have hindered the progress of human spaceflight beyond Low Earth Orbit (LEO) by: (1) broadening the range of destinations for near-term human missions; (2) reducing cost and risk through less complexity and fewer man-rated elements; (3) offering benefits of human-equivalent in-situ cognition, decision-making and field-work on planetary bodies; (4) providing a simpler approach to returning samples from Mars and planetary surfaces; and (5) facilitating opportunities for international collaboration through contribution of diverse robotic systems. HERRO provides a firm justification for human spaceflight—one that expands the near-term capabilities of scientific exploration while providing the space transportation infrastructure needed for eventual human landings in the future.

  6. Design, implementation and evaluation of an independent real-time safety layer for medical robotic systems using a force-torque-acceleration (FTA) sensor.

    Science.gov (United States)

    Richter, Lars; Bruder, Ralf

    2013-05-01

    Most medical robotic systems require direct interaction or contact with the robot. Force-Torque (FT) sensors can easily be mounted to the robot to control the contact pressure. However, evaluation is often done in software, which leads to latencies. To overcome that, we developed an independent safety system, named FTA sensor, which is based on an FT sensor and an accelerometer. An embedded system (ES) runs a real-time monitoring system for continuously checking of the readings. In case of a collision or error, it instantaneously stops the robot via the robot's external emergency stop. We found that the ES implementing the FTA sensor has a maximum latency of [Formula: see text] ms to trigger the robot's emergency stop. For the standard settings in the application of robotized transcranial magnetic stimulation, the robot will stop after at most 4 mm. Therefore, it works as an independent safety layer preventing patient and/or operator from serious harm.
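The monitoring logic amounts to a threshold watchdog; a hypothetical sketch (limits, sample format and function names are invented for illustration, not taken from the paper):

```python
FORCE_LIMIT_N = 25.0      # illustrative limits, not the paper's values
ACCEL_LIMIT_MS2 = 8.0

def watchdog(read_sample, trigger_estop):
    # Continuously poll the FTA sensor; on the first out-of-range
    # reading, fire the robot's external emergency stop and return.
    while True:
        force_n, accel_ms2 = read_sample()
        if force_n > FORCE_LIMIT_N or accel_ms2 > ACCEL_LIMIT_MS2:
            trigger_estop()
            return "stopped"

samples = iter([(5.0, 1.0), (6.1, 1.2), (30.0, 1.1)])  # third = collision
events = []
state = watchdog(lambda: next(samples), lambda: events.append("ESTOP"))
```

Running this loop on a dedicated embedded system, rather than in the application software, is what removes the software latencies the abstract mentions.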

  7. Operation and force analysis of the guide wire in a minimally invasive vascular interventional surgery robot system

    Science.gov (United States)

    Yang, Xue; Wang, Hongbo; Sun, Li; Yu, Hongnian

    2015-03-01

    Developing a robot system for minimally invasive surgery is significant; however, existing minimally invasive surgery robots are not yet applicable in practical operations because of their limited functionality and weak perception. A novel wire feeder is proposed for minimally invasive vascular interventional surgery. It assists surgeons in delivering a guide wire, balloon and stent to a specific lesion location. Contrasting with existing wire feeders, the motion methods for delivering and rotating the guide wire in the blood vessel are described, and their mechanical realization is presented. A new resistance-force detection method is given in detail: a change in the resistance force can help the operator feel a block or embolism in front of the guide wire. The driving torque for rotating the guide wire is derived at different positions. Using the CT reconstruction image and extracted vessel paths, the path equation of the blood vessel is obtained; combined with the shape of the guide wire outside the blood vessel, the full bending equation of the guide wire is obtained, which serves as a risk criterion during the delivery process. This makes operations safer and man-machine interaction more reliable. A novel surgical robot for feeding the guide wire is designed, and a risk criterion for the system is given.

  8. MonoSLAM: real-time single camera SLAM.

    Science.gov (United States)

    Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier

    2007-06-01

    We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
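The "general motion model for smooth camera movement" is, in spirit, a constant-velocity model inside an EKF; a simplified prediction-step sketch follows (position and velocity only, omitting the orientation and map-feature states of the real system):

```python
import numpy as np

def ekf_predict(x, P, dt, sigma_a=1.0):
    # x = [px, py, pz, vx, vy, vz]; constant-velocity transition,
    # with unknown acceleration entering as process noise.
    F = np.eye(6)
    F[:3, 3:] = dt * np.eye(3)
    G = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])
    Q = sigma_a**2 * (G @ G.T)
    return F @ x, F @ P @ F.T + Q

x = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0])  # moving along +x at 1 m/s
P = 0.01 * np.eye(6)
x1, P1 = ekf_predict(x, P, dt=1 / 30)          # one 30 Hz frame
```

Keeping the map sparse is what lets the full filter, with feature states appended to `x` and correlated through `P`, run at the 30 Hz frame rate the abstract cites.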

  9. Light-driven micro-robotics with holographic 3D tracking

    DEFF Research Database (Denmark)

    Glückstad, Jesper

    2016-01-01

    We recently pioneered the concept of light-driven micro-robotics, including the new and disruptive 3D-printed micro-tools coined Wave-guided Optical Waveguides that can be optically trapped in real time and “remote-controlled” in a volume with six degrees of freedom. Exploring the full potential...... of this new drone-like 3D light robotics approach in challenging microscopic geometries requires a versatile and real-time reconfigurable light coupling that can dynamically track a plurality of “light robots” in 3D to ensure continuous optimal light coupling on the fly. Our latest developments in this new...

  10. Fiber-based real-time color digital in-line holography.

    Science.gov (United States)

    Kowalczyk, Adam; Bieda, Marcin; Makowski, Michal; Sypek, Maciej; Kolodziejczyk, Andrzej

    2013-07-01

    An extremely simple setup for real-time color digital holography using single-mode fibers as light guides and a directional coupler as a beam-splitting device is presented. With the directional coupler we have two object beams and one residual crosstalk beam used as a reference beam. This facilitates alignment and improves robustness. With the use of graphics processing units, real-time hologram reconstruction was possible. Due to adaptation of the optical setup and scaling, the influence of the zero-order and conjugate images is greatly reduced.
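Numerical reconstruction of such in-line holograms can be done, for instance, with the angular spectrum method (one common choice; the paper does not name its algorithm, so this is an illustrative sketch):

```python
import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    # Propagate a sampled complex field a distance z by filtering its
    # plane-wave (angular) spectrum with the free-space transfer function.
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(2j * np.pi * (z / wavelength) * np.sqrt(np.maximum(arg, 0.0)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(0)
holo = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
at_z = angular_spectrum(holo, wavelength=633e-9, dx=5e-6, z=1e-3)
back = angular_spectrum(at_z, wavelength=633e-9, dx=5e-6, z=-1e-3)
```

The two FFTs per color channel are exactly the kind of workload that maps well onto the GPUs the abstract credits for real-time reconstruction.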

  11. AAPM and GEC-ESTRO guidelines for image-guided robotic brachytherapy: Report of Task Group 192

    Energy Technology Data Exchange (ETDEWEB)

    Podder, Tarun K., E-mail: tarun.podder@uhhospitals.org [Department of Radiation Oncology, University Hospitals, Case Western Reserve University, Cleveland, Ohio 44122 (United States); Beaulieu, Luc [Department of Radiation Oncology, Centre Hospitalier Univ de Quebec, Quebec G1R 2J6 (Canada); Caldwell, Barrett [Schools of Industrial Engineering and Aeronautics and Astronautics, Purdue University, West Lafayette, Indiana 47907 (United States); Cormack, Robert A. [Department of Radiation Oncology, Harvard Medical School, Boston, Massachusetts 02115 (United States); Crass, Jostin B. [Department of Radiation Oncology, Vanderbilt University, Nashville, Tennessee 37232 (United States); Dicker, Adam P.; Yu, Yan [Department of Radiation Oncology, Thomas Jefferson University, Philadelphia, Pennsylvania 19107 (United States); Fenster, Aaron [Department of Imaging Research, Robarts Research Institute, London, Ontario N6A 5K8 (Canada); Fichtinger, Gabor [School of Computer Science, Queen’s University, Kingston, Ontario K7L 3N6 (Canada); Meltsner, Michael A. [Philips Radiation Oncology Systems, Fitchburg, Wisconsin 53711 (United States); Moerland, Marinus A. [Department of Radiotherapy, University Medical Center Utrecht, Utrecht, 3508 GA (Netherlands); Nath, Ravinder [Department of Therapeutic Radiology, Yale University School of Medicine, New Haven, Connecticut 06520 (United States); Rivard, Mark J. [Department of Radiation Oncology, Tufts University School of Medicine, Boston, Massachusetts 02111 (United States); Salcudean, Tim [Department of Electrical and Computer Engineering, University of British Columbia, Vancouver, British Columbia V6T 1Z4 (Canada); Song, Danny Y. [Department of Radiation Oncology, Johns Hopkins University School of Medicine, Baltimore, Maryland 21231 (United States); Thomadsen, Bruce R. [Department of Medical Physics, University of Wisconsin, Madison, Wisconsin 53705 (United States)

    2014-10-15

    In the last decade, there have been significant developments into integration of robots and automation tools with brachytherapy delivery systems. These systems aim to improve the current paradigm by executing higher precision and accuracy in seed placement, improving calculation of optimal seed locations, minimizing surgical trauma, and reducing radiation exposure to medical staff. Most of the applications of this technology have been in the implantation of seeds in patients with early-stage prostate cancer. Nevertheless, the techniques apply to any clinical site where interstitial brachytherapy is appropriate. In consideration of the rapid developments in this area, the American Association of Physicists in Medicine (AAPM) commissioned Task Group 192 to review the state-of-the-art in the field of robotic interstitial brachytherapy. This is a joint Task Group with the Groupe Européen de Curiethérapie-European Society for Radiotherapy and Oncology (GEC-ESTRO). All developed and reported robotic brachytherapy systems were reviewed. Commissioning and quality assurance procedures for the safe and consistent use of these systems are also provided. Manual seed placement techniques with a rigid template have an estimated in vivo accuracy of 3–6 mm. In addition to the placement accuracy, factors such as tissue deformation, needle deviation, and edema may result in a delivered dose distribution that differs from the preimplant or intraoperative plan. However, real-time needle tracking and seed identification for dynamic updating of dosimetry may improve the quality of seed implantation. The AAPM and GEC-ESTRO recommend that robotic systems should demonstrate a spatial accuracy of seed placement ≤1.0 mm in a phantom. This recommendation is based on the current performance of existing robotic brachytherapy systems and propagation of uncertainties. During clinical commissioning, tests should be conducted to ensure that this level of accuracy is achieved. These tests
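A phantom commissioning check of the ≤1.0 mm recommendation could be scripted along these lines (a sketch with made-up coordinates; TG-192 specifies the tests themselves, not this code):

```python
import numpy as np

def placement_errors_mm(planned, delivered):
    # Euclidean distance between each planned and delivered seed, in mm.
    diff = np.asarray(delivered, float) - np.asarray(planned, float)
    return np.linalg.norm(diff, axis=1)

planned = [[0.0, 0.0, 0.0], [10.0, 0.0, 5.0], [10.0, 10.0, 5.0]]
delivered = [[0.3, 0.1, -0.2], [10.2, 0.4, 5.1], [9.8, 10.3, 5.2]]
errors = placement_errors_mm(planned, delivered)
print("max error %.2f mm -> %s" %
      (errors.max(), "PASS" if errors.max() <= 1.0 else "FAIL"))
```

In vivo, the same comparison would be made against intraoperatively imaged seed positions, where tissue deformation and edema add to the error budget as the report notes.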

  12. AAPM and GEC-ESTRO guidelines for image-guided robotic brachytherapy: Report of Task Group 192

    International Nuclear Information System (INIS)

    Podder, Tarun K.; Beaulieu, Luc; Caldwell, Barrett; Cormack, Robert A.; Crass, Jostin B.; Dicker, Adam P.; Yu, Yan; Fenster, Aaron; Fichtinger, Gabor; Meltsner, Michael A.; Moerland, Marinus A.; Nath, Ravinder; Rivard, Mark J.; Salcudean, Tim; Song, Danny Y.; Thomadsen, Bruce R.

    2014-01-01

    In the last decade, there have been significant developments into integration of robots and automation tools with brachytherapy delivery systems. These systems aim to improve the current paradigm by executing higher precision and accuracy in seed placement, improving calculation of optimal seed locations, minimizing surgical trauma, and reducing radiation exposure to medical staff. Most of the applications of this technology have been in the implantation of seeds in patients with early-stage prostate cancer. Nevertheless, the techniques apply to any clinical site where interstitial brachytherapy is appropriate. In consideration of the rapid developments in this area, the American Association of Physicists in Medicine (AAPM) commissioned Task Group 192 to review the state-of-the-art in the field of robotic interstitial brachytherapy. This is a joint Task Group with the Groupe Européen de Curiethérapie-European Society for Radiotherapy and Oncology (GEC-ESTRO). All developed and reported robotic brachytherapy systems were reviewed. Commissioning and quality assurance procedures for the safe and consistent use of these systems are also provided. Manual seed placement techniques with a rigid template have an estimated in vivo accuracy of 3–6 mm. In addition to the placement accuracy, factors such as tissue deformation, needle deviation, and edema may result in a delivered dose distribution that differs from the preimplant or intraoperative plan. However, real-time needle tracking and seed identification for dynamic updating of dosimetry may improve the quality of seed implantation. The AAPM and GEC-ESTRO recommend that robotic systems should demonstrate a spatial accuracy of seed placement ≤1.0 mm in a phantom. This recommendation is based on the current performance of existing robotic brachytherapy systems and propagation of uncertainties. During clinical commissioning, tests should be conducted to ensure that this level of accuracy is achieved. These tests

  13. New real-time image processing system for IRFPA

    Institute of Scientific and Technical Information of China (English)

    WANG Bing-jian; LIU Shang-qian; CHENG Yu-bao

    2006-01-01

    Influenced by detector material, manufacturing technology, etc., every detector in an infrared focal plane array (IRFPA) outputs a different voltage even when the input radiation flux is the same; this is called the non-uniformity of the IRFPA. At the same time, a high background temperature, a low temperature difference between targets and background, and the low responsivity of the IRFPA result in low-contrast infrared images. Non-uniformity correction and image enhancement are therefore important techniques for an IRFPA imaging system. This paper proposes a new real-time infrared image processing system based on a Field Programmable Gate Array (FPGA). The system implements non-uniformity correction, image enhancement, video synthesis, etc. By using a parallel architecture and pipelining, the system's processing speed is as high as 50 M x 12 bits per second, making it well suited to large, high-frame-rate IRFPA imaging systems. The system is miniaturized into a single FPGA.
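The abstract does not name the correction algorithm; the classical choice is two-point (gain/offset) non-uniformity correction, sketched below under that assumption:

```python
import numpy as np

def calibrate_two_point(frame_lo, frame_hi, level_lo, level_hi):
    # Per-detector gain and offset from two uniform blackbody frames.
    gain = (level_hi - level_lo) / (frame_hi - frame_lo)
    offset = level_lo - gain * frame_lo
    return gain, offset

def correct(raw, gain, offset):
    return gain * raw + offset

# Simulated array whose detectors have scattered gains and offsets.
rng = np.random.default_rng(1)
resp = rng.uniform(0.8, 1.2, (8, 8))
bias = rng.uniform(-20.0, 20.0, (8, 8))
raw = lambda level: resp * level + bias   # sensor model

g, o = calibrate_two_point(raw(100.0), raw(200.0), 100.0, 200.0)
fixed = correct(raw(150.0), g, o)          # should be uniform at 150
```

The per-pixel multiply-add in `correct` is trivially parallelizable, which is why this stage fits naturally into an FPGA pipeline.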

  14. Real time 2 dimensional detector for charged particle and soft X-ray images

    International Nuclear Information System (INIS)

    Ishikawa, M.; Ito, M.; Endo, T.; Oba, K.

    1995-01-01

    The conventional recording media used in soft X-ray experiments such as X-ray diffraction analysis are X-ray films or imaging plates. However, these are not suitable for real-time observation. In this paper, newly developed imaging devices are presented which are capable of taking X-ray images in real time with a high detection efficiency. Their further capability of taking elementary-particle tracking images is also described. (orig.)

  15. Real-time histology in liver disease using multiphoton microscopy with fluorescence lifetime imaging

    OpenAIRE

    Wang, Haolu; Liang, Xiaowen; Mohammed, Yousuf H.; Thomas, James A.; Bridle, Kim R.; Thorling, Camilla A.; Grice, Jeffrey E.; Xu, Zhi Ping; Liu, Xin; Crawford, Darrell H. G.; Roberts, Michael S.

    2015-01-01

    Conventional histology with light microscopy is essential in the diagnosis of most liver diseases. Recently, a concept of real-time histology with optical biopsy has been advocated. In this study, the livers of live mice (normal, and with fibrosis, steatosis, hepatocellular carcinoma and ischemia-reperfusion injury) were imaged by multiphoton microscopy with fluorescence lifetime imaging (MPM-FLIM) for stain-free real-time histology. The acquired MPM-FLIM images were compared with conventional histological images. MPM-FLIM imaged subsurface cellular and subcellu...

  16. Store-Carry and Forward-Type M2M Communication Protocol Enabling Guide Robots to Work together and the Method of Identifying Malfunctioning Robots Using the Byzantine Algorithm

    Directory of Open Access Journals (Sweden)

    Yoshio Suga

    2016-11-01

    This paper concerns a service in which multiple guide robots in an area display arrows to guide individual users to their destinations. It proposes a method of identifying malfunctioning robots and robots that give wrong directions to users. In this method, users’ mobile terminals and robots form a store-carry and forward-type M2M communication network, and a distributed cooperative protocol is used to enable robots to share information and identify malfunctioning robots using the Byzantine algorithm. The robots do not directly communicate with each other, but through users’ mobile terminals. We have introduced the concept of the quasi-synchronous number, so whether a certain robot is malfunctioning can be determined even when items of information held by all of the robots are not synchronized. Using simulation, we have evaluated the proposed method in terms of the rate of identifying malfunctioning robots, the rate of reaching the destination and the average length of time to reach the destination.
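    The majority-vote core of such a fault-identification scheme can be sketched as follows. This is a drastically simplified stand-in for the paper's protocol (no quasi-synchronous numbers, no relaying model); the report format is my assumption.

```python
from collections import Counter

def identify_faulty(reports, n_robots):
    """Flag robot j as malfunctioning when most relayed reports about it
    say its displayed direction was wrong.

    reports: iterable of (robot_id, is_correct) observations carried by
    users' mobile terminals (store-carry-forward). The classical
    Byzantine bound tolerates f faulty robots only when n > 3f.
    """
    tally = {j: Counter() for j in range(n_robots)}
    for robot_id, is_correct in reports:
        tally[robot_id][is_correct] += 1
    return sorted(j for j, c in tally.items() if c[False] > c[True])

reports = [(0, True), (0, True), (1, False), (1, False), (1, True),
           (2, True), (0, True)]
print(identify_faulty(reports, 3))  # [1]
```

    In the paper the "is_correct" judgment is itself derived from cross-checking information shared between robots via the terminals, which is where the Byzantine agreement machinery actually does its work.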

  17. Image-guided surgery and therapy: current status and future directions

    Science.gov (United States)

    Peters, Terence M.

    2001-05-01

    Image-guided surgery and therapy is assuming an increasingly important role, particularly considering the current emphasis on minimally-invasive surgical procedures. Volumetric CT and MR images have been used now for some time in conjunction with stereotactic frames, to guide many neurosurgical procedures. With the development of systems that permit surgical instruments to be tracked in space, image-guided surgery now includes the use of frame-less procedures, and the application of the technology has spread beyond neurosurgery to include orthopedic applications and therapy of various soft-tissue organs such as the breast, prostate and heart. Since tracking systems allow image-guided surgery to be undertaken without frames, a great deal of effort has been spent on image-to-image and image-to-patient registration techniques, and upon the means of combining real-time intra-operative images with images acquired pre-operatively. As image-guided surgery systems have become increasingly sophisticated, the greatest challenges to their successful adoption in the operating room of the future relate to the interface between the user and the system. To date, little effort has been expended to ensure that the human factors issues relating to the use of such equipment in the operating room have been adequately addressed. Such systems will only be employed routinely in the OR when they are designed to be intuitive, unobtrusive, and provide simple access to the source of the images.
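    Image-to-patient registration of the kind described here is commonly solved with point-based rigid registration over corresponding fiducials. A minimal sketch using the SVD method of Arun et al. is given below; this is an assumed standard technique, not a method taken from this particular paper.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) with dst ~ R @ src + t,
    via the SVD method of Arun et al.; src/dst are (N, 3) fiducials."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic check: recover a known rotation and translation.
rng = np.random.default_rng(0)
src = rng.standard_normal((6, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
dst = src @ R_true.T + t_true
R, t = rigid_register(src, dst)
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

    In practice the source points are fiducials localized in the pre-operative image and the destination points are the same fiducials touched with a tracked pointer, and the residual after registration (fiducial registration error) is reported to the surgeon.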

  18. Feasibility of MR-guided angioplasty of femoral artery stenoses using real-time imaging and intraarterial contrast-enhanced MR angiography

    International Nuclear Information System (INIS)

    Paetzel, C.; Zorger, N.; Bachthaler, M.; Voelk, M.; Seitz, J.; Herold, T.; Feuerbach, S.; Lenhart, M.; Nitz, W.R.

    2004-01-01

    Purpose: To show the feasibility of magnetic resonance (MR)-guided interventional therapy of femoral and popliteal artery stenoses with commercially available materials, supported by MR real-time imaging and intraarterial MR angiography. Materials and Methods: Three patients (1 female, 2 male), suffering from symptomatic arterial occlusive disease with stenoses of the femoral (n=2) or popliteal (n=1) arteries, were included. Intraarterial digital subtraction angiography was performed in each patient pre- and post-interventionally as standard of reference to quantify stenoses. The degree of stenosis ranged from 71% to 88%. The MR images were acquired on a 1.5 T MR scanner (Magnetom Sonata; Siemens, Erlangen, Germany). For MR angiography, a Flash 3D sequence was utilized following injection of 5 mL diluted gadodiamide (Omniscan; Amersham Buchler, Braunschweig, Germany) via the arterial access. Two maximum intensity projections (MIP) were used as road maps and localizer for the interactive positioning of a continuously running 2D-FLASH sequence with a temporal resolution of 2 images per second. During the intervention, an MR compatible monitor provided the image display inside the scanner room. Safety guidelines were followed during imaging in the presence of a conductive guidewire. The lesion was crossed by a commercially available balloon catheter (Wanda, Boston Scientific; Ratingen, Germany), which was mounted on a 0.035'' guidewire (Terumo; Leuven, Belgium). The visibility was provided by radiopaque markers embedded in the balloon and was improved by injection of 1 mL gadodiamide into the balloon. After dilation, the result was checked by intraarterial MR angiography and catheter angiography. Results: The stenoses could be correctly localized by intraarterial MR angiography. There was complete correlation between intraarterial MR angiography and digital subtraction angiography. The combination of guidewire and balloon was visible and the balloon was placed

  19. The development of remote controlled linear guide and mast vertical guide of repair robot for RV head CRDM nozzle region in NPP

    International Nuclear Information System (INIS)

    Kim, Seung Ho; Seo, Yong Chil; Shin, Ho Cheol; Lee, Sung Uk; Jung, Kyung Min

    2006-11-01

    The reactor vessel (RV), a core component of an NPP, must maintain its integrity in a high-temperature, high-pressure and high-radiation environment. It must therefore be inspected periodically, and repaired if a defect is found. A remote-controlled linear guide and a vertical guide were developed for a welding repair robot for the RV head CRDM nozzle region. During inspection and maintenance, the RV head is placed on the RV head storage, a double-circled concrete structure. The linear guide provides linear motion to the repair robot, locating the robot under the RV head, and also provides strong support so that the robot does not overturn while repairing the RV head. Since the robot needs to be lifted about 2 m to reach the CRDM nozzle, a vertical guide was also developed. For easy transport, the linear guide is designed in 4 parts and the vertical guide in 3 parts. A control system, composed of a connector box, cables, a control box, a computer and a control program, was developed to remotely control the guide system, together with a monitoring system to monitor its operation.

  1. Thermal Imaging Systems for Real-Time Applications in Smart Cities

    DEFF Research Database (Denmark)

    Gade, Rikke; Moeslund, Thomas B.; Nielsen, Søren Zebitz

    2016-01-01

    This work investigates the potential of thermal imaging in real-time Smart City applications. Thermal cameras operate independently of light and measure the radiated infrared waves representing the temperature of the scene. In order to showcase the possibilities, we present five different applications which use thermal imaging only...

  2. A real time S-parameter imaging system

    International Nuclear Information System (INIS)

    Naik, P.S.; Cheung, C.K.; Beling, C.D.; Fung, S.

    2005-01-01

    Obtaining a lateral S-parameter image scan from positrons implanted into semiconductor devices can be a helpful research tool both for localizing device structures and in diagnosing defect patterns that could help interpret function. S-parameter images can be obtained by electromagnetically rastering a variable energy positron beam of small spot size across the sample. Here we describe a general hardware and software architecture of relatively low cost that has recently been developed in our laboratory which allows the whole sub-surface S-parameter image of a sample or device to be obtained in real time. This system has the advantage over more conventional sequential scanning techniques of allowing the operator to terminate data collection once the quality of the image is deemed sufficient. As an example of the usefulness of this type of imaging architecture, S-parameter images of a representative sample are presented at two different positron implantation energies. (author)

  3. Remote-controlled vision-guided mobile robot system

    Science.gov (United States)

    Ande, Raymond; Samu, Tayib; Hall, Ernest L.

    1997-09-01

    Automated guided vehicles (AGVs) have many potential applications in manufacturing, medicine, space and defense. The purpose of this paper is to describe exploratory research on the design of the remote-controlled emergency stop and vision systems for an autonomous mobile robot. The remote control provides human supervision and emergency stop capabilities for the autonomous vehicle, while vision guidance provides automatic operation. A mobile robot test-bed has been constructed using a golf cart base. The mobile robot (Bearcat) was built for the Association for Unmanned Vehicle Systems (AUVS) 1997 competition. The mobile robot has full speed control, with guidance provided by a vision system and an obstacle avoidance system using ultrasonic sensors. Vision guidance is accomplished using two CCD cameras with zoom lenses. The vision data is processed by a high-speed tracking device, which communicates the X, Y coordinates of blobs along the lane markers to the computer. The system also has three emergency stop switches and a remote-controlled emergency stop switch that can disable the traction motor and set the brake. Testing of these systems has been done in the lab as well as on an outside test track, with positive results showing that at five mph the vehicle can follow a line and at the same time avoid obstacles.
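    A proportional steering law of the kind such a vision-guided vehicle might apply, given the blob X coordinate reported by the tracking device, could look like the sketch below; the gain and clipping values are illustrative assumptions, not parameters from the Bearcat.

```python
def steering_command(blob_x, image_width, k_p=0.01, max_cmd=1.0):
    """Proportional steering from the tracked lane-marker blob: positive
    output steers right, negative steers left, clipped to +/- max_cmd."""
    error = blob_x - image_width / 2.0   # pixel offset from image center
    cmd = k_p * error
    return max(-max_cmd, min(max_cmd, cmd))

print(steering_command(400, 640))  # blob right of center -> steer right
```

    A real controller would also fuse the two cameras and veto the command when the ultrasonic obstacle-avoidance system reports a blocked path.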

  4. Markerless Kinect-Based Hand Tracking for Robot Teleoperation

    Directory of Open Access Journals (Sweden)

    Guanglong Du

    2012-07-01

    This paper presents a real-time remote robot teleoperation method using markerless Kinect-based hand tracking. Using this tracking algorithm, the positions of index finger and thumb in 3D can be estimated by processing depth images from Kinect. The hand pose is used as a model to specify the pose of a real-time remote robot's end-effector. This method provides a way to send a whole task to a remote robot instead of sending limited motion commands like gesture-based approaches and this method has been tested in pick-and-place tasks.
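    One plausible mapping from the tracked index-finger and thumb positions to an end-effector command is sketched below, assuming positions in metres in the Kinect frame; the specific mapping (midpoint as target, fingertip separation as gripper opening) is my assumption, not necessarily the paper's.

```python
import math

def hand_to_end_effector(index_tip, thumb_tip, scale=1.0):
    """Map tracked index/thumb 3D positions to a target end-effector
    position and a gripper opening command.

    Target = midpoint of the fingertips scaled into the robot workspace;
    opening = fingertip separation (illustrative choices).
    """
    mid = tuple(scale * (i + t) / 2.0 for i, t in zip(index_tip, thumb_tip))
    opening = math.dist(index_tip, thumb_tip)
    return mid, opening

pose, opening = hand_to_end_effector((0.1, 0.2, 0.6), (0.1, 0.14, 0.6))
print(pose, round(opening, 3))
```

    Sending the full pose stream rather than discrete gestures is what lets the operator specify a whole pick-and-place task, as the abstract describes.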

  5. Ultrasound contrast agent imaging: Real-time imaging of the superharmonics

    Energy Technology Data Exchange (ETDEWEB)

    Peruzzini, D.; Viti, J. [MSD lab, Department of Information Engineering, Univ of Florence, Via S.Marta, 3, 50139 Firenze (Italy); Erasmus MC, ’s-Gravendijkwal 230, Faculty Building, Ee 2302, 3015 CE Rotterdam (Netherlands); Tortoli, P. [MSD lab, Department of Information Engineering, Univ of Florence, Via S.Marta, 3, 50139 Firenze (Italy); Verweij, M. D. [Acoustical Wavefield Imaging, ImPhys, Delft Univ Technology, van der Waalsweg 8, 2628 CH Delft (Netherlands); Jong, N. de; Vos, H. J., E-mail: h.vos@erasmusmc.nl [Erasmus MC, ’s-Gravendijkwal 230, Faculty Building, Ee 2302, 3015 CE Rotterdam (Netherlands); Acoustical Wavefield Imaging, ImPhys, Delft Univ Technology, van der Waalsweg 8, 2628 CH Delft (Netherlands)

    2015-10-28

    Currently, in medical ultrasound contrast agent (UCA) imaging the second harmonic scattering of the microbubbles is regularly used. This scattering is in competition with the signal that is caused by nonlinear wave propagation in tissue. It was reported that UCA imaging based on the third or higher harmonics, i.e. “superharmonic” imaging, shows better contrast. However, the superharmonic scattering has a lower signal level compared to e.g. second harmonic signals. This study investigates the contrast-to-tissue ratio (CTR) and signal to noise ratio (SNR) of superharmonic UCA scattering in a tissue/vessel mimicking phantom using a real-time clinical scanner. Numerical simulations were performed to estimate the level of harmonics generated by the microbubbles. Data were acquired with a custom built dual-frequency cardiac phased array probe. Fundamental real-time images were produced while beamformed radiofrequency (RF) data was stored for further offline processing. The phantom consisted of a cavity filled with UCA surrounded by tissue mimicking material. The acoustic pressure in the cavity of the phantom was 110 kPa (MI = 0.11) ensuring non-destructivity of UCA. After processing of the acquired data from the phantom, the UCA-filled cavity could be clearly observed in the images, while tissue signals were suppressed at or below the noise floor. The measured CTR values were 36 dB, >38 dB, and >32 dB, for the second, third, and fourth harmonic respectively, which were in agreement with those reported earlier for preliminary contrast superharmonic imaging. The single frame SNR values (in which ‘signal’ denotes the signal level from the UCA area) were 23 dB, 18 dB, and 11 dB, respectively. This indicates that noise, and not the tissue signal, is the limiting factor for the UCA detection when using the superharmonics in nondestructive mode.

  6. Real-time shadows

    CERN Document Server

    Eisemann, Elmar; Assarsson, Ulf; Wimmer, Michael

    2011-01-01

    Important elements of games, movies, and other computer-generated content, shadows are crucial for enhancing realism and providing important visual cues. In recent years, there have been notable improvements in visual quality and speed, making high-quality realistic real-time shadows a reachable goal. Real-Time Shadows is a comprehensive guide to the theory and practice of real-time shadow techniques. It covers a large variety of different effects, including hard, soft, volumetric, and semi-transparent shadows. The book explains the basics as well as many advanced aspects related to the domain.

  7. SU-E-T-453: A Novel Daily QA System for Robotic Image Guided Radiosurgery with Variable Aperture Collimator

    International Nuclear Information System (INIS)

    Wang, L; Nelson, B

    2014-01-01

    Purpose: A novel end-to-end system using a CCD camera and a scintillator based phantom that is capable of measuring the beam-by-beam delivery accuracy of Robotic Radiosurgery has been developed and reported in our previous work. This work investigates its application to end-to-end type daily QA for Robotic Radiosurgery (Cyberknife) with Variable Aperture Collimator (Iris). Methods: The phantom was first scanned with a CT scanner at 0.625 slice thickness and exported to the Cyberknife Multiplan (v4.6) treatment planning system. An isocentric treatment plan was created consisting of ten beams of 25 Monitor Units each using Iris apertures of 7.5, 10, 15, 20, and 25 mm. The plan was delivered six times in two days on the Cyberknife G4 system with fiducial tracking on the four metal fiducials embedded in the phantom, with re-positioning between the measurements. The beam vectors (X, Y, Z) are measured and compared with the plan from the machine delivery file (XML file). The Iris apertures (FWHM) were measured from the beam flux map and compared with the commissioning data. Results: The average beam positioning accuracies of the six deliveries are 0.71 ± 0.40 mm, 0.72 ± 0.44 mm, 0.74 ± 0.42 mm, 0.70 ± 0.40 mm, 0.79 ± 0.44 mm and 0.69 ± 0.41 mm respectively. Radiation beam width (FWHM) variations are within ±0.05 mm, and they agree with the commissioning data within 0.22 mm. The delivery time for the plan is about 7 minutes and the results are given instantly. Conclusion: The experimental results agree with the stated sub-millimeter delivery accuracy of the Cyberknife system. Beam FWHM variations comply with the 0.2 mm accuracy of the Iris collimator at SAD. The XRV-100 system has proven to be a powerful tool in performing end-to-end type tests for Robotic Image Guided Radiosurgery daily QA.

  8. Image-guided focused ultrasound ablation of breast cancer: current status, challenges, and future directions

    NARCIS (Netherlands)

    Schmitz, A.C.; Gianfelice, D.; Daniel, B.L.; Mali, W.P.T.M.; Bosch, M.A.A.J. van den

    2008-01-01

    Image-guided focused ultrasound (FUS) ablation is a noninvasive procedure that has been used for treatment of benign or malignant breast tumours. Image guidance during ablation is achieved either by using real-time ultrasound (US) or magnetic resonance imaging (MRI). The past decade phase I

  9. A Haptic Guided Robotic System for Endoscope Positioning and Holding.

    Science.gov (United States)

    Cabuk, Burak; Ceylan, Savas; Anik, Ihsan; Tugasaygi, Mehtap; Kizir, Selcuk

    2015-01-01

    To determine the feasibility, advantages, and disadvantages of using a robot for holding and maneuvering the endoscope in transnasal transsphenoidal surgery. The system used in this study was a Stewart Platform based robotic system that was developed by the Kocaeli University Department of Mechatronics Engineering for positioning and holding of the endoscope. After the first use on an artificial head model, the system was used on six fresh postmortem bodies that were provided by the Morgue Specialization Department of the Forensic Medicine Institute (Istanbul, Turkey). The setup required for the robotic system was easy; the registration procedure and robot setup take 15 minutes. Resistance was felt on the haptic arm in case of contact or friction with adjacent tissues. The adaptation process was shorter when the mouse was used to manipulate the endoscope. The endoscopic transsphenoidal approach was achieved with the robotic system, and the endoscope was guided to the sphenoid ostium with the help of the robotic arm. This robotic system can be used in endoscopic transsphenoidal surgery as an endoscope positioner and holder. The robot is able to change position easily with the help of an assistant, prevents tremor, and provides a better field of vision for work.

  10. Real-time prediction of hand trajectory by ensembles of cortical neurons in primates

    Science.gov (United States)

    Wessberg, Johan; Stambaugh, Christopher R.; Kralik, Jerald D.; Beck, Pamela D.; Laubach, Mark; Chapin, John K.; Kim, Jung; Biggs, S. James; Srinivasan, Mandayam A.; Nicolelis, Miguel A. L.

    2000-11-01

    Signals derived from the rat motor cortex can be used for controlling one-dimensional movements of a robot arm. It remains unknown, however, whether real-time processing of cortical signals can be employed to reproduce, in a robotic device, the kind of complex arm movements used by primates to reach objects in space. Here we recorded the simultaneous activity of large populations of neurons, distributed in the premotor, primary motor and posterior parietal cortical areas, as non-human primates performed two distinct motor tasks. Accurate real-time predictions of one- and three-dimensional arm movement trajectories were obtained by applying both linear and nonlinear algorithms to cortical neuronal ensemble activity recorded from each animal. In addition, cortically derived signals were successfully used for real-time control of robotic devices, both locally and through the Internet. These results suggest that long-term control of complex prosthetic robot arm movements can be achieved by simple real-time transformations of neuronal population signals derived from multiple cortical areas in primates.
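    A minimal version of the linear decoding described above fits hand position as a linear readout of ensemble firing rates by least squares. The sketch below uses synthetic data; the actual system used lagged rate vectors, nonlinear models as well, and ran the transformation in real time.

```python
import numpy as np

def fit_linear_decoder(rates, hand_pos):
    """Fit hand position as a linear readout of neuronal firing rates
    (plus a bias term), in the spirit of the paper's linear algorithm."""
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])
    W, *_ = np.linalg.lstsq(X, hand_pos, rcond=None)
    return W

def decode(rates, W):
    X = np.hstack([rates, np.ones((rates.shape[0], 1))])
    return X @ W

# Synthetic ensemble: 50 time bins, 8 neurons whose rates linearly
# encode a 2D trajectory (an assumption for the demo, not real data).
rng = np.random.default_rng(1)
rates = rng.random((50, 8))
W_true = rng.standard_normal((8, 2))
traj = rates @ W_true
W = fit_linear_decoder(rates, traj)
print(np.allclose(decode(rates, W), traj))
```

    In the closed-loop setting, `decode` would be applied to each incoming bin of spike counts and the output streamed to the robot controller, locally or over the Internet as the authors demonstrated.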

  11. Performance enhancement of various real-time image processing techniques via speculative execution

    Science.gov (United States)

    Younis, Mohamed F.; Sinha, Purnendu; Marlowe, Thomas J.; Stoyenko, Alexander D.

    1996-03-01

    In real-time image processing, an application must satisfy a set of timing constraints while ensuring the semantic correctness of the system. Because of the natural structure of digital data, pure data and task parallelism have been used extensively in real-time image processing to accelerate the handling time of image data. These types of parallelism are based on splitting the execution load performed by a single processor across multiple nodes. However, execution of all parallel threads is mandatory for correctness of the algorithm. On the other hand, speculative execution is an optimistic execution of part(s) of the program based on assumptions on program control flow or variable values. Rollback may be required if the assumptions turn out to be invalid. Speculative execution can enhance average, and sometimes worst-case, execution time. In this paper, we target various image processing techniques to investigate applicability of speculative execution. We identify opportunities for safe and profitable speculative execution in image compression, edge detection, morphological filters, and blob recognition.
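    The optimistic-execution-with-rollback idea can be sketched with a thread pool: start the branch the predictor favours before the predicate is resolved, keep its result if the guess was right, and discard it (rollback) otherwise. This is a simplified illustration of the concept, not the authors' framework.

```python
from concurrent.futures import ThreadPoolExecutor

def speculate(predicate, speculative_work, fallback_work):
    """Run the likely branch speculatively while the predicate is being
    evaluated; on a wrong guess, discard the speculative result and run
    the other branch."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        guess = pool.submit(speculative_work)  # optimistic start
        taken = predicate()                    # may overlap with the work
        if taken:
            return guess.result()
        guess.cancel()                         # rollback: drop the result
        return fallback_work()

result = speculate(lambda: True,
                   lambda: "edges-detected",
                   lambda: "smoothed")
print(result)
```

    The technique pays off when the predicate is expensive (e.g. a blob-classification test) and the guess is usually right, so the speculative branch's latency is hidden; a wrong guess costs only the discarded work.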

  12. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV-kV imaging.

    Science.gov (United States)

    Liu, W; Wiersma, R D; Mao, W; Luxton, G; Xing, L

    2008-12-21

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from approximately 0.5 mm for the normal adult breathing pattern to approximately 1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general
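    The MV-kV triangulation step reduces to intersecting two back-projected rays, one per imager, through the marker's projections. A least-squares closest-approach sketch follows; the geometry is an idealized stand-in for a calibrated LINAC, not the paper's calibration procedure.

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint of closest approach of two back-projected rays
    (source point p, direction d) from the MV and kV projections."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # Normal equations for ray parameters (s, t) minimizing
    # |p1 + s*d1 - (p2 + t*d2)|^2:
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(p2 - p1) @ d1, (p2 - p1) @ d2])
    s, t = np.linalg.solve(A, b)
    return (p1 + s * d1 + p2 + t * d2) / 2.0

# Orthogonal MV/kV geometry viewing a marker at (1, 2, 3):
marker = np.array([1.0, 2.0, 3.0])
src1, src2 = np.array([0.0, -100.0, 0.0]), np.array([-100.0, 0.0, 0.0])
est = triangulate(src1, marker - src1, src2, marker - src2)
print(np.allclose(est, marker))
```

    During arc delivery the gantry-mounted sources rotate, so the calibration described in the abstract must supply time-varying source and detector poses before this intersection can be evaluated for each frame pair.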

  14. Real-time Image Generation for Compressive Light Field Displays

    International Nuclear Information System (INIS)

    Wetzstein, G; Lanman, D; Hirsch, M; Raskar, R

    2013-01-01

    With the invention of integral imaging and parallax barriers in the beginning of the 20th century, glasses-free 3D displays have become feasible. Only today—more than a century later—glasses-free 3D displays are finally emerging in the consumer market. The technologies being employed in current-generation devices, however, are fundamentally the same as what was invented 100 years ago. With rapid advances in optical fabrication, digital processing power, and computational perception, a new generation of display technology is emerging: compressive displays exploring the co-design of optical elements and computational processing while taking particular characteristics of the human visual system into account. In this paper, we discuss real-time implementation strategies for emerging compressive light field displays. We consider displays composed of multiple stacked layers of light-attenuating or polarization-rotating layers, such as LCDs. The involved image generation requires iterative tomographic image synthesis. We demonstrate that, for the case of light field display, computed tomographic light field synthesis maps well to operations included in the standard graphics pipeline, facilitating efficient GPU-based implementations with real-time framerates.
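    The tomographic synthesis for a stacked-layer attenuation display can be sketched in the log-transmittance domain, where the layers combine additively across view directions. The toy gradient-descent solver below stands in for the GPU pipeline the paper describes; the 1D layers, periodic boundaries, and iteration constants are all my assumptions.

```python
import numpy as np

def synthesize_layers(L, n_iter=500, lr=0.1):
    """Fit two 1D layers a (front) and b (rear) so that, in the
    log-transmittance domain, a[x] + b[(x + u) % n_x] matches the target
    light field L[u, x] for each view direction u."""
    n_u, n_x = L.shape
    shifts = np.arange(n_u) - n_u // 2
    a, b = np.zeros(n_x), np.zeros(n_x)
    for _ in range(n_iter):
        for ui, u in enumerate(shifts):
            idx = (np.arange(n_x) + u) % n_x
            err = a + b[idx] - L[ui]
            a -= lr * err
            np.add.at(b, idx, -lr * err)   # scatter update to rear layer
    return a, b

# Target light field that is exactly representable by two layers:
n_x = 8
a_true = np.linspace(0.0, 1.0, n_x)
b_true = np.sin(np.linspace(0.0, np.pi, n_x))
shifts = np.array([-1, 0, 1])
L = np.stack([a_true + np.roll(b_true, -u) for u in shifts])

a, b = synthesize_layers(L)
recon = np.stack([a + np.roll(b, -u) for u in shifts])
print(np.max(np.abs(recon - L)) < 1e-3)
```

    The per-view error/update structure is what maps so well onto the graphics pipeline: each view's forward projection and correction is an independent, data-parallel pass.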

  15. Advances in Real-Time Systems

    CERN Document Server

    Chakraborty, Samarjit

    2012-01-01

    This volume contains the lectures given in honor of Georg Farber as a tribute to his contributions in the area of real-time and embedded systems. Chapters by many leading scientists cover a wide range of aspects, like robot or automotive vision systems or medical aspects.

  16. Impact of orthodontic appliances on the quality of craniofacial anatomical magnetic resonance imaging and real-time speech imaging.

    Science.gov (United States)

    Wylezinska, Marzena; Pinkstone, Marie; Hay, Norman; Scott, Andrew D; Birch, Malcolm J; Miquel, Marc E

    2015-12-01

    The aim of this work was to investigate the effects of commonly used orthodontic appliances on the magnetic resonance (MR) image quality of the craniofacial region, with special interest in the soft palate and velopharyngeal wall using real-time speech imaging sequences, and in anatomical imaging of the temporomandibular joints (TMJ) and pituitary. Common orthodontic appliances were studied on a 1.5 T scanner using standard spin and gradient echo sequences (based on the American Society for Testing and Materials standard test method) and sequences previously applied for high-resolution anatomical imaging and dynamic real-time imaging during speech. Images were evaluated for the presence and size of artefacts. Metallic orthodontic appliances had different effects on image quality. The most extensive individual effects were associated with the presence of a stainless steel archwire, particularly if combined with stainless steel brackets and stainless steel molar bands. With those appliances, the diagnostic quality of MR speech and palate images will most likely be severely degraded, or speech imaging and imaging of the pituitary and TMJ will not be possible. All non-metallic appliances, and those with Ni/Cr reinforcement or Ni/Ti alloys, were of little concern. The results in this study are only valid at 1.5 T and for the sequences and devices used, and cannot necessarily be extrapolated to all sequences and devices. Furthermore, both the geometry and size of some appliances are subject dependent, and consequently the effects on image quality can vary between subjects. Therefore, the results presented in this article should be treated as a guide when assessing the risks of image quality degradation rather than an absolute evaluation of possible artefacts. Appliances manufactured from stainless steel cause extensive artefacts, which may render images non-diagnostic. The presence and type of orthodontic appliances should always be included in the patient

  17. Real time 3D structural and Doppler OCT imaging on graphics processing units

    Science.gov (United States)

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczyńska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr

    2013-03-01

    In this report, the application of graphics processing unit (GPU) programming for real-time 3D Fourier domain Optical Coherence Tomography (FdOCT) imaging, with implementation of Doppler algorithms for visualization of flow in capillary vessels, is presented. Generally, processing FdOCT data on the main processor of the computer (CPU) constitutes the main limitation for real-time imaging. Employing additional algorithms, such as Doppler OCT analysis, makes this processing even more time consuming. Recently developed GPUs, which offer very high computational power, provide a solution to this problem. Taking advantage of them for massively parallel data processing allows for real-time imaging in FdOCT. The presented software for structural and Doppler OCT allows for the whole processing and visualization of 2D data consisting of 2000 A-scans generated from 2048-pixel spectra at a frame rate of about 120 fps. The 3D imaging in the same mode, with volume data built of 220 × 100 A-scans, is performed at a rate of about 8 frames per second. In this paper the software architecture, organization of the threads and optimizations applied are shown. For illustration, screen shots recorded during real-time imaging of a phantom (homogeneous water solution of Intralipid in a glass capillary) and the human eye in vivo are presented.
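    The per-A-scan math that the authors parallelize on the GPU is standard Fourier-domain OCT: the Fourier transform of each (DC-removed) spectrum gives a structural A-scan, and the phase difference between successive complex A-scans gives the Doppler signal. A minimal NumPy sketch of that math (function names and the DC-removal step are illustrative, not the authors' CUDA code):

```python
import numpy as np

def structural_oct(spectra):
    """Structural FdOCT: each A-scan is the magnitude of the Fourier
    transform of a DC-removed spectrum (rows = A-scans, cols = pixels)."""
    spectra = spectra - spectra.mean(axis=1, keepdims=True)  # remove DC term
    return np.abs(np.fft.fft(spectra, axis=1))

def doppler_oct(spectra):
    """Doppler FdOCT: axial flow appears as the phase difference between
    complex A-scans taken at (nearly) the same lateral position."""
    spectra = spectra - spectra.mean(axis=1, keepdims=True)
    ascans = np.fft.fft(spectra, axis=1)
    return np.angle(ascans[1:] * np.conj(ascans[:-1]))
```

    Every row transform is independent of the others, which is what makes the problem map so well onto a GPU and yields the ~120 fps reported for 2000 A-scans of 2048-pixel spectra.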

  18. Retrospective Reconstruction of High Temporal Resolution Cine Images from Real-Time MRI using Iterative Motion Correction

    DEFF Research Database (Denmark)

    Hansen, Michael Schacht; Sørensen, Thomas Sangild; Arai, Andrew

    2012-01-01

    acquisitions in 10 (N = 10) subjects. Acceptable image quality was obtained in all motion-corrected reconstructions, and the resulting mean image quality score was (a) Cartesian real-time: 2.48, (b) Golden Angle real-time: 1.90 (1.00–2.50), (c) Cartesian motion correction: 3.92, (d) Radial motion correction: 4...... and motion correction based on nonrigid registration and can be applied to arbitrary k-space trajectories. The method is demonstrated with real-time Cartesian imaging and Golden Angle radial acquisitions, and the motion-corrected acquisitions are compared with raw real-time images and breath-hold cine...

  19. MO-FG-BRD-00: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow.

  20. MO-FG-BRD-00: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management

    International Nuclear Information System (INIS)

    2015-01-01

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow

  1. Real-time radiography

    International Nuclear Information System (INIS)

    Bossi, R.H.; Oien, C.T.

    1981-01-01

    Real-time radiography is used for imaging both dynamic events and static objects. Fluorescent screens play an important role in converting radiation to light, which is then observed directly or intensified and detected. The radiographic parameters for real-time radiography are similar to those of conventional film radiography, with special emphasis on statistics and magnification. Direct-viewing fluoroscopy uses the human eye as a detector of fluorescent screen light or the light from an intensifier. Remote-viewing systems replace the human observer with a television camera. Remote-viewing systems have many advantages over direct viewing, such as safety, image enhancement, and the capability to produce permanent records. This report reviews real-time imaging system parameters and components

  2. Forming Human-Robot Teams Across Time and Space

    Science.gov (United States)

    Hambuchen, Kimberly; Burridge, Robert R.; Ambrose, Robert O.; Bluethmann, William J.; Diftler, Myron A.; Radford, Nicolaus A.

    2012-01-01

    NASA pushes telerobotics to distances that span the Solar System. At this scale, time of flight for communication is limited by the speed of light, inducing long time delays, narrow bandwidth and the real risk of data disruption. NASA also supports missions where humans are in direct contact with robots during extravehicular activity (EVA), giving a range of zero to hundreds of millions of miles for NASA's definition of "tele". Another temporal variable is mission phasing. NASA missions are now being considered that combine early robotic phases with later human arrival, then transition back to robot-only operations. Robots can preposition, scout, sample or construct in advance of human teammates, transition to assistant roles when the crew are present, and then become care-takers when the crew returns to Earth. This paper will describe advances in robot safety and command interaction approaches developed to form effective human-robot teams, overcoming challenges of time delay and adapting as the team transitions from robot-only to robots and crew. The work is predicated on the idea that when robots are alone in space, they are still part of a human-robot team, acting as surrogates for people back on Earth or in other distant locations. Software, interaction modes and control methods will be described that can operate robots in all these conditions. A novel control mode for operating robots across time delay was developed using a graphical simulation on the human side of the communication, allowing a remote supervisor to drive and command a robot in simulation with no time delay, then monitor progress of the actual robot as data returns from the round trip to and from the robot. Since the robot must be responsible for safety out to at least the round-trip time period, the authors developed a multi-layer safety system able to detect and protect the robot and people in its workspace. This safety system also runs when humans are in direct contact with the robot

  3. Real-time 2-D Phased Array Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon; Hansen, Kristoffer Lindskov; Fogh, Nikolaj

    2018-01-01

    Echocardiography examination of the blood flow is currently either restricted to 1-D techniques in real-time or experimental off-line 2-D methods. This paper presents an implementation of transverse oscillation for real-time 2-D vector flow imaging (VFI) on a commercial BK Ultrasound scanner....... A large field-of-view (FOV) sequence for studying flow dynamics at 11 frames per second (fps) and a sequence for studying peak systolic velocities (PSV) with a narrow FOV at 36 fps were validated. The VFI sequences were validated in a flow-rig with continuous laminar parabolic flow and in a pulsating flow...

  4. Robotically-adjustable microstereotactic frames for image-guided neurosurgery

    Science.gov (United States)

    Kratchman, Louis B.; Fitzpatrick, J. Michael

    2013-03-01

    Stereotactic frames are a standard tool for neurosurgical targeting, but are uncomfortable for patients and obstruct the surgical field. Microstereotactic frames are more comfortable for patients, provide better access to the surgical site, and have grown in popularity as an alternative to traditional stereotactic devices. However, clinically available microstereotactic frames require either lengthy manufacturing delays or expensive image guidance systems. We introduce a robotically-adjusted, disposable microstereotactic frame for deep brain stimulation surgery that eliminates the drawbacks of existing microstereotactic frames. Our frame can be automatically adjusted in the operating room using a preoperative plan in less than five minutes. A validation study on phantoms shows that our approach provides a target positioning error of 0.14 mm, which is better than the accuracy required for deep brain stimulation surgery.

  5. Real-Time Implementation of Medical Ultrasound Strain Imaging System

    International Nuclear Information System (INIS)

    Jeong, Mok Kun; Kwon, Sung Jae; Bae, Moo Ho

    2008-01-01

    Strain imaging in a medical ultrasound imaging system can differentiate a cancer or tumor in a lesion that is stiffer than the surrounding tissue. In this paper, a strain imaging technique using quasistatic compression is implemented that estimates the displacement between pre- and postcompression ultrasound echoes and obtains strain by differentiating it in the spatial direction. Displacements are computed from the phase difference of complex baseband signals obtained using their autocorrelation, and errors associated with converting the phase difference into time or distance are compensated for by taking into account the center frequency variation. Also, to reduce the effect of the operator's hand motion, the displacements of all scanlines are normalized, with the result that satisfactory strain image quality has been obtained. These techniques have been incorporated into a medical ultrasound strain imaging system that operates in real time.
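    The estimator described above can be pictured as follows: the phase of the windowed correlation between pre- and post-compression IQ lines gives the axial displacement, and its spatial derivative gives the strain. A hedged NumPy sketch under simplified assumptions (names are illustrative; the paper's center-frequency compensation and scanline normalization are omitted):

```python
import numpy as np

def displacement_and_strain(pre, post, fc, fs, c=1540.0, window=64):
    """Quasistatic strain sketch: displacement from the phase of the
    windowed correlation between pre-/post-compression IQ lines; strain
    from its spatial derivative. fc: center frequency [Hz], fs: axial
    sampling rate [Hz], c: speed of sound [m/s]."""
    corr = post * np.conj(pre)               # per-sample correlation terms
    n = len(corr) // window
    phase = np.array([np.angle(corr[i * window:(i + 1) * window].sum())
                      for i in range(n)])    # autocorrelation phase per window
    disp = c * phase / (4 * np.pi * fc)      # phase -> axial displacement [m]
    dz = window * c / (2 * fs)               # window length in depth [m]
    strain = np.diff(disp) / dz              # spatial derivative
    return disp, strain
```

    A uniform strain corresponds to a linear phase ramp between the two echo lines, which this estimator recovers as a constant strain value per depth window.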

  6. Image navigation as a means to expand the boundaries of fluorescence-guided surgery.

    Science.gov (United States)

    Brouwer, Oscar R; Buckle, Tessa; Bunschoten, Anton; Kuil, Joeri; Vahrmeijer, Alexander L; Wendler, Thomas; Valdés-Olmos, Renato A; van der Poel, Henk G; van Leeuwen, Fijs W B

    2012-05-21

    Hybrid tracers that are both radioactive and fluorescent help extend the use of fluorescence-guided surgery to deeper structures. Such hybrid tracers facilitate preoperative surgical planning using (3D) scintigraphic images and enable synchronous intraoperative radio- and fluorescence guidance. Nevertheless, we previously found that improved orientation during laparoscopic surgery remains desirable. Here we illustrate how intraoperative navigation based on optical tracking of a fluorescence endoscope may help further improve the accuracy of hybrid surgical guidance. After feeding SPECT/CT images with an optical fiducial as a reference target to the navigation system, optical tracking could be used to position the tip of the fluorescence endoscope relative to the preoperative 3D imaging data. This hybrid navigation approach allowed us to accurately identify marker seeds in a phantom setup. The multispectral nature of the fluorescence endoscope enabled stepwise visualization of the two clinically approved fluorescent dyes, fluorescein and indocyanine green. In addition, the approach was used to navigate toward the prostate in a patient undergoing robot-assisted prostatectomy. Navigation of the tracked fluorescence endoscope toward the target identified on SPECT/CT resulted in real-time gradual visualization of the fluorescent signal in the prostate, thus providing an intraoperative confirmation of the navigation accuracy.

  7. Fiber Bragg gratings-based sensing for real-time needle tracking during MR-guided brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

    Borot de Battisti, Maxence, E-mail: M.E.P.Borot@umcutrecht.nl; Maenhout, Metha; Lagendijk, Jan J. W.; Vulpen, Marco van; Moerland, Marinus A. [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, Utrecht 3584 CX (Netherlands); Denis de Senneville, Baudouin [Imaging Division, University Medical Center Utrecht, Heidelberglaan 100, Utrecht 3584 CX, The Netherlands and IMB, UMR 5251 CNRS/University of Bordeaux, Talence 33400 (France); Hautvast, Gilion; Binnekamp, Dirk [Philips Group Innovation Biomedical Systems, Eindhoven 5656 AE (Netherlands)

    2016-10-15

    Purpose: The development of MR-guided high dose rate (HDR) brachytherapy is under investigation due to the excellent tumor and organ-at-risk visualization of MRI. However, MR-based localization of needles (including catheters or tubes) inherently has a low update rate, and the required image interpretation can be hampered by signal voids arising from blood vessels or calcifications, limiting the precision of needle guidance and reconstruction. In this paper, a new needle tracking prototype is investigated using fiber Bragg gratings (FBG)-based sensing: this prototype involves an MR-compatible stylet composed of three optic fibers, each with nine sets of embedded FBG sensors. This stylet can be inserted into brachytherapy needles and allows a fast measurement of the needle deflection. This study aims to assess the potential of FBG-based sensing for real-time needle (including catheter or tube) tracking during MR-guided intervention. Methods: First, the MR compatibility of FBG-based sensing and its accuracy were evaluated. Different known needle deflections were measured using FBG-based sensing during simultaneous MR imaging. Then, a needle tracking procedure using FBG-based sensing was proposed. This procedure involved an MR-based calibration of the FBG-based system performed prior to the interventional procedure. The needle tracking system was assessed in an experiment with a moving phantom during MR imaging. The FBG-based system was quantified by comparing the gold-standard shapes, the shapes manually segmented on MRI and the FBG-based measurements. Results: The evaluation of the MR compatibility of FBG-based sensing and its accuracy shows that the needle deflection could be measured with an accuracy of 0.27 mm on average. Besides, the FBG-based measurements were comparable to the uncertainty of MR-based measurements, estimated at half the voxel size in the MR image. Finally, the mean(standard deviation) Euclidean distance between MR- and FBG-based needle position

  8. Unsynchronized scanning with a low-cost laser range finder for real-time range imaging

    Science.gov (United States)

    Hatipoglu, Isa; Nakhmani, Arie

    2017-06-01

    Range imaging plays an essential role in many fields: 3D modeling, robotics, heritage, agriculture, forestry, reverse engineering. One of the most popular range-measuring technologies is the laser scanner, due to its several advantages: long range, high precision, real-time measurement capabilities, and no dependence on lighting conditions. However, laser scanners are very costly, and their high cost prevents widespread use in applications. Thanks to the latest developments in technology, low-cost, reliable, fast, and lightweight 1D laser range finders (LRFs) are now available. A low-cost 1D LRF with a scanning mechanism, providing the ability to steer the laser beam for additional dimensions, makes it possible to capture a depth map. In this work, we present unsynchronized scanning with a low-cost LRF to decrease the scanning period and reduce the vibrations caused by stop-scan in synchronized scanning. Moreover, we developed an algorithm for alignment of the unsynchronized raw data and proposed a range image post-processing framework. The proposed technique enables a range imaging system for a fraction of the price of its counterparts. The results prove that the proposed method can fulfill the need for low-cost laser scanning for range imaging of static environments; the most significant limitation of the method is the scanning period, which is about 2 minutes for 55,000 range points (a resolution of 250×220). In contrast, scanning the same image takes around 4 minutes with synchronized scanning. Once faster, longer-range, and narrower-beam LRFs are available, the methods proposed in this work can produce better results.
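    The core of aligning unsynchronized raw data can be sketched as interpolating the independently timestamped mirror angle at each range-sample time and then projecting to Cartesian coordinates. This is a hedged illustration of the idea, not the authors' algorithm, which the abstract does not detail:

```python
import numpy as np

def align_unsynchronized(t_range, ranges, t_angle, angles):
    """Pair each range sample with the scan angle logged on a separate,
    unsynchronized clock by linear interpolation, then convert the
    (range, angle) pairs to 2-D Cartesian points."""
    theta = np.interp(t_range, t_angle, angles)  # angle at each range timestamp
    return np.column_stack((ranges * np.cos(theta),
                            ranges * np.sin(theta)))
```

    Because the mirror never has to stop and wait for a range reading, the sweep is continuous, which is what removes the stop-scan vibration and roughly halves the scanning period reported above.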

  9. TH-EF-BRA-05: A Method of Near Real-Time 4D MRI Using Volumetric Dynamic Keyhole (VDK) in the Presence of Respiratory Motion for MR-Guided Radiotherapy

    International Nuclear Information System (INIS)

    Lewis, B; Kim, S; Kim, T

    2016-01-01

    Purpose: To develop a novel method that enables 4D MR imaging in near real-time for continuous monitoring of tumor motion in MR-guided radiotherapy. Methods: This method is based on the idea of expanding dynamic keyhole acquisition to full volumetric imaging. In the VDK approach introduced in this study, a library of peripheral volumetric k-space data is generated in advance for a given number of phases (5 and 10 in this study). For 4D MRI at any given time, only the volumetric central k-space data are acquired in real-time and combined with the pre-acquired peripheral volumetric k-space data in the library corresponding to the respiratory phase (or amplitude). The combined k-space data are Fourier-transformed to MR images. For the simulation study, an MRXCAT program was used to generate synthetic MR images of the thorax with the desired respiratory motion, contrast levels, and spatial and temporal resolution. 20 phases of volumetric MR images, with 200 ms temporal resolution in a 4 s respiratory period, were generated using a balanced steady-state free precession MR pulse sequence. The total acquisition time was 21.5 s/phase with a voxel size of 3×3×5 mm³ and an image matrix of 128×128×56. Image similarity was evaluated with difference maps between the reference and reconstructed images. The VDK, conventional keyhole, and zero-filling methods were compared in this simulation study. Results: Using 80% of the ky data and 70% of the kz data from the library resulted in a 12.20% average intensity difference from the reference, and 21.60% and 28.45% differences in threshold pixel difference for conventional keyhole and zero filling, respectively. The imaging time will be reduced from 21.5 s to 1.3 s per volume using the VDK method. Conclusion: Near real-time 4D MR imaging can be achieved using the volumetric dynamic keyhole method, opening the possibility of utilizing 4D MRI during MR-guided radiotherapy.
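    The reconstruction step described above amounts to replacing the centre of a phase-matched library k-space volume with freshly acquired central data and inverse-transforming. A hedged NumPy sketch, assuming an fftshift-centred k-space; the function name, keep-fractions, and index arithmetic are illustrative, not the authors' implementation:

```python
import numpy as np

def vdk_reconstruct(rt_k, lib_k, ky_keep=0.2, kz_keep=0.3):
    """Volumetric dynamic keyhole sketch: fresh central ky/kz data from
    the real-time acquisition, periphery from the library volume matched
    to the current respiratory phase. ky_keep/kz_keep are the fractions
    acquired in real time (the abstract's best case takes 80% of ky and
    70% of kz from the library)."""
    _, ny, nz = rt_k.shape
    y0 = int(ny * (1 - ky_keep) / 2); y1 = ny - y0
    z0 = int(nz * (1 - kz_keep) / 2); z1 = nz - z0
    k = lib_k.copy()
    k[:, y0:y1, z0:z1] = rt_k[:, y0:y1, z0:z1]   # keep the fresh centre
    return np.abs(np.fft.ifftn(np.fft.ifftshift(k)))
```

    Because only the small central block must be acquired per time point, the per-volume acquisition shrinks roughly in proportion to the kept fraction, which is the source of the 21.5 s to 1.3 s reduction quoted above.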

  10. A MR-conditional High-torque Pneumatic Stepper Motor for MRI-guided and Robot-assisted Intervention

    Science.gov (United States)

    Chen, Yue; Kwok, Ka-Wai; Tse, Zion Tsz Ho

    2015-01-01

    Magnetic resonance imaging allows for visualizing detailed pathological and morphological changes of soft tissue. This has attracted increasing attention to MRI-guided intervention; hence, MR-conditional actuation has been widely investigated for the development of image-guided and robot-assisted surgical devices used under MRI. This paper presents a simple design of an MR-conditional stepper motor which can provide precise and high-torque actuation without adversely affecting MR image quality. The stepper motor consists of two MR-conditional pneumatic cylinders and the corresponding supporting structures. Alternating the pressurized air drives the motor to rotate in steps of 3.6°, with the motor coupled to a planetary gearbox. Experimental studies were conducted to validate its dynamic performance. A maximum output torque of 800 mNm can be achieved. The motor accuracy, varied independently by two factors (operating speed and step size), was also investigated. The motor was tested within a Siemens 3T MRI scanner. The image artifact and the signal-to-noise ratio (SNR) were evaluated in order to study its MRI compliancy. The results show that the presented pneumatic stepper motor generated a 2.35% SNR reduction in MR images and no observable artifact beyond the motor body itself. The proposed motor test also provides a standard to evaluate motor capability for later incorporation into motorized devices used in robot-assisted surgery under MRI. PMID:24957635

  11. Haptic feedback in OP:Sense - augmented reality in telemanipulated robotic surgery.

    Science.gov (United States)

    Beyl, T; Nicolai, P; Mönnich, H; Raczkowsky, J; Wörn, H

    2012-01-01

    In current research, haptic feedback in robot-assisted interventions plays an important role. However, most approaches to haptic feedback only regard the mapping of the current forces at the surgical instrument to the haptic input devices, whereas surgeons demand a combination of medical imaging and telemanipulated robotic setups. In this paper we describe how this feature is integrated in our robotic research platform OP:Sense. The proposed method allows the automatic transfer of segmented imaging data to the haptic renderer and therefore allows enriching the haptic feedback with virtual fixtures based on imaging data. Anatomical structures are extracted from pre-operatively generated medical images, or virtual walls are defined by the surgeon inside the imaging data. Combining real forces with virtual fixtures can guide the surgeon to the regions of interest and helps to prevent the risk of damage to critical structures inside the patient. We believe that the combination of medical imaging and telemanipulation is a crucial step for the next generation of MIRS systems.

  12. Towards real-time diffuse optical tomography for imaging brain functions cooperated with Kalman estimator

    Science.gov (United States)

    Wang, Bingyuan; Zhang, Yao; Liu, Dongyuan; Ding, Xuemei; Dan, Mai; Pan, Tiantian; Wang, Yihan; Li, Jiao; Zhou, Zhongxing; Zhang, Limin; Zhao, Huijuan; Gao, Feng

    2018-02-01

    Functional near-infrared spectroscopy (fNIRS) is a non-invasive neuroimaging method to monitor cerebral hemodynamics through the optical changes measured at the scalp surface. It has played an increasingly important role in the psychology and medical imaging communities. Real-time imaging of brain function using NIRS makes it possible to explore sophisticated human brain functions unexplored before. The Kalman estimator has frequently been used in combination with modified Beer-Lambert law (MBLL) based optical topography (OT) for real-time brain function imaging. However, the spatial resolution of OT is low, hampering its application in exploring complicated brain functions. In this paper, we develop a real-time imaging method combining diffuse optical tomography (DOT) and a Kalman estimator, much improving the spatial resolution. Instead of presenting only one spatially distributed image indicating the changes of the absorption coefficients at each time point during the recording process, one real-time updated image using the Kalman estimator is provided, in which each voxel represents the amplitude of the hemodynamic response function (HRF) associated with that voxel. We evaluate this method using simulation experiments, demonstrating that it can obtain images with more reliable spatial resolution. Furthermore, a statistical analysis is also conducted to help decide whether a voxel in the field of view is activated or not.
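    The per-voxel estimator can be sketched as a scalar Kalman filter whose state is the HRF amplitude at that voxel, driven by a known HRF regressor and a random-walk state model. The model below is a generic illustration under stated assumptions (state/measurement model, noise variances q and r), not the authors' exact formulation:

```python
import numpy as np

def kalman_hrf_amplitude(y, h, q=1e-4, r=1e-2):
    """Scalar Kalman filter per voxel: state beta (HRF amplitude) follows
    a random walk with variance q; measurement y_t = h_t * beta + noise
    with variance r. y: (T, n_voxels) measurements, h: (T,) HRF regressor.
    Returns the final amplitude estimate per voxel."""
    beta = np.zeros(y.shape[1])
    p = np.ones(y.shape[1])
    for t in range(len(h)):
        p = p + q                               # predict (random walk)
        g = p * h[t] / (h[t] ** 2 * p + r)      # Kalman gain
        beta = beta + g * (y[t] - h[t] * beta)  # measurement update
        p = (1.0 - g * h[t]) * p
    return beta
```

    Because the update is recursive, the amplitude image can be refreshed after every new measurement frame, which is what makes the approach suitable for real-time use.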

  13. Noise reduction in real time x-ray images

    International Nuclear Information System (INIS)

    Tsuda, Motohisa; Kimura, Yutaro

    1986-01-01

    The signal-to-noise ratio of real-time digital X-ray imaging systems consisting of an X-ray image intensifier-television chain was investigated, concentrating on the effect of the quantum nature of X-rays. Along with conventional signal accumulation, logarithmic conversion and subtraction, a new technique called the peak-hold method is introduced. Theoretical and simulation studies were made with practical parameters, and theory and simulation showed good agreement. Signal accumulation is most effective for improving the signal-to-noise ratio; the peak-hold method comes next. The peak-hold method, however, offers a new image-display mode. Moreover, this method is superior to signal accumulation under specific conditions. (author)
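    The SNR argument here is the usual quantum-statistics one: for Poisson-limited frames, averaging N frames leaves the mean unchanged while cutting the noise standard deviation by √N. A small NumPy illustration of both display modes (the report's actual system parameters are not reproduced):

```python
import numpy as np

def accumulate(frames):
    """Signal accumulation: average N frames; for Poisson-limited input
    the noise standard deviation drops by sqrt(N)."""
    return frames.mean(axis=0)

def peak_hold(frames):
    """Peak hold: retain the per-pixel maximum over the sequence, the
    alternative display mode discussed in the report."""
    return frames.max(axis=0)
```

    With a mean of 100 counts per pixel, a single frame has SNR 100/√100 = 10; averaging 25 such frames raises it to 50.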

  14. MO-DE-210-03: Ultrasound imaging is an attractive method for image guided radiation treatment (IGRT), by itself or to complement other imaging modalities

    International Nuclear Information System (INIS)

    Ding, K.

    2015-01-01

    Ultrasound imaging is an attractive method for image guided radiation treatment (IGRT), by itself or to complement other imaging modalities. It is inexpensive, portable and provides good soft tissue contrast. For challenging soft tissue targets such as pancreatic cancer, ultrasound imaging can be used in combination with pre-treatment MRI and/or CT to transfer important anatomical features for target localization at time of treatment. The non-invasive and non-ionizing nature of ultrasound imaging is particularly powerful for intra-fraction localization and monitoring. Recognizing these advantages, efforts are being made to incorporate novel robotic approaches to position and manipulate the ultrasound probe during irradiation. These recent enabling developments hold potential to bring ultrasound imaging to a new level of IGRT applications. However, many challenges, not limited to image registration, robotic deployment, probe interference and image acquisition rate, need to be addressed to realize the full potential of IGRT with ultrasound imaging. Learning Objectives: Understand the benefits and limitations of using ultrasound to augment MRI and/or CT for motion monitoring during radiation therapy delivery. Understand passive and active robotic approaches to implementing ultrasound imaging for intra-fraction monitoring. Understand issues of probe interference with radiotherapy treatment. Understand the critical clinical workflow for effective and reproducible IGRT using ultrasound guidance. The work of X.L. is supported in part by Elekta; J.W. and K.D. are supported in part by NIH grant R01 CA161613 and by Elekta; D.H. is supported in part by NIH grant R41 CA174089

  15. MO-DE-210-03: Ultrasound imaging is an attractive method for image guided radiation treatment (IGRT), by itself or to complement other imaging modalities

    Energy Technology Data Exchange (ETDEWEB)

    Ding, K. [Johns Hopkins University: Development of Intra-Fraction Soft Tissue Monitoring with Ultrasound Imaging (United States)

    2015-06-15

    Ultrasound imaging is an attractive method for image guided radiation treatment (IGRT), by itself or to complement other imaging modalities. It is inexpensive, portable and provides good soft tissue contrast. For challenging soft tissue targets such as pancreatic cancer, ultrasound imaging can be used in combination with pre-treatment MRI and/or CT to transfer important anatomical features for target localization at time of treatment. The non-invasive and non-ionizing nature of ultrasound imaging is particularly powerful for intra-fraction localization and monitoring. Recognizing these advantages, efforts are being made to incorporate novel robotic approaches to position and manipulate the ultrasound probe during irradiation. These recent enabling developments hold potential to bring ultrasound imaging to a new level of IGRT applications. However, many challenges, not limited to image registration, robotic deployment, probe interference and image acquisition rate, need to be addressed to realize the full potential of IGRT with ultrasound imaging. Learning Objectives: Understand the benefits and limitations of using ultrasound to augment MRI and/or CT for motion monitoring during radiation therapy delivery. Understand passive and active robotic approaches to implementing ultrasound imaging for intra-fraction monitoring. Understand issues of probe interference with radiotherapy treatment. Understand the critical clinical workflow for effective and reproducible IGRT using ultrasound guidance. The work of X.L. is supported in part by Elekta; J.W. and K.D. are supported in part by NIH grant R01 CA161613 and by Elekta; D.H. is supported in part by NIH grant R41 CA174089.

  16. Shoulder-Mounted Robot for MRI-guided arthrography: Accuracy and mounting study.

    Science.gov (United States)

    Monfaredi, R; Wilson, E; Sze, R; Sharma, K; Azizi, B; Iordachita, I; Cleary, K

    2015-08-01

    A new version of our compact and lightweight patient-mounted MRI-compatible 4 degree-of-freedom (DOF) robot for MRI-guided arthrography procedures is introduced. This robot could convert the traditional two-stage arthrography procedure (fluoroscopy-guided needle insertion followed by a diagnostic MRI scan) to a one-stage procedure, all in the MRI suite. The results of a recent accuracy study are reported. A new mounting technique is proposed and the mounting stability is investigated using optical and electromagnetic tracking on an anthropomorphic phantom. Five volunteer subjects including 2 radiologists were asked to conduct needle insertion in 4 different random positions and orientations within the robot's workspace and the displacement of the base of the robot was investigated during robot motion and needle insertion. Experimental results show that the proposed mounting method is stable and promising for clinical application.

  17. Wetland Plant Guide for Assessing Habitat Impacts of Real-Time Salinity Management

    OpenAIRE

    Quinn, Nigel W.T.; Feldmann, Sara A.

    2004-01-01

    This wetland plant guide was developed to aid moist soil plant identification and to assist in the mapping of waterfowl and shorebird habitat in the Grassland Water District and surrounding wetland areas. The motivation for this habitat mapping project was a concern that real-time salinity management of wetland drainage might have long-term consequences for wildfowl habitat health -- changes in wetland drawdown schedules might, over the long term, lead to increased soil salinity and othe...

  18. MR defecography at 1.5 Tesla with radial real-time imaging at a reduced FOV

    International Nuclear Information System (INIS)

    Tacke, J.; Nolte-Ernsting, C.; Glowinski, A.; Adam, G.; Guenther, R.W.

    1999-01-01

    Purpose: To evaluate a new technique for MR defecography with real-time imaging using radial k-space profiles. Materials and Methods: A catheter-mounted condom was inserted into the rectum of 16 patients and filled in situ with a mixture of Nestargel™ and gadolinium. After multiplanar imaging of the pelvis with high-resolution T2-weighted turbo spin echo sequences, defecation was imaged with a gradient echo sequence with radial k-space filling using a reduced field of view (rFOV) in real time. Documentation was performed on an S-VHS recorder. Results: At a constant background signal, radial k-space filling yields a real-time impression. Interactive software allowed the operator to modify the slice thickness, slice plane, flip angle and slice angulation during scanning, resulting in optimum image quality of the defecation. Conclusions: This new imaging technique allows real-time MR defecography in a high-field scanner and provides all anatomical and functional information on defecation. (orig.)

  19. A Bayesian approach to real-time 3D tumor localization via monoscopic x-ray imaging during treatment delivery

    International Nuclear Information System (INIS)

    Li, Ruijiang; Fahimian, Benjamin P.; Xing, Lei

    2011-01-01

    statistically significant. Conclusions: The proposed algorithm eliminates the need for any population based model parameters in monoscopic image guided radiotherapy and allows accurate and real-time 3D tumor localization on current standard LINACs with a single x-ray imager.

  20. Effective image differencing with convolutional neural networks for real-time transient hunting

    Science.gov (United States)

    Sedaghat, Nima; Mahabal, Ashish

    2018-06-01

    Large sky surveys are increasingly relying on image subtraction pipelines for real-time (and archival) transient detection. In this process one has to contend with a varying point-spread function (PSF) and small brightness variations in many sources, as well as artefacts resulting from saturated stars and, in general, matching errors. Very often the differencing is done against a reference image that is deeper than the individual images, and the attendant difference in noise characteristics can also lead to artefacts. We present here a deep-learning approach to transient detection that encapsulates all the steps of a traditional image-subtraction pipeline - image registration, background subtraction, noise removal, PSF matching and subtraction - in a single real-time convolutional network. Once trained, the method works lightning-fast and, given that it performs multiple steps in one go, the time saved and false positives eliminated for multi-CCD surveys like the Zwicky Transient Facility and the Large Synoptic Survey Telescope will be immense, as millions of subtractions will be needed per night.
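
    As a point of comparison for such a network, the classical subtraction it replaces can be sketched in a few lines of numpy. This is a minimal illustration on synthetic data, with no registration or PSF matching and with illustrative names and values; it is not the authors' pipeline:

```python
import numpy as np

def difference_image(science, reference):
    """Naive transient-detection baseline: background-subtract both
    frames, least-squares flux-match, then subtract pixel-wise."""
    sci = science - np.median(science)
    ref = reference - np.median(reference)
    scale = np.sum(sci * ref) / np.sum(ref * ref)  # flux-matching factor
    return sci - scale * ref

# Synthetic frames: same sky plus noise, with one injected transient.
rng = np.random.default_rng(1)
reference = rng.normal(100.0, 1.0, (64, 64))
science = reference + rng.normal(0.0, 0.5, (64, 64))
science[40, 17] += 50.0                 # the transient
diff = difference_image(science, reference)
peak = np.unravel_index(np.argmax(diff), diff.shape)
```

    In real pipelines the scale factor is replaced by a spatially varying PSF-matching kernel, which is exactly the step that produces the artefacts the abstract mentions.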

  1. Using the visitor experiences for mapping the possibilities of implementing a robotic guide in outdoor sites

    NARCIS (Netherlands)

    Karreman, Daphne Eleonora; van Dijk, Elisabeth M.A.G.; Evers, Vanessa

    2012-01-01

    FROG (Fun Robotic Outdoor Guide) is a project that aims to develop an outdoor robotic guide that enriches the visitor experience in touristic sites. This paper is a first step toward a guide robot and presents a case study on how to analyze the visitors’ experience and examine opportunities for a

  2. Future of medical physics: Real-time MRI-guided proton therapy.

    Science.gov (United States)

    Oborn, Bradley M; Dowdell, Stephen; Metcalfe, Peter E; Crozier, Stuart; Mohan, Radhe; Keall, Paul J

    2017-08-01

    With the recent clinical implementation of real-time MRI-guided x-ray beam therapy (MRXT), attention is turning to the concept of combining real-time MRI guidance with proton beam therapy: MRI-guided proton beam therapy (MRPT). MRI guidance for proton beam therapy is expected to offer a compelling improvement to the current treatment workflow, arguably more so than for x-ray beam therapy. This argument arises from the fact that proton therapy toxicity outcomes are similar to those of the most advanced IMRT treatments, despite the proton being a fundamentally superior particle for cancer treatment. In this Future of Medical Physics article, we describe the various software and hardware aspects of potential MRPT systems and the corresponding treatment workflow. Significant software developments, particularly focused on adaptive MRI-based planning, will be required. The magnetic interaction between the MRI and the proton beamline components will be a key area of focus: for example, the modeling and potential redesign of a magnetically compatible gantry to allow for beam delivery from multiple angles towards a patient located within the bore of an MRI scanner. Further to this, the accuracy of pencil beam scanning and beam monitoring in the presence of an MRI fringe field will require modeling, testing, and potentially further development to ensure that highly targeted radiotherapy is maintained. Looking forward, we envisage a clear and accelerated path for hardware development, leveraging lessons learnt from MRXT development. Within a few years, simple prototype systems will likely exist, and in a decade we could envisage coupled systems with integrated gantries. Such milestones will be key in the development of a more efficient, more accurate, and more successful form of proton beam therapy for many common cancer sites. © 2017 American Association of Physicists in Medicine.

  3. Vision-based online vibration estimation of the in-vessel inspection flexible robot with short-time Fourier transformation

    International Nuclear Information System (INIS)

    Wang, Hesheng; Chen, Weidong; Xu, Lifei; He, Tao

    2015-01-01

    Highlights: • A vision-based online vibration estimation method for a flexible arm is proposed. • The vibration signal is obtained by image processing in unknown environments. • Vibration parameters are estimated by short-time Fourier transformation. - Abstract: Vibration arising during the motion of a flexible robot, or induced by external disturbance owing to its structural features and material properties, should be suppressed because it may affect positioning accuracy and image quality. In a Tokamak environment, real-time vibration information is needed to suppress vibration of the robotic arm; however, many sensors are not permitted in the extreme Tokamak environment. This paper proposes a vision-based method for online vibration estimation of a flexible manipulator, which uses the environment image information from the end-effector camera to estimate its vibration. Short-time Fourier transformation with an adaptive window length is used to estimate the vibration parameters of non-stationary vibration signals. Experiments with a one-link flexible manipulator equipped with a camera are carried out to validate the feasibility of this method.

  4. Vision-based online vibration estimation of the in-vessel inspection flexible robot with short-time Fourier transformation

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hesheng [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Chen, Weidong, E-mail: wdchen@sjtu.edu.cn [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China); Xu, Lifei; He, Tao [Key Laboratory of System Control and Information Processing, Ministry of Education of China (China); Department of Automation, Shanghai Jiao Tong University, Shanghai 200240 (China)

    2015-10-15

    Highlights: • A vision-based online vibration estimation method for a flexible arm is proposed. • The vibration signal is obtained by image processing in unknown environments. • Vibration parameters are estimated by short-time Fourier transformation. - Abstract: Vibration arising during the motion of a flexible robot, or induced by external disturbance owing to its structural features and material properties, should be suppressed because it may affect positioning accuracy and image quality. In a Tokamak environment, real-time vibration information is needed to suppress vibration of the robotic arm; however, many sensors are not permitted in the extreme Tokamak environment. This paper proposes a vision-based method for online vibration estimation of a flexible manipulator, which uses the environment image information from the end-effector camera to estimate its vibration. Short-time Fourier transformation with an adaptive window length is used to estimate the vibration parameters of non-stationary vibration signals. Experiments with a one-link flexible manipulator equipped with a camera are carried out to validate the feasibility of this method.
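
    The core estimation step — sliding a window over the measured signal and reading the dominant frequency from each windowed spectrum — can be sketched with numpy alone. This is a minimal illustration on a synthetic decaying vibration; the paper's adaptive window-length selection is omitted and the parameter values are illustrative:

```python
import numpy as np

def stft_peak_frequency(signal, fs, win_len=400, hop=100):
    """Estimate the dominant vibration frequency in each short-time
    window: Hann-window a segment, FFT it, and take the peak bin."""
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    window = np.hanning(win_len)
    estimates = []
    for start in range(0, len(signal) - win_len + 1, hop):
        segment = signal[start:start + win_len] * window
        spectrum = np.abs(np.fft.rfft(segment))
        spectrum[0] = 0.0  # ignore the DC component
        estimates.append(freqs[np.argmax(spectrum)])
    return np.array(estimates)

# Synthetic end-effector displacement: a decaying 5 Hz oscillation.
fs = 200.0
t = np.arange(0, 4.0, 1.0 / fs)
vibration = np.exp(-0.5 * t) * np.sin(2 * np.pi * 5.0 * t)
f_est = stft_peak_frequency(vibration, fs)
```

    In the paper the input signal comes from image processing of the end-effector camera stream rather than from a direct displacement sensor.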

  5. Fast Segmentation of Colour Apple Image under All-Weather Natural Conditions for Vision Recognition of Picking Robots

    Directory of Open Access Journals (Sweden)

    Wei Ji

    2016-02-01

    Full Text Available In order to resolve the poor real-time performance of the normalized cut (Ncut) method in apple vision recognition for picking robots, a fast segmentation method for colour apple images based on the adaptive mean-shift and Ncut methods is proposed in this paper. Firstly, the traditional pixel-based Ncut method is changed into a region-based Ncut method by adaptive mean-shift initial segmentation. In this way, the number of vertices and edges in the graph is dramatically reduced and the computation speed is improved. Secondly, the image is divided into region maps by extracting the R-B colour feature, which not only reduces the number of regions, but also to some extent overcomes the effect of illumination. On this basis, every region map is expressed by a region point, so an undirected graph of the R-B colour grey-level feature is attained. Finally, regarding the undirected graph as the input to Ncut, we construct the weight matrix W from the region points and determine the number of clusters based on the decision-theoretic rough set. Adaptive clustering segmentation can then be implemented by the Ncut algorithm. Experimental results show that the maximum segmentation error is 3% and the average recognition time is less than 0.7 s, which meets the requirements of a real-time picking robot.
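
    The R-B colour feature used above is easy to reproduce: a brightness change adds roughly equally to the R and B channels, so their difference is fairly illumination-tolerant while still separating red fruit from green or blue background. A minimal numpy sketch on a synthetic image (the threshold value is illustrative, not taken from the paper):

```python
import numpy as np

def rb_apple_mask(rgb, threshold=40):
    """Segment reddish regions using the R-B colour difference."""
    r = rgb[..., 0].astype(np.int16)  # widen to avoid uint8 wrap-around
    b = rgb[..., 2].astype(np.int16)
    return (r - b) > threshold

# Synthetic scene: a red "apple" patch on a green/blue background.
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[..., 1] = 120                      # greenish background
img[..., 2] = 90
img[2:6, 2:6] = (200, 40, 30)          # apple-coloured square
mask = rb_apple_mask(img)
```

    The paper applies the feature to mean-shift regions rather than raw pixels, but the per-pixel computation is the same.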

  6. Discrete time motion model for guiding people in urban areas using multiple robots

    OpenAIRE

    Garrell Zulueta, Anais; Sanfeliu Cortés, Alberto; Moreno-Noguer, Francesc

    2009-01-01

    We present a new model for people guidance in urban settings using several mobile robots that overcomes the limitations of existing approaches, which are either tailored to tightly bounded environments or based on unrealistic human behaviors. Although the robots' motion is controlled by means of a standard particle filter formulation, the novelty of our approach resides in how the environment and human and robot motions are modeled. In particular we define a “Discrete-Time-Motion” model, whi...

  7. QuickPALM: 3D real-time photoactivation nanoscopy image processing in ImageJ

    CSIR Research Space (South Africa)

    Henriques, R

    2010-05-01

    Full Text Available QuickPALM in conjunction with the acquisition of control features provides a complete solution for the acquisition, reconstruction and visualization of 3D PALM or STORM images, achieving resolutions of ~40 nm in real time. This software package...

  8. Real-time Image Processing for Microscopy-based Label-free Imaging Flow Cytometry in a Microfluidic Chip.

    Science.gov (United States)

    Heo, Young Jin; Lee, Donghyeon; Kang, Junsu; Lee, Keondo; Chung, Wan Kyun

    2017-09-14

    Imaging flow cytometry (IFC) is an emerging technology that acquires single-cell images at high throughput for analysis of a cell population. The rich information that comes from the high sensitivity and spatial resolution of a single-cell microscopic image is beneficial for single-cell analysis in various biological applications. In this paper, we present a fast image-processing pipeline (R-MOD: Real-time Moving Object Detector) based on deep learning for high-throughput microscopy-based label-free IFC in a microfluidic chip. The R-MOD pipeline acquires all single-cell images of cells in flow, and identifies the acquired images in real time with minimal hardware consisting of a microscope and a high-speed camera. Experiments show that R-MOD is fast and reliably accurate (500 fps and 93.3% mAP), and it is expected to be used as a powerful tool for biomedical and clinical applications.

  9. Real-time image restoration for iris recognition systems.

    Science.gov (United States)

    Kang, Byung Jun; Park, Kang Ryoung

    2007-12-01

    In the field of biometrics, it has been reported that iris recognition techniques show high levels of accuracy because the unique patterns of the human iris, which has very many degrees of freedom, are used. However, because conventional iris cameras have small depth-of-field (DOF) areas, input iris images can easily be blurred, which can lead to lower recognition performance, since iris patterns are transformed by the blurring caused by optical defocusing. To overcome these problems, an autofocusing camera can be used. However, this inevitably increases the cost, size, and complexity of the system. Therefore, we propose a new real-time iris image-restoration method, which can increase the camera's DOF without requiring any additional hardware. This paper presents five novelties as compared to previous works: 1) by excluding eyelash and eyelid regions, it is possible to obtain more accurate focus scores from input iris images; 2) the parameter of the point spread function (PSF) can be estimated in terms of camera optics and measured focus scores, so parameter estimation is more accurate than in previous research; 3) because the PSF parameter can be obtained from a predetermined equation, iris image restoration can be done in real time; 4) by using a constrained least square (CLS) restoration filter that considers noise, performance can be greatly enhanced; and 5) restoration accuracy can also be enhanced by estimating the weight of the noise-regularization term of the CLS filter according to the amount of image blurring. Experimental results showed that iris recognition errors with the proposed restoration method were greatly reduced as compared to results achieved without restoration or with previous iris-restoration methods.
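
    The CLS restoration filter named in point 4 has a compact frequency-domain form: F̂ = H*·G / (|H|² + γ|P|²), where H is the blur transfer function, P the Laplacian smoothness constraint, and γ the regularization weight. A minimal numpy sketch under simple assumptions (known PSF, circular convolution, a fixed illustrative γ rather than the paper's blur-adaptive estimate):

```python
import numpy as np

def cls_restore(blurred, psf, gamma=1e-3):
    """Constrained least squares deconvolution in the frequency domain."""
    shape = blurred.shape
    H = np.fft.fft2(psf, shape)            # blur transfer function
    lap = np.zeros(shape)                  # Laplacian high-pass operator P
    lap[:3, :3] = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]
    P = np.fft.fft2(lap)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F))

# Demo: box-blur a synthetic image, then restore it.
img = np.zeros((32, 32)); img[8:24, 8:24] = 1.0
psf = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, img.shape)))
restored = cls_restore(blurred, psf)
```

    The γ|P|² term keeps the filter from amplifying noise where |H| is small, which is the property the abstract exploits by tuning the weight to the measured amount of blur.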

  10. Real-time image reconstruction and display system for MRI using a high-speed personal computer.

    Science.gov (United States)

    Haishi, T; Kose, K

    1998-09-01

    A real-time NMR image reconstruction and display system was developed using a high-speed personal computer and optimized for the 32-bit multitasking Microsoft Windows 95 operating system. The system was operated at various CPU clock frequencies by changing the motherboard clock frequency and the processor/bus frequency ratio. When the Pentium CPU was used at a 200 MHz clock frequency, the reconstruction time for one 128 x 128 pixel image was 48 ms and that for image display in an enlarged 256 x 256 pixel window was about 8 ms. NMR imaging experiments were performed with three fast imaging sequences (FLASH, multishot EPI, and one-shot EPI) to demonstrate the ability of the real-time system. It was concluded that in most cases a high-speed PC would be the best choice for the image reconstruction and display system for real-time MRI. Copyright 1998 Academic Press.
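
    The reconstruction being timed is, at its core, a single 2-D inverse FFT of the acquired k-space matrix. A minimal numpy round-trip sketch at the paper's 128 x 128 matrix size (numpy stands in for the hand-optimized code of the paper, so timings will differ):

```python
import numpy as np

def reconstruct(kspace):
    """Turn a fully sampled Cartesian k-space matrix into the image
    via a 2-D inverse FFT; the magnitude is what gets displayed."""
    return np.abs(np.fft.ifft2(kspace))

# Simulate a fully sampled acquisition of a 128 x 128 phantom.
phantom = np.zeros((128, 128))
phantom[32:96, 48:80] = 1.0
kspace = np.fft.fft2(phantom)   # forward model: k-space = FFT of image
image = reconstruct(kspace)
```

    Real scanners additionally apply fftshift-style centering and coil/phase corrections, omitted here for brevity.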

  11. Experimental MR-guided cryotherapy of the brain with almost real-time imaging by radial k-space scanning

    International Nuclear Information System (INIS)

    Tacke, J.; Schorn, R.; Glowinski, A.; Grosskortenhaus, S.; Adam, G.; Guenther, R.W.; Rasche, V.

    1999-01-01

    Purpose: To test radial k-space scanning by MR fluoroscopy to guide and control MR-guided interstitial cryotherapy of the healthy pig brain. Methods: After MR tomographic planning of the approach, an MR-compatible experimental cryotherapy probe of 2.7 mm diameter was introduced through a 5 mm burr hole into the right frontal brain of five healthy pigs. The freeze-thaw cycles were imaged using a T1-weighted gradient echo sequence with radial k-space scanning in coronal, sagittal, and axial directions. Results: The high temporal resolution of the chosen sequence permits continuous depiction of the freezing process with good image quality and high contrast between ice and unfrozen brain parenchyma. Owing to the interactive design of the sequence, the slice plane could be chosen as desired during the measurement. The ice formation was sharply demarcated, spherical in configuration, and free of signal. Its maximum diameter was 13 mm. Conclusions: Using the novel, interactively controllable gradient echo sequence with radial k-space scanning, guidance of the intervention under fluoroscopic conditions with the advantages of MRI is possible. MR-guided cryotherapy allows minimally invasive, precisely dosable focal tissue ablation. (orig.)

  12. Modeling and simulation of tumor-influenced high resolution real-time physics-based breast models for model-guided robotic interventions

    Science.gov (United States)

    Neylon, John; Hasse, Katelyn; Sheng, Ke; Santhanam, Anand P.

    2016-03-01

    Breast radiation therapy is typically delivered with the patient in either the supine or prone position. Each of these positioning systems has its limitations in terms of tumor localization, dose to the surrounding normal structures, and patient comfort. We envision developing a pneumatically controlled breast immobilization device that will combine the benefits of both supine and prone positioning. In this paper, we present a physics-based breast deformable model that aids both the design of the breast immobilization device and a control module for the device during everyday positioning. The model geometry is generated from a subject's CT scan acquired during the treatment planning stage. A GPU-based deformable model is then generated for the breast. A mass-spring-damper approach is employed for the deformable model, with the springs modeled to represent hyperelastic tissue behavior. Each voxel of the CT scan is then associated with a mass element, which gives the model its high-resolution nature. The subject-specific elasticity is then estimated from a CT scan in the prone position. Our results show that the model can deform at >60 deformations per second, which satisfies the real-time requirement for robotic positioning. The model interacts with a computer-designed immobilization device to position the breast and tumor anatomy in a reproducible location. The design of the immobilization device was also systematically varied based on the breast geometry, tumor location, elasticity distribution and the reproducibility of the desired tumor location.
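
    For a single voxel mass, the mass-spring-damper element above reduces to the second-order ODE m·x'' + c·x' + k·x = 0, which real-time deformable-model engines typically integrate with semi-implicit (symplectic) Euler for stability at fixed small time steps. A minimal sketch with illustrative parameter values, not the paper's subject-specific hyperelastic ones:

```python
import numpy as np

def simulate(m=0.01, k=50.0, c=0.4, x0=1.0, dt=1e-3, steps=2000):
    """Integrate one mass-spring-damper element with semi-implicit
    Euler: update velocity from the current force, then position
    from the *new* velocity."""
    x, v = x0, 0.0
    xs = []
    for _ in range(steps):
        a = (-k * x - c * v) / m   # spring + damper force per unit mass
        v += a * dt
        x += v * dt
        xs.append(x)
    return np.array(xs)

traj = simulate()  # underdamped oscillation decaying toward rest
```

    In the GPU model of the paper, one such update runs per mass element per frame, which is what makes >60 deformations per second attainable.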

  13. Development of embedded real-time and high-speed vision platform

    Science.gov (United States)

    Ouyang, Zhenxing; Dong, Yimin; Yang, Hua

    2015-12-01

    Currently, high-speed vision platforms are widely used in many applications, such as robotics and the automation industry. However, traditional high-speed vision platforms rely on a personal computer (PC) for human-computer interaction, whose large size makes them unsuitable for compact systems. This paper therefore develops an embedded real-time, high-speed vision platform, ER-HVP Vision, which is able to work completely without a PC. In this new platform, an embedded CPU-based board is designed as a substitute for the PC, and a combined DSP and FPGA board is developed for implementing parallel image algorithms in the FPGA and sequential image algorithms in the DSP. The resulting ER-HVP Vision system, measuring 320 mm x 250 mm x 87 mm, is therefore considerably more compact. Experimental results are also given, indicating that real-time detection and counting of a moving target at a frame rate of 200 fps at 512 x 512 pixels are feasible on this newly developed vision platform.

  14. Platform for Automated Real-Time High Performance Analytics on Medical Image Data.

    Science.gov (United States)

    Allen, William J; Gabr, Refaat E; Tefera, Getaneh B; Pednekar, Amol S; Vaughn, Matthew W; Narayana, Ponnada A

    2018-03-01

    Biomedical data are quickly growing in volume and in variety, providing clinicians an opportunity for better clinical decision support. Here, we demonstrate a robust platform that uses software automation and high performance computing (HPC) resources to achieve real-time analytics of clinical data, specifically magnetic resonance imaging (MRI) data. We used the Agave application programming interface to facilitate communication, data transfer, and job control between an MRI scanner and an off-site HPC resource. In this use case, Agave executed the graphical pipeline tool GRAphical Pipeline Environment (GRAPE) to perform automated, real-time, quantitative analysis of MRI scans. Same-session image processing will open the door for adaptive scanning and real-time quality control, potentially accelerating the discovery of pathologies and minimizing patient callbacks. We envision this platform can be adapted to other medical instruments, HPC resources, and analytics tools.

  15. A FPGA-based architecture for real-time image matching

    Science.gov (United States)

    Wang, Jianhui; Zhong, Sheng; Xu, Wenhui; Zhang, Weijun; Cao, Zhiguo

    2013-10-01

    Image matching is a fundamental task in computer vision. It is used to establish correspondence between two images of the same scene taken from different viewpoints or at different times. However, its large computational complexity has been a challenge for most embedded systems. This paper proposes a single-FPGA image matching system, which consists of SIFT feature detection, BRIEF descriptor extraction and BRIEF matching. It optimizes the FPGA architecture of the SIFT feature detection to reduce FPGA resource utilization. Moreover, we also implement BRIEF description and matching on the FPGA. The proposed system can perform image matching at 30 fps (frames per second) for 1280x720 images. Its processing speed can meet the demands of most real-life computer vision applications.
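
    BRIEF is simple enough to sketch in software before committing it to hardware: the descriptor is a bit string of pairwise intensity comparisons inside a patch, and matching reduces to a Hamming distance, which is why it maps so well onto FPGA logic. A minimal numpy illustration (the sampling pattern, patch size and descriptor length here are illustrative, not those of the paper):

```python
import numpy as np

def brief_descriptor(patch, pairs):
    """Binary descriptor: one bit per sampled point pair, set when the
    first point is darker than the second."""
    return np.array([patch[y1, x1] < patch[y2, x2]
                     for (y1, x1), (y2, x2) in pairs], dtype=np.uint8)

def hamming(d1, d2):
    """Matching cost between two binary descriptors."""
    return int(np.count_nonzero(d1 != d2))

# Random (but fixed) 128-bit sampling pattern over a 16x16 patch.
rng = np.random.default_rng(42)
pairs = [((rng.integers(0, 16), rng.integers(0, 16)),
          (rng.integers(0, 16), rng.integers(0, 16))) for _ in range(128)]

patch = rng.integers(0, 256, (16, 16))
same = brief_descriptor(patch, pairs)
other = brief_descriptor(rng.integers(0, 256, (16, 16)), pairs)
```

    On an FPGA the comparisons run in parallel and the Hamming distance is a population count, both cheap in logic.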

  16. An anti-disturbing real time pose estimation method and system

    Science.gov (United States)

    Zhou, Jian; Zhang, Xiao-hu

    2011-08-01

    method can estimate the pose between camera and object even when some or all known features are lost, and has a quick response time thanks to GPU parallel computing. The method presented here can be widely used in vision-guided techniques to strengthen their intelligence and generalization, and it can also play an important role in autonomous navigation and positioning and in robotics in unknown environments. The results of simulation and experiments demonstrate that the proposed method suppresses noise effectively, extracts features robustly, and achieves real-time performance. Theoretical analysis and experiment show that the method is reasonable and efficient.

  17. Real-time three-dimensional imaging of epidermal splitting and removal by high-definition optical coherence tomography

    DEFF Research Database (Denmark)

    Boone, Marc; Draye, Jean Pierre; Verween, Gunther

    2014-01-01

    While real-time 3-D evaluation of human skin constructs is needed, only 2-D non-invasive imaging techniques are available. The aim of this paper is to evaluate the potential of high-definition optical coherence tomography (HD-OCT) for real-time 3-D assessment of the epidermal splitting and decell... before and after incubation. Real-time 3-D HD-OCT assessment was compared with 2-D en face assessment by reflectance confocal microscopy (RCM). (Immuno)histopathology was used as control. HD-OCT imaging allowed real-time 3-D visualization of the impact of selected agents on epidermal splitting, the dermo-epidermal junction, dermal architecture, vascular spaces and cellularity. RCM has a better resolution (1 μm) than HD-OCT (3 μm), permitting differentiation of different collagen fibres, but HD-OCT imaging has deeper penetration (570 μm) than RCM imaging (200 μm). Dispase II and NaCl treatments were found...

  18. Real-time earthquake source imaging: An offline test for the 2011 Tohoku earthquake

    Science.gov (United States)

    Zhang, Yong; Wang, Rongjiang; Zschau, Jochen; Parolai, Stefano; Dahm, Torsten

    2014-05-01

    In recent decades, great efforts have been expended in real-time seismology aimed at earthquake and tsunami early warning. One of the most important issues is the real-time assessment of earthquake rupture processes using near-field seismogeodetic networks. Currently, earthquake early warning systems are mostly based on a rapid estimate of the P-wave magnitude, which generally carries large uncertainties and the known saturation problem. In the case of the 2011 Mw9.0 Tohoku earthquake, JMA (Japan Meteorological Agency) released the first warning of the event with M7.2 after 25 s. The following magnitude updates even decreased, to M6.3-6.6. The magnitude estimate finally stabilized at M8.1 after about two minutes, which consequently led to underestimated tsunami heights. Using the newly developed Iterative Deconvolution and Stacking (IDS) method for automatic source imaging, we demonstrate an offline test of the real-time analysis of the strong-motion and GPS seismograms of the 2011 Tohoku earthquake. The results show that it would have been theoretically possible to image the complex rupture process of the 2011 Tohoku earthquake automatically soon after, or even during, the rupture process. In general, what happened on the fault could be robustly imaged with a time delay of about 30 s using either the strong-motion (KiK-net) or the GPS (GEONET) real-time data. This implies that the new real-time source imaging technique can help reduce false and missing warnings, and it should therefore play an important role in future tsunami early warning and earthquake rapid response systems.

  19. Dosimetric Comparison of Real-Time MRI-Guided Tri-Cobalt-60 Versus Linear Accelerator-Based Stereotactic Body Radiation Therapy Lung Cancer Plans.

    Science.gov (United States)

    Wojcieszynski, Andrzej P; Hill, Patrick M; Rosenberg, Stephen A; Hullett, Craig R; Labby, Zacariah E; Paliwal, Bhudatt; Geurts, Mark W; Bayliss, R Adam; Bayouth, John E; Harari, Paul M; Bassetti, Michael F; Baschnagel, Andrew M

    2017-06-01

    -quality stereotactic body radiation therapy plans that are clinically acceptable as compared to volumetric-modulated arc therapy-based plans. Real-time magnetic resonance imaging provides the unique capacity to directly observe tumor motion during treatment for purposes of motion management.

  20. Real-time image processing and control interface for remote operation of a microscope

    Science.gov (United States)

    Leng, Hesong; Wilder, Joseph

    1999-08-01

    A real-time image processing and control interface for remote operation of a microscope is presented in this paper. The system achieves real-time color image display for 640 x 480 pixel images. Multi-resolution image representation can be provided for efficient transmission over the network. Through the control interface, the computer can communicate with the programmable microscope via its RS232 serial ports. By choosing one of three scanning patterns, a sequence of images can be saved as BMP or PGM files to record information on an entire microscope slide. The system will be used by medical and graduate students at the University of Medicine and Dentistry of New Jersey for distance learning. It can also be used in many network-based telepathology applications.

  1. Development of a device for real-time light-guided vocal fold injection: A preliminary report.

    Science.gov (United States)

    Cha, Wonjae; Ro, Jung Hoon; Wang, Soo-Geun; Jang, Jeon Yeob; Cho, Jae Keun; Kim, Geun-Hyo; Lee, Yeon Woo

    2016-04-01

    Vocal fold injection is a minimally invasive technique for various vocal fold pathologies. The shortcomings of the cricothyroid (CT) membrane approach are mainly related to invisibility of the injection needle. If localization of the needle tip can be improved during vocal fold injection with the CT approach, the current problems of the technique can be overcome. We have conceptualized real-time light-guided vocal fold injection that enables simultaneous injection under precise localization. In this study, we developed a device for real-time light-guided vocal fold injection and applied it in excised canine larynx. Animal model. A single optic fiber was inserted in an unmodified 25-gauge needle. A designated connector for the device was attached to the needle, the optic fiber, and the syringe. A laser diode module was used as the light source. An ex vivo canine larynx model was used to validate the device. The location of the needle tip was accurately indicated, and the depth from the mucosa could be estimated according to the brightness and size of the red light. The needle was inserted and could be localized in the canine vocal fold by the light of the device. Precise injection at the intended location was easily performed with no manipulation of the device or the needle. Real-time light-guided vocal fold injection might be a feasible and promising technique for treatment of vocal fold pathology. It is expected that this technique can improve the precision of vocal fold injection and expand its indication in laryngology. NA. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  2. SignalR real time application development

    CERN Document Server

    Ingebrigtsen, Einar

    2013-01-01

    This step-by-step guide gives you practical advice, tips, and tricks that will have you writing real-time apps quickly and easily.If you are a .NET developer who wants to be at the cutting edge of development, then this book is for you. Real-time application development is made simple in this guide, so as long as you have basic knowledge of .NET, a copy of Visual Studio, and NuGet installed, you are ready to go.

  3. Arrow-bot: A Teaching Tool for Real-Time Embedded System Course

    Directory of Open Access Journals (Sweden)

    Zakaria Mohamad Fauzi

    2017-01-01

    Full Text Available This paper presents the design of a line-following Arduino-based mobile robot for the Real-Time Embedded System course at Universiti Tun Hussein Onn Malaysia. The real-time system (RTS) concept implemented is based on rate monotonic scheduling (RMS) on an ATmega328P microcontroller. Three infrared line sensors were used as inputs for controlling two direct current (DC) motors. The RTS software was programmed in the Arduino IDE and relied on the ChibiOS/RT real-time operating system (RTOS) library. Three independent software tasks were created to test the real-time scheduling capability, and the resulting temporal scopes were collected. The microcontroller succeeded in handling multiple tasks without missing their deadlines. This implementation of an RTOS in an embedded mobile robotics system is intended to increase students' understanding and learning capability.
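
    Under RMS, a set of periodic tasks is guaranteed schedulable when its total CPU utilisation stays below the Liu-Layland bound n(2^(1/n) - 1). A small Python sketch of that sufficient test; the task execution times and periods below are illustrative, not measurements from the course robot:

```python
def rms_schedulable(tasks):
    """Liu & Layland sufficient test for rate monotonic scheduling.

    tasks: list of (execution_time, period) pairs in the same unit.
    Returns (schedulable, utilisation, bound)."""
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)
    return u <= bound, u, bound

# E.g. hypothetical sensor-read, motor-control and telemetry tasks
# of a line follower, as (execution ms, period ms).
ok, u, bound = rms_schedulable([(1, 10), (2, 20), (5, 50)])
```

    The test is sufficient but not necessary: a set that fails the bound may still be schedulable, and an exact answer requires response-time analysis.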

  4. WE-G-18C-08: Real Time Tumor Imaging Using a Novel Dynamic Keyhole MRI Reconstruction Technique

    International Nuclear Information System (INIS)

    Lee, D; Pollock, S; Whelan, B; Keall, P; Greer, P; Kim, T

    2014-01-01

    Purpose: To test the hypothesis that the novel Dynamic Keyhole MRI reconstruction technique can accelerate image acquisition whilst maintaining high image quality for lung cancer patients. Methods: 18 MRI datasets from 5 lung cancer patients were acquired using a 3T MRI scanner. These datasets were retrospectively reconstructed using (A) the novel Dynamic Keyhole technique, (B) the conventional keyhole technique and (C) the conventional zero-filling technique. Keyhole techniques in MRI refer to techniques in which previously acquired k-space data are used to supplement undersampled data obtained in real time. The novel Dynamic Keyhole technique utilizes a previously acquired library of k-space datasets in conjunction with central k-space datasets acquired in real time. A simultaneously acquired respiratory signal is utilized to sort, match and combine the two k-space streams with respect to respiratory displacement. Reconstruction performance was quantified by (1) comparing the keyhole size (which corresponds to imaging speed) required to achieve the same image quality, and (2) maintaining a constant keyhole size across the three reconstruction methods to compare the resulting image quality to the ground-truth image. Results: (1) The Dynamic Keyhole method required a mean keyhole size 48% smaller than the conventional keyhole technique and 60% smaller than the zero-filling technique to achieve the same image quality, which directly corresponds to faster imaging. (2) When a constant keyhole size was utilized, the Dynamic Keyhole technique resulted in the smallest difference from the ground truth in the tumor region. Conclusion: The Dynamic Keyhole method is a simple and adaptable technique for clinical applications requiring real-time imaging and tumor monitoring, such as MRI-guided radiotherapy. Based on the results of this study, the Dynamic Keyhole method could increase the imaging frequency by a factor of five compared with full k-space acquisition.
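    The sort-match-combine step described above can be sketched as follows; the nearest-neighbour respiratory matching and the array shapes are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

# Sketch of the dynamic-keyhole combination: take the k-space periphery
# from a library frame acquired at the closest respiratory displacement,
# and the central "keyhole" rows from the real-time acquisition.

def dynamic_keyhole(realtime_kspace, library, resp_pos, keyhole_rows):
    """Combine real-time central k-space rows with a matched library frame.

    library: dict mapping respiratory displacement -> full k-space frame.
    Returns the combined k-space; an image then follows via
    np.fft.ifft2(np.fft.ifftshift(combined)).
    """
    # pick the library frame acquired at the closest respiratory position
    nearest = min(library, key=lambda d: abs(d - resp_pos))
    combined = library[nearest].copy()
    n = combined.shape[0]
    lo = n // 2 - keyhole_rows // 2
    hi = lo + keyhole_rows
    combined[lo:hi, :] = realtime_kspace[lo:hi, :]
    return combined
```

Because only `keyhole_rows` central lines must be acquired per frame, a smaller keyhole translates directly into a higher imaging rate, which is the trade-off quantified in the abstract.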

  5. Two dimensional microcirculation mapping with real time spatial frequency domain imaging

    Science.gov (United States)

    Zheng, Yang; Chen, Xinlin; Lin, Weihao; Cao, Zili; Zhu, Xiuwei; Zeng, Bixin; Xu, M.

    2018-02-01

    We present a spatial frequency domain imaging (SFDI) study of local hemodynamics in the human finger cuticle of healthy volunteers performing paced breathing and the forearm of healthy young adults performing normal breathing with our recently developed Real Time Single Snapshot Multiple Frequency Demodulation - Spatial Frequency Domain Imaging (SSMD-SFDI) system. A two-layer model was used to map the concentrations of deoxy-, oxy-hemoglobin, melanin, epidermal thickness and scattering properties at the subsurface of the forearm and the finger cuticle. The oscillations of the concentrations of deoxy- and oxy-hemoglobin at the subsurface of the finger cuticle and forearm induced by paced breathing and normal breathing, respectively, were found to be close to out-of-phase, attributed to the dominance of the blood flow modulation by paced breathing or heartbeat. Our results suggest that the real time SFDI platform may serve as one effective imaging modality for microcirculation monitoring.
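    For context, conventional SFDI recovers the modulation amplitude from three captures of sinusoidal illumination shifted by 120 degrees; the single-snapshot SSMD method above avoids these repeated captures, which is what makes the platform real time. A sketch of the conventional three-phase demodulation:

```python
import numpy as np

# Conventional three-phase SFDI demodulation (the textbook variant, not
# the authors' single-snapshot SSMD method). i1..i3 are images captured
# under sinusoidal illumination phase-shifted by 120 degrees.

def sfdi_demodulate(i1, i2, i3):
    """Per-pixel AC and DC modulation amplitudes."""
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2)
    dc = (i1 + i2 + i3) / 3.0
    return ac, dc
```

The AC amplitude at each spatial frequency feeds the inverse model that yields absorption and scattering, from which chromophore concentrations such as oxy- and deoxy-hemoglobin are derived.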

  6. Research on Modeling Technology of Virtual Robot Based on LabVIEW

    Science.gov (United States)

    Wang, Z.; Huo, J. L.; Y Sun, L.; Y Hao, X.

    2017-12-01

    Because of the dangerous working environment, the underwater operation robot for nuclear power stations requires manual teleoperation, and the robot's position and orientation must be indicated to the operator in real time. In this paper, geometric modeling of the virtual robot and its working environment is accomplished using SolidWorks software, realizing accurate modeling and assembly of the robot. LabVIEW software is then used to read the model, forward and inverse kinematics models of the manipulator are established, and hierarchical modeling of the virtual robot and computer graphics modeling are realized. Experimental results show that the method studied in this paper can be successfully applied to a robot control system.

  7. Near-real-time feedback control system for liver thermal ablations based on self-referenced temperature imaging

    International Nuclear Information System (INIS)

    Keserci, Bilgin M.; Kokuryo, Daisuke; Suzuki, Kyohei; Kumamoto, Etsuko; Okada, Atsuya; Khankan, Azzam A.; Kuroda, Kagayaki

    2006-01-01

    Our challenge was to design and implement a dedicated temperature imaging feedback control system to guide and assist in a thermal liver ablation procedure in a double-donut 0.5T open MR scanner. This system has near-real-time feedback capability based on a newly developed 'self-referenced' temperature imaging method using 'moving-slab' and complex-field-fitting techniques. Two phantom validation studies and one ex vivo experiment were performed to compare the newly developed self-referenced method with the conventional subtraction method and to evaluate the ability of the feedback control system in the same MR scanner. The near-real-time feedback system was achieved by integrating the following primary functions: (1) imaging of the moving organ's temperature; (2) on-line needle tip tracking; (3) automatic turn-on/off of the heating devices; (4) a novel Windows-based user interface. In the first part of the validation studies, microwave heating was applied in an agar phantom using a fast spoiled gradient recalled echo in a steady state sequence. In the second part of the validation and the ex vivo study, target visualization, treatment planning and monitoring, and temperature and thermal dose visualization with the graphical user interface of the thermal ablation software were demonstrated. Furthermore, MR imaging with the 'self-referenced' temperature imaging method has the ability to localize the hot spot in the heated region and measure temperature elevation during the experiment. In conclusion, we have demonstrated an interactively controllable feedback control system that offers a new method for the guidance of liver thermal ablation procedures, as well as improving the ability to assist ablation procedures in an open MR scanner.
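    The conventional subtraction method that the self-referenced technique is compared against is proton-resonance-frequency (PRF) shift thermometry: the temperature change is proportional to the phase difference between a baseline and a current gradient-echo image. A sketch using standard literature constants; the B0 and TE values in the note below are examples, not the paper's protocol:

```python
import math

# PRF-shift thermometry: the conventional baseline-subtraction approach.
# Constants are standard literature values for aqueous tissue.

GAMMA_HZ_PER_T = 42.576e6        # proton gyromagnetic ratio / (2*pi)
ALPHA_PPM_PER_C = -0.01          # PRF temperature coefficient of water

def prf_delta_t(delta_phase_rad, b0_t, te_s):
    """Temperature change (deg C) from the phase difference of two
    gradient-echo images at field b0_t (tesla) and echo time te_s (s)."""
    return delta_phase_rad / (
        2 * math.pi * GAMMA_HZ_PER_T * ALPHA_PPM_PER_C * 1e-6 * b0_t * te_s)
```

At 0.5 T and TE = 20 ms, a 1 degree C change corresponds to only about 27 mrad of phase, which is why any motion between the baseline and treatment images corrupts the subtraction method and motivates self-referenced approaches for moving organs like the liver.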

  8. Real-time MRI guidance of cardiac interventions.

    Science.gov (United States)

    Campbell-Washburn, Adrienne E; Tavallaei, Mohammad A; Pop, Mihaela; Grant, Elena K; Chubb, Henry; Rhode, Kawal; Wright, Graham A

    2017-10-01

    Cardiac magnetic resonance imaging (MRI) is appealing to guide complex cardiac procedures because it is ionizing radiation-free and offers flexible soft-tissue contrast. Interventional cardiac MR promises to improve existing procedures and enable new ones for complex arrhythmias, as well as congenital and structural heart disease. Guiding invasive procedures demands faster image acquisition, reconstruction and analysis, as well as intuitive intraprocedural display of imaging data. Standard cardiac MR techniques such as 3D anatomical imaging, cardiac function and flow, parameter mapping, and late-gadolinium enhancement can be used to gather valuable clinical data at various procedural stages. Rapid intraprocedural image analysis can extract and highlight critical information about interventional targets and outcomes. In some cases, real-time interactive imaging is used to provide a continuous stream of images displayed to interventionalists for dynamic device navigation. Alternatively, devices are navigated relative to a roadmap of major cardiac structures generated through fast segmentation and registration. Interventional devices can be visualized and tracked throughout a procedure with specialized imaging methods. In a clinical setting, advanced imaging must be integrated with other clinical tools and patient data. In order to perform these complex procedures, interventional cardiac MR relies on customized equipment, such as interactive imaging environments, in-room image display, audio communication, hemodynamic monitoring and recording systems, and electroanatomical mapping and ablation systems. Operating in this sophisticated environment requires coordination and planning. This review provides an overview of the imaging technology used in MRI-guided cardiac interventions. Specifically, this review outlines clinical targets, standard image acquisition and analysis tools, and the integration of these tools into clinical workflow. 

  9. A tele-operated mobile ultrasound scanner using a light-weight robot.

    Science.gov (United States)

    Delgorge, Cécile; Courrèges, Fabien; Al Bassit, Lama; Novales, Cyril; Rosenberger, Christophe; Smith-Guerin, Natalie; Brù, Concepció; Gilabert, Rosa; Vannoni, Maurizio; Poisson, Gérard; Vieyres, Pierre

    2005-03-01

    This paper presents a new tele-operated robotic chain for real-time ultrasound image acquisition and medical diagnosis. This system has been developed in the frame of the Mobile Tele-Echography Using an Ultralight Robot European project. A light-weight six-degrees-of-freedom serial robot, with a remote center of motion, has been specially designed for this application. It holds and moves a real probe on a distant patient according to the expert's gesture and permits image acquisition using a standard ultrasound device. The combination of the robot's mechanical structure and a dedicated control law, particularly near the singular configuration, allows good path following and accurate robotized gestures. The choice of compression techniques for image transmission enables a compromise between data rate and quality. These combined approaches, for robotics and image processing, enable the medical specialist to better control the remote ultrasound probe holder system and to receive stable, good-quality ultrasound images to make a diagnosis via any type of communication link, from terrestrial to satellite. Clinical tests have been performed since April 2003. They used either satellite or Integrated Services Digital Network (ISDN) lines with a theoretical bandwidth of 384 Kb/s. They showed the tele-echography system helped to identify 66% of lesions and 83% of symptomatic pathologies.

  10. Magnetic Particle Imaging for Real-Time Perfusion Imaging in Acute Stroke.

    Science.gov (United States)

    Ludewig, Peter; Gdaniec, Nadine; Sedlacik, Jan; Forkert, Nils D; Szwargulski, Patryk; Graeser, Matthias; Adam, Gerhard; Kaul, Michael G; Krishnan, Kannan M; Ferguson, R Matthew; Khandhar, Amit P; Walczak, Piotr; Fiehler, Jens; Thomalla, Götz; Gerloff, Christian; Knopp, Tobias; Magnus, Tim

    2017-10-24

    The fast and accurate assessment of cerebral perfusion is fundamental for the diagnosis and successful treatment of stroke patients. Magnetic particle imaging (MPI) is a new radiation-free tomographic imaging method with a superior temporal resolution compared to other conventional imaging methods. In addition, MPI scanners can be built as prehospital mobile devices, which require less complex infrastructure than computed tomography (CT) and magnetic resonance imaging (MRI). With these advantages, MPI could accelerate stroke diagnosis and treatment, thereby improving outcomes. Our objective was to investigate the capabilities of MPI to detect perfusion deficits in a murine model of ischemic stroke. Cerebral ischemia was induced by inserting a microfilament into the internal carotid artery in C57BL/6 mice, thereby blocking blood flow into the middle cerebral artery. After injection of a contrast agent (superparamagnetic iron oxide nanoparticles) specifically tailored for MPI, cerebral perfusion and vascular anatomy were assessed by the MPI scanner within seconds. To validate and compare our MPI data, we performed perfusion imaging with a small-animal MRI scanner. MPI detected the perfusion deficits in the ischemic brain, comparable to those obtained with MRI but in real time. For the first time, we showed that MPI could be used as a diagnostic tool for relevant diseases in vivo, such as ischemic stroke. Due to its shorter image acquisition times and increased temporal resolution compared to MRI or CT, we expect that MPI offers the potential to improve stroke imaging and treatment.

  11. Real-Time 3D Image Guidance Using a Standard LINAC: Measured Motion, Accuracy, and Precision of the First Prospective Clinical Trial of Kilovoltage Intrafraction Monitoring-Guided Gating for Prostate Cancer Radiation Therapy

    DEFF Research Database (Denmark)

    Keall, Paul J; Ng, Jin Aun; Juneja, Prabhjot

    2016-01-01

    for prostate cancer radiation therapy. In this paper we report on the measured motion accuracy and precision using real-time KIM-guided gating. METHODS AND MATERIALS: Imaging and motion information from the first 200 fractions from 6 patient prostate cancer radiation therapy volumetric modulated arc therapy...... treatments were analyzed. A 3-mm/5-second action threshold was used to trigger a gating event where the beam is paused and the couch position adjusted to realign the prostate to the treatment isocenter. To quantify the in vivo accuracy and precision, KIM was compared with simultaneously acquired k...
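    The 3-mm/5-second action threshold can be read as a simple trigger rule: pause the beam once the target's displacement from the isocenter has exceeded 3 mm continuously for 5 seconds, then realign the couch. A sketch; the sampling interval, data layout and reset-on-couch-shift behaviour are assumptions, not the clinical KIM implementation:

```python
# Sketch of a 3 mm / 5 s gating rule: trigger a gating event when the
# target's 3D displacement from isocenter stays above the threshold for
# the full hold time.

def gating_events(displacements_mm, dt_s, threshold_mm=3.0, hold_s=5.0):
    """Indices at which the beam is paused and the couch realigned."""
    events = []
    over_since = None
    for i, d in enumerate(displacements_mm):
        if d > threshold_mm:
            if over_since is None:
                over_since = i
            if (i - over_since) * dt_s >= hold_s:
                events.append(i)
                over_since = None   # couch shift recentres the target
        else:
            over_since = None
    return events
```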

  12. Development of a SG Tube Inspection/maintenance Robot

    International Nuclear Information System (INIS)

    Shin, Ho Cheol; Jung, Kyung Min; Choi, Chang Hwan; Kim, Seung Ho

    2005-01-01

    A radiation hardened robot system is developed which assists in an automatic non-destructive testing and the repair of nuclear steam generator tubes. And a control system is developed. For easy carriage and installation, the robot system consists of three separable parts: a manipulator, a water chamber entering and leaving device of the manipulator and a manipulator base pose adjusting device. The kinematic analysis using the grid method was performed to search for the optimal manipulator's link parameters, and the stress analysis of the robotic system was also carried out for a structural safety verification. The robotic control system consists of a main personal computer placed near the operator and a local robotic position controller placed near the steam generator. A software program to control and manage the robotic system has been developed on the NT based OS to increase the usability. The software program provides a robot installation function, a robot calibration function, a managing and arranging function for the eddy-current test, a real time 3- D graphic simulation function which offers a remote reality to operators and so on. The image information acquired from the camera attached to the end-effector is used to calibrate the end-effector pose error and the time-delayed control algorithm is applied to calculate the optimal PID gain of the position controller. Eddy-current probe guide devices, a brushing tool, a motorized plugging tool and a U-tube internal visual inspection system have been developed. A data acquisition system was built to acquire and process the eddy-current signals, and a software program for eddy-current signal acquisition and processing. The developed robotic system has been tested in the Ulchin NPP type steam generator mockup in a laboratory. The final function test was carried out at the Kori Npp type steam generator mockup in the Kori training center

  13. Towards real-time cardiovascular magnetic resonance guided transarterial CoreValve implantation: in vivo evaluation in swine

    Science.gov (United States)

    2012-01-01

    Background Real-time cardiovascular magnetic resonance (rtCMR) is considered attractive for guiding TAVI. Owing to an unlimited scan plane orientation and an unsurpassed soft-tissue contrast with simultaneous device visualization, rtCMR is presumed to allow safe device navigation and to offer optimal orientation for precise axial positioning. We sought to evaluate the preclinical feasibility of rtCMR-guided transarterial aortic valve implantation (TAVI) using the nitinol-based Medtronic CoreValve bioprosthesis. Methods rtCMR-guided transfemoral (n = 2) and transsubclavian (n = 6) TAVI was performed in 8 swine using the original CoreValve prosthesis and a modified, CMR-compatible delivery catheter without ferromagnetic components. Results rtCMR using TrueFISP sequences provided reliable imaging guidance during TAVI, which was successful in 6 swine. One transfemoral attempt failed due to unsuccessful aortic arch passage and one pericardial tamponade with subsequent death occurred as a result of ventricular perforation by the device tip due to an operating error, this complication being detected without delay by rtCMR. rtCMR allowed for a detailed, simultaneous visualization of the delivery system with the mounted stent-valve and the surrounding anatomy, resulting in improved visualization during navigation through the vasculature, passage of the aortic valve, and during placement and deployment of the stent-valve. Post-interventional success could be confirmed using ECG-triggered time-resolved cine-TrueFISP and flow-sensitive phase-contrast sequences. Intended valve position was confirmed by ex-vivo histology. Conclusions Our study shows that rtCMR-guided TAVI using the commercial CoreValve prosthesis in conjunction with a modified delivery system is feasible in swine, allowing improved procedural guidance including immediate detection of complications and direct functional assessment with reduction of radiation and omission of contrast media. PMID:22453050

  14. Towards real-time cardiovascular magnetic resonance guided transarterial CoreValve implantation: in vivo evaluation in swine.

    Science.gov (United States)

    Kahlert, Philipp; Parohl, Nina; Albert, Juliane; Schäfer, Lena; Reinhardt, Renate; Kaiser, Gernot M; McDougall, Ian; Decker, Brad; Plicht, Björn; Erbel, Raimund; Eggebrecht, Holger; Ladd, Mark E; Quick, Harald H

    2012-03-27

    Real-time cardiovascular magnetic resonance (rtCMR) is considered attractive for guiding TAVI. Owing to an unlimited scan plane orientation and an unsurpassed soft-tissue contrast with simultaneous device visualization, rtCMR is presumed to allow safe device navigation and to offer optimal orientation for precise axial positioning. We sought to evaluate the preclinical feasibility of rtCMR-guided transarterial aortic valve implantation (TAVI) using the nitinol-based Medtronic CoreValve bioprosthesis. rtCMR-guided transfemoral (n = 2) and transsubclavian (n = 6) TAVI was performed in 8 swine using the original CoreValve prosthesis and a modified, CMR-compatible delivery catheter without ferromagnetic components. rtCMR using TrueFISP sequences provided reliable imaging guidance during TAVI, which was successful in 6 swine. One transfemoral attempt failed due to unsuccessful aortic arch passage and one pericardial tamponade with subsequent death occurred as a result of ventricular perforation by the device tip due to an operating error, this complication being detected without delay by rtCMR. rtCMR allowed for a detailed, simultaneous visualization of the delivery system with the mounted stent-valve and the surrounding anatomy, resulting in improved visualization during navigation through the vasculature, passage of the aortic valve, and during placement and deployment of the stent-valve. Post-interventional success could be confirmed using ECG-triggered time-resolved cine-TrueFISP and flow-sensitive phase-contrast sequences. Intended valve position was confirmed by ex-vivo histology. Our study shows that rtCMR-guided TAVI using the commercial CoreValve prosthesis in conjunction with a modified delivery system is feasible in swine, allowing improved procedural guidance including immediate detection of complications and direct functional assessment with reduction of radiation and omission of contrast media.

  15. Towards real-time cardiovascular magnetic resonance guided transarterial CoreValve implantation: in vivo evaluation in swine

    Directory of Open Access Journals (Sweden)

    Kahlert Philipp

    2012-03-01

    Full Text Available Abstract Background Real-time cardiovascular magnetic resonance (rtCMR) is considered attractive for guiding TAVI. Owing to an unlimited scan plane orientation and an unsurpassed soft-tissue contrast with simultaneous device visualization, rtCMR is presumed to allow safe device navigation and to offer optimal orientation for precise axial positioning. We sought to evaluate the preclinical feasibility of rtCMR-guided transarterial aortic valve implantation (TAVI) using the nitinol-based Medtronic CoreValve bioprosthesis. Methods rtCMR-guided transfemoral (n = 2) and transsubclavian (n = 6) TAVI was performed in 8 swine using the original CoreValve prosthesis and a modified, CMR-compatible delivery catheter without ferromagnetic components. Results rtCMR using TrueFISP sequences provided reliable imaging guidance during TAVI, which was successful in 6 swine. One transfemoral attempt failed due to unsuccessful aortic arch passage and one pericardial tamponade with subsequent death occurred as a result of ventricular perforation by the device tip due to an operating error, this complication being detected without delay by rtCMR. rtCMR allowed for a detailed, simultaneous visualization of the delivery system with the mounted stent-valve and the surrounding anatomy, resulting in improved visualization during navigation through the vasculature, passage of the aortic valve, and during placement and deployment of the stent-valve. Post-interventional success could be confirmed using ECG-triggered time-resolved cine-TrueFISP and flow-sensitive phase-contrast sequences. Intended valve position was confirmed by ex-vivo histology. 
Conclusions Our study shows that rtCMR-guided TAVI using the commercial CoreValve prosthesis in conjunction with a modified delivery system is feasible in swine, allowing improved procedural guidance including immediate detection of complications and direct functional assessment with reduction of radiation and omission of contrast media.

  16. Real-time depth processing for embedded platforms

    Science.gov (United States)

    Rahnama, Oscar; Makarov, Aleksej; Torr, Philip

    2017-05-01

    Obtaining depth information of a scene is an important requirement in many computer-vision and robotics applications. For embedded platforms, passive stereo systems have many advantages over their active counterparts (e.g., LiDAR, infrared). They are power efficient, cheap, robust to lighting conditions and inherently synchronized to the RGB images of the scene. However, stereo depth estimation is a computationally expensive task that operates over large amounts of data. For embedded applications, which are often constrained by power consumption, obtaining accurate results in real time is a challenge. We demonstrate a computationally and memory-efficient implementation of a stereo block-matching algorithm in FPGA. The computational core achieves a throughput of 577 fps at standard VGA resolution whilst consuming less than 3 W of power. The data is processed using an in-stream approach that minimizes memory-access bottlenecks and best matches the raster-scan readout of modern digital image sensors.
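    The block-matching core can be illustrated in a few lines of unoptimized Python; the FPGA streams an equivalent sum-of-absolute-differences (SAD) search, but the window size and disparity range here are arbitrary choices:

```python
import numpy as np

# Naive SAD block matching along horizontal scanlines of a rectified
# stereo pair. Parameters are illustrative; this is a reference model,
# not the in-stream FPGA design described above.

def disparity_map(left, right, max_disp=16, block=5):
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1].astype(np.int32)
            # cost of shifting the candidate window d pixels to the left
            costs = [np.abs(patch - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1]
                            .astype(np.int32)).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

An FPGA version pipelines the same per-disparity SAD costs as pixels arrive from the sensor, which is why the in-stream approach avoids buffering whole frames.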

  17. Intraoperative registered transrectal ultrasound guidance for robot-assisted laparoscopic radical prostatectomy.

    Science.gov (United States)

    Mohareri, Omid; Ischia, Joseph; Black, Peter C; Schneider, Caitlin; Lobo, Julio; Goldenberg, Larry; Salcudean, Septimiu E

    2015-01-01

    To provide unencumbered real-time ultrasound image guidance during robot-assisted laparoscopic radical prostatectomy, we developed a robotic transrectal ultrasound system that tracks the da Vinci® Surgical System instruments. We describe our initial clinical experience with this system. After an evaluation in a canine model, 20 patients were enrolled in the study. During each procedure the transrectal ultrasound transducer was manually positioned using a brachytherapy stabilizer to provide good imaging of the prostate. Then the transrectal ultrasound was registered to the da Vinci robot by a previously validated procedure. Finally, automatic rotation of the transrectal ultrasound was enabled such that the transrectal ultrasound imaging plane safely tracked the tip of the da Vinci instrument controlled by the surgeon, while real-time transrectal ultrasound images were relayed to the surgeon at the da Vinci console. Tracking was activated during all critical stages of the surgery. The transrectal ultrasound robot was easy to set up and use, adding 7 minutes (range 5 to 14) to the procedure. It did not require an assistant or additional control devices. Qualitative feedback was acquired from the surgeons, who found transrectal ultrasound useful in identifying the urethra while passing the dorsal venous complex suture, defining the prostate-bladder interface during bladder neck dissection, identifying the seminal vesicles and their location with respect to the rectal wall, and identifying the distal prostate boundary at the apex. Real-time, registered robotic transrectal ultrasound guidance with automatic instrument tracking during robot-assisted laparoscopic radical prostatectomy is feasible and potentially useful. The results justify further studies to establish whether the approach can improve procedure outcomes. Copyright © 2015 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
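    The tracking step reduces to a geometry problem: given the instrument tip position in the transducer frame (after registration), roll the imaging plane about the probe axis until it contains the tip. A toy version, with the frame convention and roll limit as assumptions rather than the paper's actual parameters:

```python
import math

# Illustrative TRUS plane-tracking geometry: with the transducer roll
# axis along z, rotate the sagittal imaging plane to contain the
# registered instrument tip. The frame convention and roll limit are
# assumptions, not values from the clinical system.

def plane_angle_deg(tip_xyz, max_roll_deg=45.0):
    """Roll angle (deg) aiming the imaging plane at the tip, clamped to
    the transducer's mechanical range."""
    x, y, _ = tip_xyz
    angle = math.degrees(math.atan2(y, x))
    return max(-max_roll_deg, min(max_roll_deg, angle))
```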

  18. Development of a spherically focused phased array transducer for ultrasonic image-guided hyperthermia

    OpenAIRE

    Liu, Jingfei; Foiret, Josquin; Stephens, Douglas N.; Le Baron, Olivier; Ferrara, Katherine W.

    2016-01-01

    A 1.5 MHz prolate spheroidal therapeutic array with 128 circular elements was designed to accommodate standard imaging arrays for ultrasonic image-guided hyperthermia. The implementation of this dual-array system integrates real-time therapeutic and imaging functions with a single ultrasound system (Vantage 256, Verasonics). To facilitate applications involving small animal imaging and therapy the array was designed to have a beam depth of field smaller than 3.5 mm and to electronically steer...

  19. ROS-IGTL-Bridge: an open network interface for image-guided therapy using the ROS environment.

    Science.gov (United States)

    Frank, Tobias; Krieger, Axel; Leonard, Simon; Patel, Niravkumar A; Tokuda, Junichi

    2017-08-01

    With the growing interest in advanced image guidance for surgical robot systems, rapid integration and testing of robotic devices and medical image computing software are becoming essential in research and development. Maximizing the use of existing engineering resources built on widely accepted platforms in different fields, such as the robot operating system (ROS) in robotics and 3D Slicer in medical image computing, could simplify these tasks. We propose a new open network bridge interface integrated in ROS to ensure seamless cross-platform data sharing. A ROS node named ROS-IGTL-Bridge was implemented. It establishes a TCP/IP network connection between the ROS environment and external medical image computing software using the OpenIGTLink protocol. The node exports ROS messages to the external software over the network and vice versa simultaneously, allowing seamless and transparent data sharing between the ROS-based devices and the medical image computing platforms. Performance tests demonstrated that the bridge could stream transforms, strings, points, and images at 30 fps in both directions successfully. The data transfer latency was measured, and the bridge could achieve 900 fps when streaming transforms alone. Additionally, the bridge was demonstrated in two representative systems: a mock image-guided surgical robot setup consisting of 3D Slicer and Lego Mindstorms with ROS, as a prototyping and educational platform for IGT research; and the smart tissue autonomous robot surgical setup with 3D Slicer. The study demonstrated that the bridge enabled cross-platform data sharing between ROS and medical image computing software. This will allow rapid and seamless integration of advanced image-based planning/navigation offered by medical image computing software such as 3D Slicer into ROS-based surgical robot systems.
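    The bridge's job, shuttling typed messages between two processes over TCP, can be illustrated with a toy framing scheme. Note this is NOT the real OpenIGTLink wire format, which uses a fixed binary header carrying a version, device name, timestamp, body size and CRC:

```python
import socket
import struct

# Toy typed-message framing over a stream socket: a 12-byte padded type
# name plus a 4-byte big-endian payload length. A simplification of what
# a bridge node does, not the OpenIGTLink protocol itself.

def send_msg(sock, msg_type, payload):
    name = msg_type.encode().ljust(12, b'\0')[:12]
    sock.sendall(name + struct.pack('!I', len(payload)) + payload)

def recv_msg(sock):
    header = _recv_exact(sock, 16)
    msg_type = header[:12].rstrip(b'\0').decode()
    (length,) = struct.unpack('!I', header[12:16])
    return msg_type, _recv_exact(sock, length)

def _recv_exact(sock, n):
    # TCP is a byte stream, so loop until exactly n bytes arrive
    buf = b''
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError('socket closed')
        buf += chunk
    return buf
```

A real bridge additionally converts between ROS message types and the corresponding OpenIGTLink message bodies (e.g. a 4x4 transform) on each side of the socket.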

  20. Aqueous Angiography: Real-Time and Physiologic Aqueous Humor Outflow Imaging.

    Directory of Open Access Journals (Sweden)

    Sindhu Saraswathy

    Full Text Available Trabecular meshwork (TM) bypass surgeries attempt to enhance aqueous humor outflow (AHO) to lower intraocular pressure (IOP). While TM bypass results are promising, inconsistent success is seen. One hypothesis for this variability rests upon segmental (non-360 degrees uniform) AHO. We describe aqueous angiography as a real-time and physiologic AHO imaging technique in model eyes as a way to simulate live AHO imaging. Pig (n = 46) and human (n = 6) enucleated eyes were obtained, oriented based upon the inferior oblique insertion, and pre-perfused with balanced salt solution via a Lewicky AC maintainer through a 1 mm side-port. Fluorescein (2.5%) was introduced intracamerally at 10 or 30 mm Hg. With an angiographer, infrared and fluorescent (486 nm) images were acquired. Image processing allowed for collection of pixel information based on intensity or location for statistical analyses. Concurrent OCT was performed, and fixable fluorescent dextrans were introduced into the eye for histological analysis of angiographically active areas. Aqueous angiography yielded high-quality images with segmental patterns (p<0.0001; Kruskal-Wallis test). No single quadrant was consistently identified as the primary quadrant of angiographic signal (p = 0.06-0.86; Kruskal-Wallis test). Regions of high proximal signal did not necessarily correlate with regions of high distal signal. Angiographically positive but not negative areas demonstrated intrascleral lumens on OCT images. Aqueous angiography with fluorescent dextrans led to their trapping in AHO pathways. Aqueous angiography is a real-time and physiologic AHO imaging technique in model eyes.

  1. Correction of Visual Perception Based on Neuro-Fuzzy Learning for the Humanoid Robot TEO

    Directory of Open Access Journals (Sweden)

    Juan Hernandez-Vicen

    2018-03-01

    Full Text Available New applications related to robotic manipulation or transportation tasks, with or without physical grasping, are continuously being developed. To perform these activities, the robot takes advantage of different kinds of perception. One of the key perceptions in robotics is vision. However, problems related to image processing make the application of visual information within robot control algorithms difficult. Camera-based systems have inherent errors that affect the quality and reliability of the information obtained, and the need to correct image distortion slows down image parameter computing, which decreases the performance of control algorithms. In this paper, a new approach to correcting several sources of visual distortion in a single computing step is proposed. The goal of this system/algorithm is the computation of the tilt angle of an object transported by a robot, minimizing inherent image errors and increasing computing speed. After capturing the image, the computer system extracts the angle using a fuzzy filter that corrects all possible distortions at the same time, obtaining the real angle in only one processing step. This filter has been developed by means of neuro-fuzzy learning techniques, using datasets with information obtained from real experiments. In this way, the computing time has been decreased and the performance of the application has been improved. The resulting algorithm has been tested experimentally in robot transportation tasks with the humanoid robot TEO (Task Environment Operator) from the University Carlos III of Madrid.

  2. Real-time particle image velocimetry based on FPGA technology;Velocimetria PIV en tiempo real basada en logica programable FPGA

    Energy Technology Data Exchange (ETDEWEB)

    Iriarte Munoz, Jose Miguel [Universidad Nacional de Cuyo, Instituto Balseiro, Centro Atomico Bariloche (Argentina)

    2008-07-01

    Particle image velocimetry (PIV), based on a laser sheet, is a method of image processing and calculation of distributed velocity fields. It is well established as a fluid dynamics measurement tool, capable of measuring, without large errors, a distributed velocity field in liquid, gas, and multiphase flows. Images of particles are processed by means of computationally demanding algorithms, which makes real-time implementation difficult. The most probable displacements are found by applying a two-dimensional cross-correlation function. In this work, we detail how real-time visualization of the PIV method can be achieved by designing an adaptive embedded architecture based on FPGA technology to capture video and process the two-dimensional cross-correlation algorithm in real time. We show first results of a physical velocity field captured and processed in real time by this platform.
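The core PIV step the abstract mentions can be sketched in NumPy: the most probable displacement between two interrogation windows is the peak of their 2-D cross-correlation, computed here via FFTs (the same operation an FPGA pipeline would implement in hardware). The window size and shift are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
win_a = rng.random((32, 32))                  # first-exposure interrogation window
shift = (3, 5)                                # true particle displacement (rows, cols)
win_b = np.roll(win_a, shift, axis=(0, 1))    # second exposure: shifted copy

# Circular cross-correlation via the FFT correlation theorem:
# corr = IFFT( conj(FFT(a)) * FFT(b) ); its peak sits at the displacement.
corr = np.fft.ifft2(np.fft.fft2(win_a).conj() * np.fft.fft2(win_b)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
```

Real PIV adds windowing, background subtraction, and sub-pixel peak interpolation, but the FFT correlation above is the computational kernel whose cost motivates the FPGA design.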

  3. SU-E-J-181: Magnetic Resonance Image-Guided Radiation Therapy Workflow: Initial Clinical Experience

    International Nuclear Information System (INIS)

    Green, O; Kashani, R; Santanam, L; Wooten, H; Li, H; Rodriguez, V; Hu, Y; Mutic, S; Hand, T; Victoria, J; Steele, C

    2014-01-01

    Purpose: The aims of this work are to describe the workflow and initial clinical experience treating patients with an MRI-guided radiotherapy (MR-IGRT) system. Methods: Patient treatments with a novel MR-IGRT system started at our institution in mid-January. The system consists of an on-board 0.35-T MRI, with IMRT-capable delivery via doubly-focused MLCs on three 60Co heads. In addition to volumetric MR imaging, real-time planar imaging is performed during treatment. So far, eleven patients have started treatment (six have finished), ranging from bladder treatments to lung SBRT. While the system is capable of online adaptive radiotherapy and gating, a conventional workflow was used at the start, consisting of volumetric imaging for patient setup using visible tumor, evaluation of tumor motion outside the PTV on cine images, and real-time imaging. Workflow times were collected and evaluated to increase efficiency and to evaluate the feasibility of adding the adaptive and gating features while maintaining reasonable patient throughput. Results: For the first month, physicians attended every fraction to provide guidance on identifying the tumor and on acceptable levels of positioning and anatomical deviation. Average total treatment times (including setup) were reduced from 55 to 45 min after physician presence was no longer required and the therapists had learned to align patients based on soft-tissue imaging. At present, the source strengths are at half maximum (7.7 kCi each), so beam-on times will be reduced after source replacement. The current patient load is 10 per day, with an increase to 25 anticipated in the near future. Conclusion: On-board, real-time MRI-guided RT has been incorporated into clinical use. Treatment times were kept to reasonable lengths while including volumetric imaging, previews of tumor movement, and physician evaluation. Workflow and timing are being continuously evaluated to increase efficiency. In the near future, the adaptive and gating capabilities of the system will be implemented.

  4. Real-time detection of natural objects using AM-coded spectral matching imager

    Science.gov (United States)

    Kimachi, Akira

    2005-01-01

    This paper describes application of the amplitude-modulation (AM)-coded spectral matching imager (SMI) to real-time detection of natural objects such as human beings, animals, vegetables, or geological objects or phenomena, which are much more liable to change with time than artificial products while often exhibiting characteristic spectral functions associated with some specific activity states. The AM-SMI produces correlation between spectral functions of the object and a reference at each pixel of the correlation image sensor (CIS) in every frame, based on orthogonal amplitude modulation of each spectral channel and simultaneous demodulation of all channels on the CIS. This principle makes the SMI suitable for monitoring the dynamic behavior of natural objects in real time by looking at a particular spectral reflectance or transmittance function. A twelve-channel multispectral light source was developed with improved spatial uniformity of spectral irradiance compared to a previous one. Experimental results of spectral matching imaging of human skin and vegetable leaves are demonstrated, as well as a preliminary feasibility test of imaging a reflective object using a test color chart.
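The AM-SMI principle can be made concrete with a numerical sketch: each spectral channel's illumination is amplitude-modulated at its own orthogonal carrier frequency, and per-pixel lock-in demodulation recovers each channel's contribution, which is then correlated with a reference spectral function. The carrier frequencies and spectra below are invented for illustration.

```python
import numpy as np

n_frames = 256
t = np.arange(n_frames)
freqs = [8, 16, 24]                           # orthogonal carriers (cycles/sequence)
object_spectrum = np.array([0.9, 0.2, 0.6])   # pixel reflectance per channel
reference = np.array([1.0, 0.0, 0.5])         # reference spectral function

# Detected intensity at one pixel: sum of the AM-modulated channels
signal = sum(r * (1 + np.cos(2 * np.pi * f * t / n_frames))
             for r, f in zip(object_spectrum, freqs))

# Lock-in demodulation: project the frame sequence onto each carrier.
# Orthogonality of the carriers separates the channels exactly.
demod = np.array([2 / n_frames * np.sum(signal * np.cos(2 * np.pi * f * t / n_frames))
                  for f in freqs])

# Spectral matching score: correlation of recovered spectrum with the reference
score = float(np.dot(demod, reference))
```

On the CIS this demodulation happens in analog circuitry at every pixel simultaneously, which is what makes the frame-rate spectral matching real-time.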

  5. A Precise and Real-Time Loop-closure Detection for SLAM Using the RSOM Tree

    Directory of Open Access Journals (Sweden)

    Siyang Song

    2015-06-01

    In robotic applications of visual simultaneous localization and mapping (SLAM) techniques, loop-closure detection determines whether or not a current location has previously been visited. We present an online and incremental approach to detecting loops when images come from an already visited scene, while learning new information from the environment. Instead of utilizing a bag-of-words model, an attributed graph model is applied to represent images and measure the similarity between pairs of images in our method. In order to position a camera in visual environments in real time, the method retrieves images from the database through a clustering tree that we call the RSOM (recursive self-organizing feature map) tree. Once a match is found between the current graph and several graphs in the database, a threshold is used to judge whether the loop closure is accepted or rejected. The results demonstrate the method's accuracy and real-time performance on several videos collected from a digital camera fixed on vehicles in indoor and outdoor environments.
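The final accept/reject step can be sketched as follows: candidate matches retrieved from the database are accepted as a loop closure only if their similarity to the current image exceeds a threshold. The similarity values and threshold here are invented; in the paper they come from attributed-graph matching on candidates retrieved via the RSOM tree.

```python
def detect_loop(candidates, threshold=0.8):
    """candidates: list of (frame_id, similarity) pairs retrieved from the
    database. Returns the frame id of an accepted loop closure, else None."""
    best = max(candidates, key=lambda c: c[1], default=None)
    if best is not None and best[1] >= threshold:
        return best[0]          # loop closure accepted: revisited place
    return None                 # rejected: treat as a new place, learn it

loop = detect_loop([(12, 0.65), (47, 0.91), (3, 0.40)])
```

Returning `None` corresponds to the incremental branch of the method: the unmatched image is inserted into the RSOM tree as new environmental knowledge.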

  6. MO-FG-BRD-02: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MV Tracking

    Energy Technology Data Exchange (ETDEWEB)

    Berbeco, R. [Brigham and Women’s Hospital and Dana-Farber Cancer Institute (United States)

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: (1) understand the fundamentals of real-time imaging and tracking techniques; (2) learn about emerging techniques in the field of real-time tracking; (3) distinguish between the advantages and disadvantages of different tracking modalities; (4) understand the role of real-time tracking techniques within the clinical delivery work-flow.

  7. MO-FG-BRD-04: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MR Tracking

    Energy Technology Data Exchange (ETDEWEB)

    Low, D. [University of California Los Angeles (United States)]

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: (1) understand the fundamentals of real-time imaging and tracking techniques; (2) learn about emerging techniques in the field of real-time tracking; (3) distinguish between the advantages and disadvantages of different tracking modalities; (4) understand the role of real-time tracking techniques within the clinical delivery work-flow.

  8. MO-FG-BRD-03: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: EM Tracking

    Energy Technology Data Exchange (ETDEWEB)

    Keall, P. [University of Sydney (Australia)

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: (1) understand the fundamentals of real-time imaging and tracking techniques; (2) learn about emerging techniques in the field of real-time tracking; (3) distinguish between the advantages and disadvantages of different tracking modalities; (4) understand the role of real-time tracking techniques within the clinical delivery work-flow.

  9. MO-FG-BRD-04: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MR Tracking

    International Nuclear Information System (INIS)

    Low, D.

    2015-01-01

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: (1) understand the fundamentals of real-time imaging and tracking techniques; (2) learn about emerging techniques in the field of real-time tracking; (3) distinguish between the advantages and disadvantages of different tracking modalities; (4) understand the role of real-time tracking techniques within the clinical delivery work-flow.

  10. MO-FG-BRD-03: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: EM Tracking

    International Nuclear Information System (INIS)

    Keall, P.

    2015-01-01

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: (1) understand the fundamentals of real-time imaging and tracking techniques; (2) learn about emerging techniques in the field of real-time tracking; (3) distinguish between the advantages and disadvantages of different tracking modalities; (4) understand the role of real-time tracking techniques within the clinical delivery work-flow.

  11. MO-FG-BRD-02: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: MV Tracking

    International Nuclear Information System (INIS)

    Berbeco, R.

    2015-01-01

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: (1) understand the fundamentals of real-time imaging and tracking techniques; (2) learn about emerging techniques in the field of real-time tracking; (3) distinguish between the advantages and disadvantages of different tracking modalities; (4) understand the role of real-time tracking techniques within the clinical delivery work-flow.

  12. Design of multifunction anti-terrorism robotic system based on police dog

    Science.gov (United States)

    You, Bo; Liu, Suju; Xu, Jun; Li, Dongjie

    2007-11-01

    To address typical limitations of the police dogs and robots currently used in reconnaissance and counter-terrorism, a multifunction anti-terrorism robotic system based on the police dog has been introduced. The system is made up of two parts: a portable commanding device and the police dog robotic system. The portable commanding device consists of a power supply module, microprocessor module, LCD display module, wireless data receiving and dispatching module, and commanding module; it implements remote control of the police dogs and real-time monitoring of video and images. The police dog robotic system consists of a microprocessor module, micro video module, wireless data transmission module, power supply module, and offensive weapon module; it collects and transmits video and image data from counter-terrorism sites in real time and launches attacks on command. The system combines the police dog's biological intelligence with a micro robot. Not only does it avoid the complexity of a typical anti-terrorism robot's mechanical structure and control algorithms, but it also widens the working scope of the police dog, meeting the requirements of anti-terrorism in the new era.

  13. On-orbit real-time robust cooperative target identification in complex background

    Directory of Open Access Journals (Sweden)

    Wen Zhuoman

    2015-10-01

    Cooperative target identification is the prerequisite for the relative position and orientation measurement between a space robot arm and the object to be arrested. We propose an on-orbit, real-time, robust algorithm for cooperative target identification in complex backgrounds using the features of a circle and lines. It first extracts only the edges of interest in the target image using an adaptive threshold and refines them to about single-pixel width with improved non-maximum suppression. Using a novel tracking approach, edge segments that change smoothly in their tangential directions are obtained. With a small amount of calculation, large numbers of invalid edges are removed. From the few remaining edges, valid circular arcs are extracted and reassembled into circles according to a reliable criterion. Finally, the target is identified if there are a certain number of straight lines whose relative positions with respect to the circle match the known target pattern. Experiments demonstrate that the proposed algorithm accurately identifies the cooperative target within the range of 0.3–1.5 m against complex backgrounds at a speed of 8 frames per second, regardless of lighting conditions and target attitude. The proposed algorithm is well suited to real-time visual measurement for a space robot arm because of its robustness and small memory requirement.
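One building block of such a pipeline, fitting a circle to edge points, can be sketched with the algebraic (Kåsa) least-squares fit. This is a generic stand-in, not the paper's arc-reassembly criterion; the synthetic arc below is invented for the demonstration.

```python
import numpy as np

def fit_circle(points):
    """Least-squares circle through an Nx2 array of edge points.
    Uses the linearization x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2).
    Returns (cx, cy, r)."""
    x, y = points[:, 0], points[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    rhs = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + cx**2 + cy**2)
    return cx, cy, r

# Synthetic arc: half of a circle centred at (50, 40) with radius 20,
# standing in for an edge-tracked arc segment
theta = np.linspace(0, np.pi, 50)
pts = np.column_stack([50 + 20 * np.cos(theta), 40 + 20 * np.sin(theta)])
cx, cy, r = fit_circle(pts)
```

Because the fit is linear least squares, it stays cheap even for many candidate arcs, which matters for the low-memory, frame-rate operation the abstract emphasizes.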

  14. Intra-operative fiducial-based CT/fluoroscope image registration framework for image-guided robot-assisted joint fracture surgery.

    Science.gov (United States)

    Dagnino, Giulio; Georgilas, Ioannis; Morad, Samir; Gibbons, Peter; Tarassoli, Payam; Atkins, Roger; Dogramadzi, Sanja

    2017-08-01

    Joint fractures must be accurately reduced while minimising soft tissue damage to avoid negative surgical outcomes. In this regard, we have developed the RAFS surgical system, which allows the percutaneous reduction of intra-articular fractures and provides intra-operative real-time 3D image guidance to the surgeon. Earlier experiments showed the effectiveness of the RAFS system on phantoms, but also revealed key issues that precluded its use in a clinical application. This work proposes a redesign of the RAFS navigation system that overcomes the earlier version's issues, aiming to move the RAFS system into a surgical environment. The navigation system is improved through an image registration framework allowing the intra-operative registration between pre-operative CT images and intra-operative fluoroscopic images of a fractured bone using a custom-made fiducial marker. The objective of the registration is to estimate the relative pose between a bone fragment and an orthopaedic manipulation pin inserted into it intra-operatively. The actual pose of the bone fragment can then be updated in real time using an optical tracker, enabling the image guidance. Experiments on phantoms and cadavers demonstrated the accuracy and reliability of the registration framework, showing a reduction accuracy (sTRE) of about [Formula: see text] (phantom) and [Formula: see text] (cadavers). Four distal femur fractures were successfully reduced in cadaveric specimens using the improved navigation system and the RAFS system following the new clinical workflow (reduction error [Formula: see text], [Formula: see text]). The experiments showed the feasibility of the image registration framework, which was successfully integrated into the navigation system, allowing the use of the RAFS system in a realistic surgical application.
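The geometric core of fiducial-based rigid registration can be sketched with the Kabsch algorithm: given corresponding fiducial positions in two coordinate frames (e.g. CT space and tracker/fluoroscope space), it recovers the rotation and translation between them. This is a generic stand-in for the paper's registration framework; the fiducial coordinates below are synthetic.

```python
import numpy as np

def kabsch(P, Q):
    """Rigid transform (R, t) with q_i ≈ R @ p_i + t, for Nx3 arrays P, Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Synthetic fiducials: transform known points and recover the transform
rng = np.random.default_rng(2)
P = rng.random((4, 3)) * 100                   # fiducials in "CT" coordinates (mm)
ang = np.deg2rad(30)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0],
                   [np.sin(ang),  np.cos(ang), 0],
                   [0, 0, 1]])
t_true = np.array([5.0, -3.0, 12.0])
Q = P @ R_true.T + t_true                      # same fiducials in "fluoro" coordinates
R_est, t_est = kabsch(P, Q)
```

In practice the target registration error (as with the sTRE reported above) is then evaluated at points away from the fiducials, since fiducial-fit residuals alone understate the clinical error.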

  15. Real-time image processing II; Proceedings of the Meeting, Orlando, FL, Apr. 16-18, 1990

    Science.gov (United States)

    Juday, Richard D. (Editor)

    1990-01-01

    The present conference discusses topics in the fields of feature extraction and implementation, filter and correlation algorithms, optical correlators, high-level algorithms, and digital image processing for ranging and remote driving. Attention is given to a nonlinear filter derived from topological image features, IR image segmentation through iterative thresholding, orthogonal subspaces for correlation masking, composite filter trees and image recognition via binary search, and features of matrix-coherent optical image processing. Also discussed are multitarget tracking via hybrid joint transform correlator, binary joint Fourier transform correlator considerations, global image processing operations on parallel architectures, real-time implementation of a differential range finder, and real-time binocular stereo range and motion detection.

  16. Real-time image processing II; Proceedings of the Meeting, Orlando, FL, Apr. 16-18, 1990

    Science.gov (United States)

    Juday, Richard D.

    The present conference discusses topics in the fields of feature extraction and implementation, filter and correlation algorithms, optical correlators, high-level algorithms, and digital image processing for ranging and remote driving. Attention is given to a nonlinear filter derived from topological image features, IR image segmentation through iterative thresholding, orthogonal subspaces for correlation masking, composite filter trees and image recognition via binary search, and features of matrix-coherent optical image processing. Also discussed are multitarget tracking via hybrid joint transform correlator, binary joint Fourier transform correlator considerations, global image processing operations on parallel architectures, real-time implementation of a differential range finder, and real-time binocular stereo range and motion detection.

  17. Academic Training: Real Time Process Control - Lecture series

    CERN Multimedia

    Françoise Benz

    2004-01-01

    ACADEMIC TRAINING LECTURE REGULAR PROGRAMME 7, 8 and 9 June From 11:00 hrs to 12:00 hrs - Main Auditorium bldg. 500 Real Time Process Control T. Riesco / CERN-TS What exactly is meant by real-time? There are several definitions of real-time, most of them contradictory. Unfortunately the topic is controversial, and there does not seem to be 100% agreement over the terminology. Real-time applications are becoming increasingly important in our daily lives and can be found in diverse environments such as the automatic braking system on an automobile, a lottery ticket system, or robotic environmental samplers on a space station. These lectures will introduce basic concepts and theory: timing constraints, task scheduling, periodic server mechanisms, and hard and soft real-time.
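The task-scheduling theory such lectures cover can be made concrete with the classic Liu and Layland test: n periodic tasks under fixed rate-monotonic priorities are guaranteed schedulable if total CPU utilisation does not exceed n·(2^(1/n) − 1). The task parameters below are illustrative, and note the test is a sufficient condition only, not a necessary one.

```python
def rm_schedulable(tasks):
    """Rate-monotonic schedulability test (Liu & Layland, sufficient condition).
    tasks: list of (execution_time, period) pairs, same time unit."""
    n = len(tasks)
    utilisation = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)     # tends to ln(2) ≈ 0.693 as n grows
    return utilisation <= bound, utilisation, bound

# Three periodic tasks: (C_i, T_i) = worst-case execution time, period
ok, u, bound = rm_schedulable([(1, 4), (1, 5), (2, 10)])
```

A task set that fails this test may still be schedulable; an exact answer requires response-time analysis, which is typically the next step in such a course.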

  18. Neutron beam applications - A development of real-time imaging processing for neutron radiography

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Whoi Yul; Lee, Sang Yup; Choi, Min Seok; Hwang, Sun Kyu; Han, Il Ho; Jang, Jae Young [Hanyang University, Seoul (Korea)

    1999-08-01

    This research is sponsored and supported by KAERI as a part of 'Application of Neutron Radiography Beam'. The main theme of the research is to develop a non-destructive inspection system for studying the real-time behaviour of dynamic motion using a neutron beam, with the aid of a special-purpose real-time image processing system that can capture an image of the internal structure of a specimen. Currently, most off-the-shelf image processing programs designed for visible light or X-rays are not adequate for applications that require the neutron beam generated by an experimental nuclear reactor. In addition, the study of the dynamic motion of a specimen is severely constrained by such image processing systems. In this research, a special image processing system suited to such applications was developed, which not only supplements commercial image processing systems but also allows the neutron beam to be used directly for the study. 18 refs., 21 figs., 1 tab. (Author)

  19. Visual detectability of elastic contrast in real-time ultrasound images

    Science.gov (United States)

    Miller, Naomi R.; Bamber, Jeffery C.; Doyley, Marvin M.; Leach, Martin O.

    1997-04-01

    Elasticity imaging (EI) has recently been proposed as a technique for imaging the mechanical properties of soft tissue. However, dynamic features, known as compressibility and mobility, are already employed to distinguish between different tissue types in ultrasound breast examination. This method, which involves the subjective interpretation of tissue motion seen in real-time B-mode images during palpation, is hereafter referred to as differential motion imaging (DMI). The purpose of this study was to develop the methodology required to perform a series of perception experiments to measure elastic lesion detectability by means of DMI and to obtain preliminary results for elastic contrast thresholds for different lesion sizes. Simulated sequences of real-time B-scans of tissue moving in response to an applied force were generated. A two-alternative forced choice (2-AFC) experiment was conducted and the measured contrast thresholds were compared with published results for lesions detected by EI. Although the trained observer was found to be quite skilled at the task of differential motion perception, it would appear that lesion detectability is improved when motion information is detected by computer processing and converted to gray scale before presentation to the observer. In particular, for lesions containing fewer than eight speckle cells, a signal detection rate of 100% could not be achieved even when the elastic contrast was very high.

  20. Cellular Neural Network for Real Time Image Processing

    International Nuclear Information System (INIS)

    Vagliasindi, G.; Arena, P.; Fortuna, L.; Mazzitelli, G.; Murari, A.

    2008-01-01

    Since their introduction in 1988, Cellular Nonlinear Networks (CNNs) have found a key role as image processing instruments. Thanks to their structure, they are capable of processing individual pixels in a parallel way, providing fast image processing capabilities that have been applied to a wide range of fields, among them nuclear fusion. In recent years, indeed, visible and infrared video cameras have become more and more important in tokamak fusion experiments, for the twofold aim of understanding the physics and monitoring the safety of the operation. Examining the output of these cameras in real time can provide significant information for plasma control and the safety of the machines. The potential of CNNs can be exploited to this aim. To demonstrate the feasibility of the approach, CNN image processing has been applied to several tasks both at the Frascati Tokamak Upgrade (FTU) and the Joint European Torus (JET)
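The per-pixel parallelism the abstract appeals to can be illustrated by simulating a small CNN cell grid running the classic black-and-white edge-detection template (Chua-Yang style). Each cell's state evolves from its 3x3 neighbourhood only, which is what makes hardware CNN implementations fast enough for frame-rate camera analysis; the template values below are the standard textbook edge template, and the Euler integration is a simulation convenience.

```python
import numpy as np

def sat(x):
    """Standard CNN output nonlinearity: piecewise-linear saturation."""
    return np.clip(x, -1.0, 1.0)

def cnn_edge(u, steps=60, dt=0.1):
    """Run the binary edge-detection template on input image u
    (+1 = black pixel, -1 = white). Returns the settled output image."""
    B = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], float)  # control template
    z = -1.0                                                        # bias
    x = np.zeros_like(u, dtype=float)                               # cell states
    up = np.pad(u, 1, constant_values=-1.0)                         # white border
    # The feedforward term B*u + z is constant in time: precompute per pixel
    feed = sum(B[i, j] * up[i:i + u.shape[0], j:j + u.shape[1]]
               for i in range(3) for j in range(3)) + z
    for _ in range(steps):
        # Cell dynamics x' = -x + A*sat(x) + B*u + z, with A having only a centre 1,
        # so each cell evolves independently given its (fixed) neighbourhood input.
        x += dt * (-x + sat(x) + feed)
    return sat(x)

# 5x5 all-black square on a white background: only its boundary should stay black
img = np.ones((5, 5))
out = cnn_edge(img)
```

Every cell performs the same cheap local update, so an analog or FPGA CNN evaluates the whole frame in essentially constant time regardless of image size, which is the property exploited for real-time plasma camera monitoring.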

  1. A discrete-time adaptive control scheme for robot manipulators

    Science.gov (United States)

    Tarokh, M.

    1990-01-01

    A discrete-time model reference adaptive control scheme is developed for trajectory tracking of robot manipulators. The scheme utilizes feedback, feedforward, and auxiliary signals, obtained from joint angle measurement through simple expressions. Hyperstability theory is utilized to derive the adaptation laws for the controller gain matrices. It is shown that trajectory tracking is achieved despite gross robot parameter variation and uncertainties. The method offers considerable design flexibility and enables the designer to improve the performance of the control system by adjusting free design parameters. The discrete-time adaptation algorithm is extremely simple and is therefore suitable for real-time implementation. Simulations and experimental results are given to demonstrate the performance of the scheme.
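The flavour of such a scheme can be shown on a scalar toy problem (not the paper's manipulator controller, and with the hyperstability-derived laws replaced by a simple normalized-gradient update for brevity): adjustable feedforward and feedback gains are updated from the tracking error so the closed loop comes to match a reference model. All plant and model parameters below are invented.

```python
a, b = 0.9, 0.5          # plant y+ = a*y + b*u (unknown to the controller)
am, bm = 0.5, 0.5        # reference model ym+ = am*ym + bm*r
theta = [0.0, 0.0]       # adaptive gains: u = theta[0]*r + theta[1]*y
gamma = 0.5              # adaptation rate

y = ym = 0.0
errors = []
for k in range(2000):
    r = 1.0 if (k // 50) % 2 == 0 else -1.0   # square-wave reference (excitation)
    phi = (r, y)                               # regressor at this step
    u = theta[0] * phi[0] + theta[1] * phi[1]  # control from current gains
    y = a * y + b * u                          # plant step
    ym = am * ym + bm * r                      # reference model step
    e = y - ym                                 # tracking error this regressor produced
    norm = 1.0 + phi[0] ** 2 + phi[1] ** 2     # normalization bounds the step size
    theta = [th - gamma * e * p / norm for th, p in zip(theta, phi)]
    errors.append(abs(e))
```

For this plant the matching gains are theta = (bm/b, (am - a)/b) = (1.0, -0.8); with a persistently exciting reference the adapted gains drift toward those values and the tracking error shrinks, mirroring the paper's claim that tracking is achieved despite unknown plant parameters.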

  2. Real-time segmentation of multiple implanted cylindrical liver markers in kilovoltage and megavoltage x-ray images

    International Nuclear Information System (INIS)

    Fledelius, W; Worm, E; Høyer, M; Grau, C; Poulsen, P R

    2014-01-01

    Gold markers implanted in or near a tumor can be used as x-ray visible landmarks for image based tumor localization. The aim of this study was to develop and demonstrate fast and reliable real-time segmentation of multiple liver tumor markers in intra-treatment kV and MV images and in cone-beam CT (CBCT) projections, for real-time motion management. Thirteen patients treated with conformal stereotactic body radiation therapy in three fractions had 2–3 cylindrical gold markers implanted in the liver prior to treatment. At each fraction, the projection images of a pre-treatment CBCT scan were used for automatic generation of a 3D marker model that consisted of the size, orientation, and estimated 3D trajectory of each marker during the CBCT scan. The 3D marker model was used for real-time template based segmentation in subsequent x-ray images by projecting each marker's 3D shape and likely 3D motion range onto the imager plane. The segmentation was performed in intra-treatment kV images (526 marker traces, 92 097 marker projections) and MV images (88 marker traces, 22 382 marker projections), and in post-treatment CBCT projections (42 CBCT scans, 71 381 marker projections). 227 kV marker traces with low mean contrast-to-noise ratio were excluded as markers were not visible due to MV scatter. Online segmentation times measured for a limited dataset were used for estimating real-time segmentation times for all images. The percentage of detected markers was 94.8% (kV), 96.1% (MV), and 98.6% (CBCT). For the detected markers, the real-time segmentation was erroneous in 0.2–0.31% of the cases. The mean segmentation time per marker was 5.6 ms [2.1–12 ms] (kV), 5.5 ms [1.6–13 ms] (MV), and 6.5 ms [1.8–15 ms] (CBCT). Fast and reliable real-time segmentation of multiple liver tumor markers in intra-treatment kV and MV images and in CBCT projections was demonstrated for a large dataset. (paper)
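The geometric core of template-based marker segmentation can be sketched as follows: a marker's 3-D position is projected through a pinhole model of the treatment-room imaging geometry onto the imager plane, giving the expected 2-D pixel position around which the template search is confined. The geometry values (source-axis and source-imager distances, pixel size, imager dimensions) are illustrative, not the paper's.

```python
import numpy as np

def project_marker(p, sad=1000.0, sid=1500.0, pixel=0.2, size=(1024, 1024)):
    """Project marker position p = (x, y, z) in mm (isocentre at the origin,
    x-ray source on the +z axis at z = sad, imager plane perpendicular to z
    at source-to-imager distance sid) to pixel coordinates (u, v)."""
    x, y, z = p
    mag = sid / (sad - z)                  # divergent-beam magnification
    u = size[0] / 2 + (x * mag) / pixel    # pixel column
    v = size[1] / 2 + (y * mag) / pixel    # pixel row
    return u, v

# A marker 10 mm lateral and 5 mm inferior of the isocentre, at isocentre depth:
u, v = project_marker((10.0, -5.0, 0.0))
```

Restricting the template match to a window around (u, v), inflated by the marker's likely 3-D motion range as in the paper, is what keeps per-marker segmentation in the few-millisecond range.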

  3. Vitruvian Robot

    DEFF Research Database (Denmark)

    Hasse, Cathrine

    2017-01-01

    future. A real version of Ava would not last long in a human world because she is basically a solipsist, who does not really care about humans. She cannot co-create the line humans walk along. The robots created as ‘perfect women’ (sex robots) today are very far from the ideal image of Ava...

  4. In Vivo Real Time Volumetric Synthetic Aperture Ultrasound Imaging

    DEFF Research Database (Denmark)

    Bouzari, Hamed; Rasmussen, Morten Fischer; Brandt, Andreas Hjelm

    2015-01-01

Synthetic aperture (SA) imaging can be used to achieve real-time volumetric ultrasound imaging using 2-D array transducers. The sensitivity of SA imaging is improved by maximizing the acoustic output, but one must consider the limitations of an ultrasound system, both technical and biological....... This paper investigates the in vivo applicability and sensitivity of volumetric SA imaging. Utilizing the transmit events to generate a set of virtual point sources, a frame rate of 25 Hz for a 90° x 90° field-of-view was achieved. Data were obtained using a 3.5 MHz 32 x 32 elements 2-D phased array...... transducer connected to the experimental scanner (SARUS). Proper scaling is applied to the excitation signal such that intensity levels are in compliance with the U.S. Food and Drug Administration regulations for in vivo ultrasound imaging. The measured Mechanical Index and spatial-peak-temporal...

  5. A flexible software architecture for scalable real-time image and video processing applications

    Science.gov (United States)

    Usamentiaga, Rubén; Molleda, Julio; García, Daniel F.; Bulnes, Francisco G.

    2012-06-01

Real-time image and video processing applications require skilled architects, and recent trends in the hardware platform make the design and implementation of these applications increasingly complex. Many frameworks and libraries have been proposed or commercialized to simplify the design and tuning of real-time image processing applications. However, they tend to lack flexibility because they are normally oriented towards particular types of applications, or they impose specific data processing models such as the pipeline. Other issues include large memory footprints, difficulty of reuse, and inefficient execution on multicore processors. This paper presents a novel software architecture for real-time image and video processing applications which addresses these issues. The architecture is divided into three layers: the platform abstraction layer, the messaging layer, and the application layer. The platform abstraction layer provides a high-level application programming interface for the rest of the architecture. The messaging layer provides a message passing interface based on a dynamic publish/subscribe pattern. Topic-based filtering, in which messages are published to topics, is used to route messages from publishers to the subscribers interested in a particular type of message. The application layer provides a repository of reusable application modules designed for real-time image and video processing applications. These modules, which include acquisition, visualization, communication, user interface and data processing modules, take advantage of the power of other well-known libraries such as OpenCV, Intel IPP, or CUDA. Finally, we present different prototypes and applications to show the possibilities of the proposed architecture.
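The topic-based publish/subscribe routing described above can be sketched in a few lines. The class and topic names below are illustrative, not the paper's actual API; the point is only that publishers and subscribers are decoupled through named topics.

```python
# Minimal sketch of a topic-based publish/subscribe message bus.
from collections import defaultdict

class MessageBus:
    def __init__(self):
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Route the message only to subscribers of this topic.
        for callback in self._subscribers[topic]:
            callback(message)

# Usage: an acquisition module publishes frames; a consumer module
# subscribes independently to the "frames" topic.
bus = MessageBus()
received = []
bus.subscribe("frames", received.append)
bus.publish("frames", {"id": 1, "pixels": [0, 1, 2]})
bus.publish("stats", {"fps": 25})   # no subscriber: silently dropped
```

A processing or visualization module can subscribe to the same topic without the publisher knowing, which is what makes the module repository reusable across applications.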

  6. SU-F-J-54: Towards Real-Time Volumetric Imaging Using the Treatment Beam and KV Beam

    Energy Technology Data Exchange (ETDEWEB)

    Chen, M; Rozario, T; Liu, A; Jiang, S; Lu, W [UT Southwestern Medical Center, Dallas, TX (United States)

    2016-06-15

Purpose: Existing real-time imaging uses dual (orthogonal) kV beam fluoroscopies and may result in a significant amount of extra radiation to patients, especially for prolonged treatment cases. In addition, kV projections only provide 2D information, which is insufficient for in vivo dose reconstruction. We propose real-time volumetric imaging using prior knowledge of pre-treatment 4D images and real-time 2D transit data of the treatment beam and kV beam. Methods: The pre-treatment multi-snapshot volumetric images are used to simulate 2D projections of both the treatment beam and kV beam, respectively, for each treatment field defined by the control point. During radiation delivery, the transit signals acquired by the electronic portal imaging device (EPID) are processed for every projection and compared with the pre-calculation by cross-correlation for phase matching and thus 3D snapshot identification, or real-time volumetric imaging. The data processing involves taking logarithmic ratios of EPID signals with respect to the air scan to reduce modeling uncertainties in head scatter fluence and EPID response. Simulated 2D projections are also used to pre-calculate confidence levels in phase matching. Treatment beam projections that have a low confidence level either in pre-calculation or real-time acquisition will trigger kV beams so that complementary information can be exploited. In case both the treatment beam and kV beam return low confidence in phase matching, a predicted phase based on linear regression will be generated. Results: Simulation studies indicated treatment beams provide sufficient confidence in phase matching for most cases. At times of low confidence from treatment beams, kV imaging provides sufficient confidence in phase matching due to its complementary configuration.
Conclusion: The proposed real-time volumetric imaging utilizes the treatment beam and triggers kV beams for complementary information when the treatment beam alone does not provide sufficient confidence in phase matching.
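The phase-matching step above, comparing a measured projection against pre-computed projections (one per phase) by cross-correlation and applying a confidence threshold, can be sketched as follows. The toy 1-D signals and the 0.8 threshold are assumptions for illustration, not EPID data or the authors' calibrated confidence levels.

```python
# Hedged sketch: cross-correlation phase matching with a confidence gate.

def pearson(a, b):
    """Normalized (Pearson) correlation between two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a)
           * sum((y - mb) ** 2 for y in b)) ** 0.5
    return num / den if den else 0.0

def match_phase(measured, precomputed, confidence_threshold=0.8):
    """Return (best phase index, its score, whether the match is trusted)."""
    scores = [pearson(measured, p) for p in precomputed]
    best = max(range(len(scores)), key=scores.__getitem__)
    return best, scores[best], scores[best] >= confidence_threshold
```

In the scheme above, a low-confidence result from the treatment beam would trigger a kV image, and a low-confidence result from both would fall back to a regression-based phase prediction.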

  7. Real-time near IR (1310 nm) imaging of CO2 laser ablation of enamel.

    Science.gov (United States)

    Darling, Cynthia L; Fried, Daniel

    2008-02-18

The high transparency of dental enamel in the near-IR (NIR) can be exploited for real-time imaging of ablation crater formation during drilling with lasers. NIR images were acquired with an InGaAs focal plane array and a NIR zoom microscope during drilling incisions in human enamel samples with a λ = 9.3 µm CO2 laser operating at repetition rates of 50-300 Hz, with and without a water spray. Crack formation, dehydration and thermal changes were observed during ablation. These initial images demonstrate the potential of NIR imaging to monitor laser-ablation events in real time to provide information about the mechanism of ablation and to evaluate the potential for peripheral thermal and mechanical damage.

  8. Real Time Robot Soccer Game Event Detection Using Finite State Machines with Multiple Fuzzy Logic Probability Evaluators

    Directory of Open Access Journals (Sweden)

    Elmer P. Dadios

    2009-01-01

This paper presents a new algorithm for real-time event detection using Finite State Machines with multiple Fuzzy Logic Probability Evaluators (FLPEs). A machine referee for a robot soccer game is developed and used as the platform to test the proposed algorithm. A novel technique to detect collisions and other events in a microrobot soccer game under inaccurate and insufficient information is presented. The robots' collisions are used to determine goalkeeper charging and goal score events, which are crucial for the machine referee's decisions. The Main State Machine (MSM) handles the schedule of event activation. The FLPE calculates the probabilities of the true occurrence of the events. Final decisions about the occurrences of events are evaluated and compared through threshold crisp probability values. The outputs of FLPEs can be combined to calculate the probability of an event composed of subevents. Using multiple fuzzy logic systems, the FLPE utilizes a minimal number of rules and can be tuned individually. Experimental results show the accuracy and robustness of the proposed algorithm.
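A fuzzy probability evaluator of the kind described above can be sketched generically: fuzzy memberships for each input condition are combined and the result is compared against a crisp threshold. The membership shapes, the inputs (distance and closing speed), and the 0.5 threshold below are illustrative choices, not the paper's tuned rule base.

```python
# Hedged sketch: a fuzzy-logic probability evaluator for a collision event.

def falling(x, lo, hi):
    """Membership that is 1 below lo, 0 above hi, linear in between."""
    if x <= lo:
        return 1.0
    if x >= hi:
        return 0.0
    return (hi - x) / (hi - lo)

def collision_probability(distance_cm, closing_speed_cms):
    near = falling(distance_cm, 2.0, 10.0)               # "robots are near"
    fast = 1.0 - falling(closing_speed_cms, 5.0, 20.0)   # "closing fast"
    return min(near, fast)   # fuzzy AND of the two conditions

def collision_event(distance_cm, closing_speed_cms, threshold=0.5):
    """Crisp decision: does the fuzzy probability exceed the threshold?"""
    return collision_probability(distance_cm, closing_speed_cms) >= threshold
```

Outputs of several such evaluators can be combined (e.g. by min or product) to score a composite event such as goalkeeper charging, mirroring how the paper builds events from subevents.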

  9. Potential Applications of Light Robotics in Nanomedicine

    DEFF Research Database (Denmark)

    Glückstad, Jesper

    We have recently pioneered a new generation of 3D micro-printed light robotic structures with multi-functional biophotonics capabilities. The uniqueness of this light robotic approach is that even if a micro-biologist aims at exploring e.g. cell biology at nanoscopic scales, the main support...... of each micro-robotic structure can be 3D printed to have a size and shape that allows convenient laser manipulation in full 3D – even using relatively modest numerical aperture optics. An optical robot is typically equipped with a number of 3D printed "track-balls" that allow for real-time 3D light...... manipulation with six-degrees-of-freedom. This creates a drone-like functionality where each light-driven robot can be e.g. joystick-controlled and provide the user a feeling of stretching his/her hands directly into and interacting with the biologic micro-environment. The light-guided robots can thus act...

  10. SRAO: the first southern robotic AO system

    Science.gov (United States)

    Law, Nicholas M.; Ziegler, Carl; Tokovinin, Andrei

    2016-08-01

We present plans for SRAO, the first Southern Robotic AO system. SRAO will use AO-assisted speckle imaging and Robo-AO-heritage high-efficiency observing to confirm and characterize thousands of planet candidates produced by major new transit surveys like TESS, and is the first AO system capable of building a comprehensive several-thousand-target multiplicity survey at sub-AU scales across the main sequence. We will also describe results from Robo-AO, the first robotic LGS-AO system. Robo-AO has observed tens of thousands of Northern targets, often using a similar speckle or Lucky-Imaging-assisted mode. SRAO will be a moderate-order natural-guide-star adaptive optics system which uses an innovative photon-counting wavefront sensor and EMCCD speckle-imaging camera to guide on faint stars with the 4.1m SOAR telescope. The system will produce diffraction-limited imaging in the NIR on targets as faint as m_v = 16. In AO-assisted speckle imaging mode the system will attain the 30-mas visible diffraction limit on targets at least as faint as m_v = 17. The system will be the first Southern-hemisphere robotic adaptive optics system, with overheads an order of magnitude smaller than comparable systems. Using Robo-AO's proven robotic AO software, SRAO will be capable of sub-minute observing overheads, allowing the observation of at least 200 targets per night. SRAO will attain three times the angular resolution of the Palomar Robo-AO system in the visible.

  11. Exploiting Microwave Imaging Methods for Real-Time Monitoring of Thermal Ablation

    Directory of Open Access Journals (Sweden)

    Rosa Scapaticci

    2017-01-01

Microwave thermal ablation is a cancer treatment that exploits local heating caused by a microwave electromagnetic field to induce coagulative necrosis of tumor cells. Recently, such a technique has significantly progressed in clinical practice. However, its effectiveness would dramatically improve if paired with a noninvasive system for real-time monitoring of the evolving dimension and shape of the thermally ablated area. In this respect, microwave imaging is a potential candidate to monitor the overall treatment evolution in a noninvasive way, as it takes direct advantage of the dependence of the electromagnetic properties of biological tissues on temperature. This paper explores such a possibility by presenting a proof-of-concept validation based on accurate simulated imaging experiments, run with respect to a scenario that mimics an ex vivo experimental setup. In particular, two model-based inversion algorithms are exploited to tackle the imaging task. These methods provide independent results in real time, and their integration improves the quality of the overall tracking of the variations occurring in the target and surrounding regions.

  12. Ultrasound/Magnetic Resonance Image Fusion Guided Lumbosacral Plexus Block – A Clinical Study

    DEFF Research Database (Denmark)

    Strid, JM; Pedersen, Erik Morre; Søballe, Kjeld

    2014-01-01

Background and aims Ultrasound (US) guided lumbosacral plexus block (Supra Sacral Parallel Shift [SSPS]) offers an alternative to general anaesthesia and perioperative analgesia for hip surgery.1 The complex anatomy of the lumbosacral region hampers the accuracy of the block, but it may be improved by guidance of US and magnetic resonance (MR) image fusion and real-time 3D electronic needle tip tracking.2 We aim to estimate the effect and the distribution of lidocaine after SSPS guided by US/MR image fusion compared to SSPS guided by ultrasound. Methods Twenty-four healthy volunteers will be included in a double-blinded randomized controlled trial with crossover design. MR datasets will be acquired and uploaded in an advanced US system (Epiq7, Phillips, Amsterdam, Netherlands). All volunteers will receive SSPS blocks with lidocaine added gadolinium contrast guided by US/MR image fusion and by US one week...

  13. PixonVision real-time video processor

    Science.gov (United States)

    Puetter, R. C.; Hier, R. G.

    2007-09-01

PixonImaging LLC and DigiVision, Inc. have developed a real-time video processor, the PixonVision PV-200, based on the patented Pixon method for image deblurring and denoising, and DigiVision's spatially adaptive contrast enhancement processor, the DV1000. The PV-200 can process NTSC and PAL video in real time with a latency of 1 field (1/60th of a second), remove the effects of aerosol scattering from haze, mist, smoke, and dust, improve spatial resolution by up to 2x, decrease noise by up to 6x, and increase local contrast by up to 8x. A newer version of the processor, the PV-300, is now in prototype form and can handle high-definition video. Both the PV-200 and PV-300 are FPGA-based processors, which could be spun into ASICs if desired. Obvious applications of these processors include applications in the DOD (tanks, aircraft, and ships), homeland security, intelligence, surveillance, and law enforcement. If developed into an ASIC, these processors will be suitable for a variety of portable applications, including gun sights, night vision goggles, binoculars, and guided munitions. This paper presents a variety of examples of PV-200 processing, including examples appropriate to border security, battlefield applications, port security, and surveillance from unmanned aerial vehicles.

  14. Robot-assisted intracerebral hemorrhage evacuation: an experimental evaluation

    Science.gov (United States)

    Burgner, Jessica; Swaney, Philip J.; Lathrop, Ray A.; Weaver, Kyle D.; Webster, Robert J.

    2013-03-01

    We present a novel robotic approach for the rapid, minimally invasive treatment of Intracerebral Hemorrhage (ICH), in which a hematoma or blood clot arises in the brain parenchyma. We present a custom image-guided robot system that delivers a steerable cannula into the lesion and aspirates it from the inside. The steerable cannula consists of an initial straight tube delivered in a manner similar to image-guided biopsy (and which uses a commercial image guidance system), followed by the sequential deployment of multiple individual precurved elastic tubes. Rather than deploying the tubes simultaneously, as has been done in nearly all prior studies, we deploy the tubes one at a time, using a compilation of their individual workspaces to reach desired points inside the lesion. This represents a new paradigm in active cannula research, defining a novel procedure-planning problem. A design that solves this problem can potentially save many lives by enabling brain decompression both more rapidly and less invasively than is possible through the traditional open surgery approach. Experimental results include a comparison of the simulated and actual workspaces of the prototype robot, and an accuracy evaluation of the system.

  15. Time response for sensor sensed to actuator response for mobile robotic system

    Science.gov (United States)

    Amir, N. S.; Shafie, A. A.

    2017-11-01

Time and performance of a mobile robot are very important in completing its assigned tasks. Tasks may need to be done within a time constraint to ensure smooth operation of a mobile robot and can result in better performance. The main purpose of this research was to improve the performance of a mobile robot so that it can complete the given tasks within the time constraint. The problem to be solved is to minimize the time interval between sensor detection and actuator response. The research objective is to analyse the real-time operating system performance of sensors and actuators on one-microcontroller and two-microcontroller configurations for a mobile robot. The task for the mobile robot in this research is line following with obstacle avoidance. Three runs were carried out for the task, and the time from sensor detection to actuator response was recorded. Overall, the results show that the two-microcontroller system has a better response time than the one-microcontroller system. For this research, the average difference in response time is very important for improving the internal performance between the occurrence of a task, sensor detection, decision making, and actuator response of a mobile robot. This research helped to develop a mobile robot with better performance that can complete tasks within the time constraint.
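Measuring the sensor-to-actuator interval described above amounts to timestamping around the sense-decide-act loop with a monotonic clock. The sensor, decision rule, and actuator below are stand-in stubs, not the robot's real drivers; averaging over three runs mirrors the experiment.

```python
# Hedged sketch: timing one sense-decide-act cycle with a monotonic clock.
import time

def timed_response(read_sensor, decide, drive_actuator):
    """Return the elapsed time from sensor read to actuator command."""
    t0 = time.monotonic()
    reading = read_sensor()
    command = decide(reading)
    drive_actuator(command)
    return time.monotonic() - t0

# Usage with stub hardware: average the latency over three runs.
runs = [timed_response(lambda: 1,
                       lambda r: "left" if r else "right",
                       lambda cmd: None)
        for _ in range(3)]
average_latency = sum(runs) / len(runs)
```

On a two-microcontroller setup, the same measurement would bracket the inter-controller message as well, making the comparison between configurations direct.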

  16. Method and apparatus for real time imaging and monitoring of radiotherapy beams

    Science.gov (United States)

    Majewski, Stanislaw [Yorktown, VA; Proffitt, James [Newport News, VA; Macey, Daniel J [Birmingham, AL; Weisenberger, Andrew G [Yorktown, VA

    2011-11-01

    A method and apparatus for real time imaging and monitoring of radiation therapy beams is designed to preferentially distinguish and image low energy radiation from high energy secondary radiation emitted from a target as the result of therapeutic beam deposition. A detector having low sensitivity to high energy photons combined with a collimator designed to dynamically image in the region of the therapeutic beam target is used.

  17. Kinematic analysis and simulation of a substation inspection robot guided by magnetic sensor

    Science.gov (United States)

    Xiao, Peng; Luan, Yiqing; Wang, Haipeng; Li, Li; Li, Jianxiang

    2017-01-01

In order to improve the performance of the magnetic navigation system used by the substation inspection robot, the kinematic characteristics are analyzed based on a simplified magnetic guiding system model, and a simulation is executed to verify the reasonability of the whole analysis procedure. Finally, some suggestions are extracted which will help guide the design of the inspection robot system in the future.

  18. Navigation and Robotics in Spinal Surgery: Where Are We Now?

    Science.gov (United States)

    Overley, Samuel C; Cho, Samuel K; Mehta, Ankit I; Arnold, Paul M

    2017-03-01

Spine surgery has experienced much technological innovation over the past several decades. The field has seen advancements in operative techniques, implants and biologics, and equipment such as computer-assisted navigation and surgical robotics. With the arrival of real-time image guidance and navigation capabilities, along with the computing ability to process and reconstruct these data into an interactive three-dimensional spinal "map", so too have the applications of surgical robotic technology expanded. While spinal robotics and navigation represent promising potential for improving modern spinal surgery, it remains paramount to demonstrate their superiority compared to traditional techniques prior to assimilation of their use amongst surgeons. The applications for intraoperative navigation and image-guided robotics have expanded to surgical resection of spinal column and intradural tumors, revision procedures on arthrodesed spines, and deformity cases with distorted anatomy. Additionally, these platforms may mitigate much of the harmful radiation exposure in minimally invasive surgery to which the patient, surgeon, and ancillary operating room staff are subjected. Spine surgery relies upon meticulous fine motor skills to manipulate neural elements and a steady hand while doing so, often exploiting small working corridors utilizing exposures that minimize collateral damage. Additionally, the procedures may be long and arduous, predisposing the surgeon to both mental and physical fatigue. In light of these characteristics, spine surgery may actually be an ideal candidate for the integration of navigation and robotic-assisted procedures. With this paper, we aim to critically evaluate the current literature and explore the options available for intraoperative navigation and robotic-assisted spine surgery. Copyright © 2016 by the Congress of Neurological Surgeons.

  19. Functional real-time optoacoustic imaging of middle cerebral artery occlusion in mice.

    Directory of Open Access Journals (Sweden)

    Moritz Kneipp

BACKGROUND AND PURPOSE: Longitudinal functional imaging studies of stroke are key in identifying the disease progression and possible therapeutic interventions. Here we investigate the applicability of real-time functional optoacoustic imaging for monitoring of stroke progression in the whole brain of living animals. MATERIALS AND METHODS: The middle cerebral artery occlusion (MCAO) was used to model stroke in mice, which were imaged preoperatively and the occlusion was kept in place for 60 minutes, after which optoacoustic scans were taken at several time points. RESULTS: Post ischemia an asymmetry of deoxygenated hemoglobin in the brain was observed as a region of hypoxia in the hemisphere affected by the ischemic event. Furthermore, we were able to visualize the penumbra in-vivo as a localized hemodynamically-compromised area adjacent to the region of stroke-induced perfusion deficit. CONCLUSION: The intrinsic sensitivity of the new imaging approach to functional blood parameters, in combination with real time operation and high spatial resolution in deep living tissues, may see it become a valuable and unique tool in the development and monitoring of treatments aimed at suspending the spread of an infarct area.

  20. Rapidly-steered single-element ultrasound for real-time volumetric imaging and guidance

    Science.gov (United States)

    Stauber, Mark; Western, Craig; Solek, Roman; Salisbury, Kenneth; Hristov, Dmitre; Schlosser, Jeffrey

    2016-03-01

Volumetric ultrasound (US) imaging has the potential to provide real-time anatomical imaging with high soft-tissue contrast in a variety of diagnostic and therapeutic guidance applications. However, existing volumetric US machines utilize "wobbling" linear phased array or matrix phased array transducers which are costly to manufacture and necessitate bulky external processing units. To drastically reduce cost, improve portability, and reduce footprint, we propose a rapidly-steered single-element volumetric US imaging system. In this paper we explore the feasibility of this system with a proof-of-concept single-element volumetric US imaging device. The device uses a multi-directional raster-scan technique to generate a series of two-dimensional (2D) slices that were reconstructed into three-dimensional (3D) volumes. At 15 cm depth, 90° lateral field of view (FOV), and 20° elevation FOV, the device produced 20-slice volumes at a rate of 0.8 Hz. Imaging performance was evaluated using an US phantom. Spatial resolution was 2.0 mm, 4.7 mm, and 5.0 mm in the axial, lateral, and elevational directions at 7.5 cm. Relative motion of phantom targets was automatically tracked within US volumes with a mean error of -0.3+/-0.3 mm, -0.3+/-0.3 mm, and -0.1+/-0.5 mm in the axial, lateral, and elevational directions, respectively. The device exhibited a mean spatial distortion error of 0.3+/-0.9 mm, 0.4+/-0.7 mm, and -0.3+/-1.9 mm in the axial, lateral, and elevational directions. With a production cost near $1000, the performance characteristics of the proposed system make it an ideal candidate for diagnostic and image-guided therapy applications where form factor and low cost are paramount.

  1. High Resolution Near Real Time Image Processing and Support for MSSS Modernization

    Science.gov (United States)

    Duncan, R. B.; Sabol, C.; Borelli, K.; Spetka, S.; Addison, J.; Mallo, A.; Farnsworth, B.; Viloria, R.

    2012-09-01

This paper describes image enhancement software applications engineering development work that has been performed in support of Maui Space Surveillance System (MSSS) Modernization. It also includes R&D and transition activity that has been performed over the past few years with the objective of providing increased space situational awareness (SSA) capabilities. This includes Air Force Research Laboratory (AFRL) use of an FY10 Dedicated High Performance Investment (DHPI) cluster award -- and our selection and planned use for an FY12 DHPI award. We provide an introduction to image processing of electro-optical (EO) telescope sensor data, and a high-resolution image enhancement and near-real-time processing summary and status overview. We then describe recent image enhancement applications development and support for MSSS Modernization, results to date, and end with a discussion of desired future development work and conclusions. Significant improvements to image enhancement processing have been realized over the past several years, including a key application that has realized more than a 10,000-times speedup compared to the original R&D code -- and a greater than 72-times speedup over the past few years. The latest version of this code maintains software efficiency for post-mission processing while providing optimization for image processing of data from a new EO sensor at MSSS. Additional work has also been performed to develop low-latency, near-real-time processing of data collected by the ground-based sensor during overhead passes of space objects.

  2. An image scanner for real time analysis of spark chamber images

    International Nuclear Information System (INIS)

    Cesaroni, F.; Penso, G.; Locci, A.M.; Spano, M.A.

    1975-01-01

This note describes the semiautomatic scanning system at LNF for the analysis of spark chamber images. From the projection of the images on the scanner table, the trajectory in real space is reconstructed.

  3. Simulation Study of Real Time 3-D Synthetic Aperture Sequential Beamforming for Ultrasound Imaging

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian; Rasmussen, Morten Fischer; Stuart, Matthias Bo

    2014-01-01

This paper presents a new beamforming method for real-time three-dimensional (3-D) ultrasound imaging using a 2-D matrix transducer. To obtain images with sufficient resolution and contrast, several thousand elements are needed. The proposed method reduces the required channel count from ... in the main system. The real-time imaging capability is achieved using a synthetic aperture beamforming technique, utilizing the transmit events to generate a set of virtual elements that in combination can generate an image. The two core capabilities in combination are named Synthetic Aperture Sequential Beamforming (SASB). Simulations are performed to evaluate the image quality of the presented method in comparison to parallel beamforming utilizing 16 receive beamformers. As indicators for image quality, the detail resolution and cystic resolution are determined for a set of scatterers at a depth of 90 mm...
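The core operation behind synthetic aperture beamforming, summing channel data delayed according to the round-trip distance from each element to a focal point, can be sketched generically. The geometry, sampling rate, and sound speed below are toy assumptions for illustration, not the SASB parameters.

```python
# Hedged sketch: single-point delay-and-sum beamforming over channel data.
import math

def delay_and_sum(channel_data, element_x, focus_x, focus_z,
                  c=1540.0, fs=1e6):
    """Sum channel samples delayed by the round-trip distance (element at
    depth 0) to a focal point at (focus_x, focus_z), both in meters."""
    out = 0.0
    for samples, x in zip(channel_data, element_x):
        dist = 2.0 * math.hypot(focus_x - x, focus_z)  # round trip, meters
        idx = int(round(dist / c * fs))                # sample index
        if 0 <= idx < len(samples):
            out += samples[idx]
    return out
```

Echoes from a true scatterer position add coherently (large output), while a mis-focused point sums incoherent samples, which is what builds contrast when the operation is repeated over a grid of focal points.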

  4. Mid-level image representations for real-time heart view plane classification of echocardiograms.

    Science.gov (United States)

    Penatti, Otávio A B; Werneck, Rafael de O; de Almeida, Waldir R; Stein, Bernardo V; Pazinato, Daniel V; Mendes Júnior, Pedro R; Torres, Ricardo da S; Rocha, Anderson

    2015-11-01

In this paper, we explore mid-level image representations for real-time heart view plane classification of 2D echocardiogram ultrasound images. The proposed representations rely on bags of visual words, successfully used by the computer vision community in visual recognition problems. An important element of the proposed representations is the image sampling with large regions, drastically reducing the execution time of the image characterization procedure. Throughout an extensive set of experiments, we evaluate the proposed approach against different image descriptors for classifying four heart view planes. The results show that our approach is effective and efficient for the target problem, making it suitable for use in real-time setups. The proposed representations are also robust to different image transformations, e.g., downsampling, noise filtering, and different machine learning classifiers, keeping classification accuracy above 90%. Feature extraction can be performed at 30 fps, or 60 fps in some cases. This paper also includes an in-depth review of the literature in the area of automatic echocardiogram view classification, giving the reader a thorough comprehension of this field of study. Copyright © 2015 Elsevier Ltd. All rights reserved.
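The bag-of-visual-words representation mentioned above works by assigning each local descriptor to its nearest codebook entry ("visual word") and histogramming the assignments. The toy 2-D descriptors and two-word codebook below are illustrative, not real echocardiogram features or the paper's codebook.

```python
# Hedged sketch: building a normalized bag-of-visual-words histogram.

def nearest_word(descriptor, codebook):
    """Index of the codebook entry closest to the descriptor (L2)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: dist2(descriptor, codebook[i]))

def bovw_histogram(descriptors, codebook):
    """Histogram of word assignments, normalized to sum to 1."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[nearest_word(d, codebook)] += 1
    total = sum(hist)
    return [h / total for h in hist]

codebook = [(0.0, 0.0), (1.0, 1.0)]            # two visual words
descriptors = [(0.1, 0.0), (0.9, 1.1), (1.0, 0.9), (0.0, 0.2)]
hist = bovw_histogram(descriptors, codebook)
```

The resulting fixed-length histogram is what gets fed to an ordinary classifier; sampling descriptors over large regions, as in the paper, simply means fewer descriptors per image and thus faster histogram construction.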

  5. Towards cybernetic surgery: robotic and augmented reality-assisted liver segmentectomy.

    Science.gov (United States)

    Pessaux, Patrick; Diana, Michele; Soler, Luc; Piardi, Tullio; Mutter, Didier; Marescaux, Jacques

    2015-04-01

Augmented reality (AR) in surgery consists of the fusion of synthetic computer-generated images (3D virtual model), obtained from the preoperative medical imaging workup, with real-time patient images in order to visualize unapparent anatomical details. The 3D model can be used for preoperative planning of the procedure. The potential of AR navigation as a tool to improve the safety of surgical dissection is outlined for robotic hepatectomy. Three patients underwent a fully robotic and AR-assisted hepatic segmentectomy. The 3D virtual anatomical model was obtained using a thoracoabdominal CT scan with custom software (VR-RENDER®, IRCAD). The model was then processed using a VR-RENDER® plug-in application, the Virtual Surgical Planning (VSP®, IRCAD), to delineate surgical resection planes including the elective ligature of vascular structures. Deformations associated with pneumoperitoneum were also simulated. The virtual model was superimposed onto the operative field. A computer scientist manually registered virtual and real images using a video mixer (MX 70; Panasonic, Secaucus, NJ) in real time. Two totally robotic AR segmentectomies V and one segmentectomy VI were performed. AR allowed for the precise and safe recognition of all major vascular structures during the procedure. Total time required to obtain AR was 8 min (range 6-10 min). Each registration (alignment of the vascular anatomy) required a few seconds. Hepatic pedicle clamping was never performed. At the end of the procedure, the remnant liver was correctly vascularized. Resection margins were negative in all cases. The postoperative period was uneventful without perioperative transfusion. AR is a valuable navigation tool which may enhance the ability to achieve safe surgical resection during robotic hepatectomy.

  6. LabVIEW A Developer's Guide to Real World Integration

    CERN Document Server

    Fairweather, Ian

    2011-01-01

LabVIEW™ has become one of the preeminent platforms for the development of data acquisition and data analysis programs. LabVIEW™: A Developer's Guide to Real World Integration explains how to integrate LabVIEW into real-life applications. Written by experienced LabVIEW developers and engineers, the book describes how LabVIEW has been pivotal in solving real-world challenges. Each chapter is self-contained and demonstrates the power and simplicity of LabVIEW in various applications, from image processing to solar tracking systems. Many of the chapters explore how exciting new technologies c...

  7. Real-time emulation of neural images in the outer retinal circuit.

    Science.gov (United States)

    Hasegawa, Jun; Yagi, Tetsuya

    2008-12-01

    We describe a novel real-time system that emulates the architecture and functionality of the vertebrate retina. This system reconstructs the neural images formed by the retinal neurons in real time using a combination of analog and digital subsystems: a neuromorphic silicon retina chip, a field-programmable gate array, and a digital computer. While the silicon retina carries out the spatial filtering of input images instantaneously, using embedded resistive networks that emulate the receptive field structure of the outer retinal neurons, the digital computer carries out the temporal filtering of the spatially filtered images to emulate the dynamical properties of the outer retinal circuits. Emulations of the neural image, comprising 128 x 128 bipolar cells, are carried out at a frame rate of 62.5 Hz. Emulations of the responses to the Hermann grid, a spot of light, and an annulus of light demonstrated that the system responds as expected from previous physiological and psychophysical observations. Furthermore, the emulated dynamics of neural images in response to natural scenes revealed the complex nature of retinal neuron activity. We conclude that the system reflects the spatiotemporal responses of bipolar cells in the vertebrate retina. The proposed emulation system is expected to aid in understanding visual computation in the retina and the brain.
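
    The hybrid pipeline this record describes — instantaneous spatial filtering in the silicon retina followed by temporal filtering in software — can be sketched as a difference-of-Gaussians spatial stage feeding a first-order temporal low-pass. This is a minimal illustration, not the authors' implementation: the filter sigmas and time constant are assumed values, and only the 62.5 Hz frame rate comes from the abstract.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_dog(frame, sigma_center=1.0, sigma_surround=3.0):
    """Center-surround (difference-of-Gaussians) receptive field, standing in
    for the silicon retina's embedded resistive networks. Sigmas are
    illustrative assumptions."""
    return gaussian_filter(frame, sigma_center) - gaussian_filter(frame, sigma_surround)

class TemporalLowPass:
    """First-order IIR low-pass emulating outer-retina temporal dynamics."""
    def __init__(self, tau=0.05, dt=1 / 62.5):   # tau assumed; dt from 62.5 Hz
        self.alpha = dt / (tau + dt)             # discrete smoothing factor
        self.state = None
    def step(self, frame):
        if self.state is None:
            self.state = frame.copy()
        else:
            self.state += self.alpha * (frame - self.state)
        return self.state.copy()

def emulate(frames):
    """Spatial stage per frame, then temporal stage across frames."""
    lp = TemporalLowPass()
    return [lp.step(spatial_dog(f)) for f in frames]
```

    In the real system the spatial stage runs in analog hardware; here both stages run in software purely to make the data flow explicit.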

  8. Diffraction-limited real-time terahertz imaging by optical frequency up-conversion in a DAST crystal.

    Science.gov (United States)

    Fan, Shuzhen; Qi, Feng; Notake, Takashi; Nawata, Kouji; Takida, Yuma; Matsukawa, Takeshi; Minamide, Hiroaki

    2015-03-23

    Real-time terahertz (THz) wave imaging has wide applications in areas such as security, industry, biology, medicine, pharmacy, and the arts. This report describes real-time room-temperature THz imaging by nonlinear optical frequency up-conversion in an organic 4-dimethylamino-N'-methyl-4'-stilbazolium tosylate (DAST) crystal, with resolution reaching the diffraction limit. THz-wave images were converted to the near-infrared region and then captured using an InGaAs camera in a tandem imaging system. The resolution of the imaging system was analyzed, and diffraction and interference of the THz wave were observed in the experiments. Videos are supplied to show how the interference pattern varies as the sample is moved and tilted.

  9. A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer

    Science.gov (United States)

    Luckman, Adrian J.; Allinson, Nigel M.

    1989-03-01

    A low-cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system comprises real-time video digitising hardware, which interfaces directly to the Archimedes memory, and software that provides an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line, with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen, and program control is directed mostly by pop-up menus.

  10. Real-time UV imaging of nicotine release from transdermal patch

    DEFF Research Database (Denmark)

    Østergaard, Jesper; Meng-Lund, Emil; Larsen, Susan Weng

    2010-01-01

    PURPOSE: This study was conducted to characterize UV imaging as a platform for performing in vitro release studies using Nicorette® nicotine patches as a model drug delivery system. METHODS: The rate of nicotine release from 2 mm diameter patch samples (Nicorette®) into 0.067 M phosphate buffer, pH 7.40, was studied by UV imaging (Actipix SDI300 dissolution imaging system) at 254 nm. The release rates were compared to those obtained using the paddle-over-disk method. RESULTS: Calibration curves were successfully established which allowed temporally and spatially resolved quantification of nicotine. Release profiles obtained from UV imaging were in qualitative agreement with results from the paddle-over-disk release method. CONCLUSION: Visualization as well as quantification of nicotine concentration gradients was achieved by UV imaging in real time. UV imaging has the potential to become...

  11. Imaging technique for real-time temperature monitoring during cryotherapy of lesions

    Science.gov (United States)

    Petrova, Elena; Liopo, Anton; Nadvoretskiy, Vyacheslav; Ermilov, Sergey

    2016-11-01

    Noninvasive real-time temperature imaging during thermal therapies is able to significantly improve clinical outcomes. An optoacoustic (OA) temperature monitoring method is proposed for noninvasive real-time thermometry of vascularized tissue during cryotherapy. The universal temperature-dependent optoacoustic response (ThOR) of red blood cells (RBCs) is employed to convert reconstructed OA images to temperature maps. To obtain the temperature calibration curve for intensity-normalized OA images, we measured ThOR of 10 porcine blood samples in the range of temperatures from 40°C to -16°C and analyzed the data for single measurement variations. The nonlinearity (ΔTmax) and the temperature of zero OA response (T0) of the calibration curve were found equal to 11.4±0.1°C and -13.8±0.1°C, respectively. The morphology of RBCs was examined before and after the data collection confirming cellular integrity and intracellular compartmentalization of hemoglobin. For temperatures below 0°C, which are of particular interest for cryotherapy, the accuracy of a single temperature measurement was ±1°C, which is consistent with the clinical requirements. Validation of the proposed OA temperature imaging technique was performed for slow and fast cooling of blood samples embedded in tissue-mimicking phantoms.
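
    The core of the method is a calibration curve mapping intensity-normalized OA response to temperature. The paper's functional form is not reproduced in the abstract, so the sketch below uses monotone piecewise-linear interpolation over hypothetical calibration points; only the zero-response temperature T0 = -13.8°C is taken from the reported results, and all other points are illustrative placeholders.

```python
import numpy as np

# Hypothetical calibration points (normalized OA response -> temperature, degC).
# Only T0 = -13.8 degC (zero OA response) comes from the reported results;
# the remaining pairs are placeholder values for illustration.
cal_response = np.array([0.00, 0.08, 0.20, 0.33, 0.55, 0.80, 1.00])
cal_temps    = np.array([-13.8, -10.0, -5.0, 0.0, 10.0, 25.0, 37.0])

def oa_to_temperature(response):
    """Convert intensity-normalized OA image values to a temperature map by
    monotone piecewise-linear interpolation of the calibration curve.
    Accepts scalars or whole reconstructed OA images (arrays)."""
    return np.interp(response, cal_response, cal_temps)
```

    Because `np.interp` broadcasts over arrays, the same call converts an entire reconstructed OA image into a temperature map in one step.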

  12. Using Opaque Image Blur for Real-Time Depth-of-Field Rendering

    DEFF Research Database (Denmark)

    Kraus, Martin

    2011-01-01

    While depth of field is an important cinematographic means, its use in real-time computer graphics is still limited by the computational costs that are necessary to achieve a sufficient image quality. Specifically, color bleeding artifacts between objects at different depths are most effectively...

  13. Efficient Imaging and Real-Time Display of Scanning Ion Conductance Microscopy Based on Block Compressive Sensing

    Science.gov (United States)

    Li, Gongxin; Li, Peng; Wang, Yuechao; Wang, Wenxue; Xi, Ning; Liu, Lianqing

    2014-07-01

    Scanning Ion Conductance Microscopy (SICM) is one kind of Scanning Probe Microscopy (SPM), widely used for imaging soft samples because of its many distinctive advantages. However, the scanning speed of SICM is much slower than that of other SPMs. Compressive sensing (CS) can improve the scanning speed tremendously by sampling below the Shannon-Nyquist rate, but it still requires too much time for image reconstruction. Block compressive sensing can be applied to SICM imaging to further reduce the reconstruction time of sparse signals, and it has the additional benefit of enabling real-time image display during SICM imaging. In this article, a new method of dividing blocks and a new matrix arithmetic operation were proposed to build the block compressive sensing model, and several experiments were carried out to verify the superiority of block compressive sensing in reducing imaging time and enabling real-time display in SICM imaging.
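
    The abstract does not give the authors' block-partitioning scheme or reconstruction solver, so the following sketch only illustrates the general block-CS idea: each block is measured and reconstructed independently (here with orthogonal matching pursuit and a random Gaussian measurement matrix), which is what lets finished blocks be displayed progressively. All dimensions, the matrix, and the solver choice are assumptions.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # Least-squares fit on the selected support, then update the residual.
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ xs
    x = np.zeros(A.shape[1])
    x[support] = xs
    return x

def reconstruct_blocks(block_measurements, A, k):
    """Solve each block independently; each finished block can be pushed to the
    display immediately, enabling progressive (real-time) display."""
    return [omp(A, y, k) for y in block_measurements]
```

    Per-block solves are also much cheaper than one whole-image solve, which is the source of the reconstruction-time reduction the abstract claims.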

  14. Robots deliver real benefits for PSE&G

    International Nuclear Information System (INIS)

    Anon.

    1990-01-01

    The US utility PSE&G has found that using robots in the nuclear industry can bring real rewards in terms of reduced exposures, improved maintenance, and cost savings. Their experience is described. (author)

  15. Interactive Exploration Robots: Human-Robotic Collaboration and Interactions

    Science.gov (United States)

    Fong, Terry

    2017-01-01

    For decades, NASA has employed different operational approaches for human and robotic missions. Human spaceflight missions to the Moon and in low Earth orbit have relied upon near-continuous communication with minimal time delays. During these missions, astronauts and mission control communicate interactively to perform tasks and resolve problems in real-time. In contrast, deep-space robotic missions are designed for operations in the presence of significant communication delay - from tens of minutes to hours. Consequently, robotic missions typically employ meticulously scripted and validated command sequences that are intermittently uplinked to the robot for independent execution over long periods. Over the next few years, however, we will see increasing use of robots that blend these two operational approaches. These interactive exploration robots will be remotely operated by humans on Earth or from a spacecraft. These robots will be used to support astronauts on the International Space Station (ISS), to conduct new missions to the Moon, and potentially to enable remote exploration of planetary surfaces in real-time. In this talk, I will discuss the technical challenges associated with building and operating robots in this manner, along with lessons learned from research conducted with the ISS and in the field.

  16. Preoperative magnetic resonance and intraoperative ultrasound fusion imaging for real-time neuronavigation in brain tumor surgery.

    Science.gov (United States)

    Prada, F; Del Bene, M; Mattei, L; Lodigiani, L; DeBeni, S; Kolev, V; Vetrano, I; Solbiati, L; Sakas, G; DiMeco, F

    2015-04-01

    Brain shift and tissue deformation during surgery for intracranial lesions are currently the main limitations of neuro-navigation (NN), which relies mainly on preoperative imaging. Ultrasound (US), being a real-time imaging modality, is becoming progressively more widespread during neurosurgical procedures, but most neurosurgeons, trained on axial computed tomography (CT) and magnetic resonance imaging (MRI) slices, lack specific US training and have difficulty recognizing anatomic structures with the same confidence as in preoperative imaging. Therefore real-time intraoperative fusion imaging (FI) between preoperative imaging and intraoperative ultrasound (ioUS) for virtual navigation (VN) is highly desirable. We describe our procedure for real-time navigation during surgery for different cerebral lesions. We performed fusion imaging with virtual navigation for patients undergoing surgery for brain lesion removal, using an ultrasound-based real-time neuro-navigation system that fuses intraoperative cerebral ultrasound with preoperative MRI and simultaneously displays an MRI slice coplanar to an ioUS image. 58 patients underwent surgery at our institution for intracranial lesion removal with image guidance using a US system equipped with fusion imaging for neuro-navigation. In all cases the initial (external) registration error obtained by the corresponding anatomical landmark procedure was below 2 mm and the craniotomy was correctly placed. The transdural window gave satisfactory US image quality, and the lesion was always detectable and measurable on both axes. Brain shift/deformation correction was successfully employed in 42 cases to restore co-registration during surgery. The accuracy of ioUS/MRI fusion/overlapping was confirmed intraoperatively under direct visualization of anatomic landmarks. Fusion imaging is less expensive and less time-consuming than other intraoperative imaging techniques, offering high precision.

  17. Real-time RGB-D image stitching using multiple Kinects for improved field of view

    Directory of Open Access Journals (Sweden)

    Hengyu Li

    2017-03-01

    Full Text Available This article addresses two problems of Kinect-style RGB-D sensors: defective depth maps and a limited field of view. An anisotropic-diffusion-based hole-filling method is proposed to recover invalid depth data in the depth map. The field of view of the Kinect-style RGB-D sensor is extended by stitching depth and color images from several RGB-D sensors. By aligning the depth map with the color image, the registration data calculated by registering the color images can be used to stitch the depth and color images into a depth and color panorama concurrently in real time. Experiments show that the proposed stitching method can generate an RGB-D panorama with no invalid depth data and little distortion in real time, and that it can be extended to incorporate more RGB-D sensors to construct even a 360° panoramic RGB-D image.
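
    The hole-filling step can be illustrated with a simplified, isotropic variant of the article's anisotropic-diffusion method: invalid depth pixels are iteratively replaced by the mean of their valid 4-neighbors until no holes remain. This is a sketch, not the authors' algorithm (which additionally weights diffusion by local image structure), and it ignores the wrap-around at image borders introduced by np.roll.

```python
import numpy as np

def fill_depth_holes(depth, invalid=0.0, iters=50):
    """Iteratively fill invalid depth pixels with the mean of their valid
    4-neighbors -- a simplified, isotropic stand-in for anisotropic-diffusion
    hole filling. Note: np.roll wraps at the borders (acceptable for a sketch)."""
    d = depth.astype(float).copy()
    hole = d == invalid
    for _ in range(iters):
        if not hole.any():
            break
        valid = ~hole
        acc = np.zeros_like(d)   # sum of valid neighbor depths
        cnt = np.zeros_like(d)   # number of valid neighbors
        for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
            v = np.roll(valid, shift, axis)
            acc += np.roll(d, shift, axis) * v
            cnt += v
        fillable = hole & (cnt > 0)
        d[fillable] = acc[fillable] / cnt[fillable]
        hole = hole & ~fillable
    return d
```

    Filling propagates inward one pixel ring per iteration, so `iters` bounds the radius of the largest hole that gets closed.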

  18. A Visual Environment for Real-Time Image Processing in Hardware (VERTIPH

    Directory of Open Access Journals (Sweden)

    Johnston CT

    2006-01-01

    Full Text Available Real-time video processing is an image-processing application that is ideally suited to implementation on FPGAs. We discuss the strengths and weaknesses of a number of existing languages and hardware compilers that have been developed for specifying image processing algorithms on FPGAs. We propose VERTIPH, a new multiple-view visual language that avoids the weaknesses we identify. A VERTIPH design incorporates three different views, each tailored to a different aspect of the image processing system under development: an overall architectural view, a computational view, and a resource and scheduling view.

  19. Autonomous stair-climbing with miniature jumping robots.

    Science.gov (United States)

    Stoeter, Sascha A; Papanikolopoulos, Nikolaos

    2005-04-01

    The problem of vision-guided control of miniature mobile robots is investigated. Untethered mobile robots with small physical dimensions of around 10 cm or less do not permit powerful onboard computers because of size and power constraints. These challenges have, in the past, reduced the functionality of such devices to that of a complex remote-controlled vehicle with fancy sensors. With the help of a computationally more powerful entity, such as a larger companion robot, the control loop can be closed. Using the miniature robot's video transmission, or that of an observer, to localize it in the world, control commands can be computed and relayed to the otherwise resource-limited robot. The result is a system that exhibits autonomous capabilities. The framework presented here solves the problem of climbing stairs with the miniature Scout robot. The robot's unique locomotion mode, the jump, is employed to hop one step at a time. Methods for externally tracking the Scout are developed. A large number of real-world experiments are conducted and the results discussed.

  20. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    Science.gov (United States)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  1. TH-CD-207A-08: Simulated Real-Time Image Guidance for Lung SBRT Patients Using Scatter Imaging

    International Nuclear Information System (INIS)

    Redler, G; Cifter, G; Templeton, A; Lee, C; Bernard, D; Liao, Y; Zhen, H; Turian, J; Chu, J

    2016-01-01

    Purpose: To develop a comprehensive Monte Carlo-based model for the acquisition of scatter images of patient anatomy in real-time, during lung SBRT treatment. Methods: During SBRT treatment, images of patient anatomy can be acquired from scattered radiation. To rigorously examine the utility of scatter images for image guidance, a model is developed using MCNP code to simulate scatter images of phantoms and lung cancer patients. The model is validated by comparing experimental and simulated images of phantoms of different complexity. The differentiation between tissue types is investigated by imaging objects of known compositions (water, lung, and bone equivalent). A lung tumor phantom, simulating materials and geometry encountered during lung SBRT treatments, is used to investigate image noise properties for various quantities of delivered radiation (monitor units(MU)). Patient scatter images are simulated using the validated simulation model. 4DCT patient data is converted to an MCNP input geometry accounting for different tissue composition and densities. Lung tumor phantom images acquired with decreasing imaging time (decreasing MU) are used to model the expected noise amplitude in patient scatter images, producing realistic simulated patient scatter images with varying temporal resolution. Results: Image intensity in simulated and experimental scatter images of tissue equivalent objects (water, lung, bone) match within the uncertainty (∼3%). Lung tumor phantom images agree as well. Specifically, tumor-to-lung contrast matches within the uncertainty. The addition of random noise approximating quantum noise in experimental images to simulated patient images shows that scatter images of lung tumors can provide images in as fast as 0.5 seconds with CNR∼2.7. Conclusions: A scatter imaging simulation model is developed and validated using experimental phantom scatter images. Following validation, lung cancer patient scatter images are simulated. These simulated

  2. TH-CD-207A-08: Simulated Real-Time Image Guidance for Lung SBRT Patients Using Scatter Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Redler, G; Cifter, G; Templeton, A; Lee, C; Bernard, D; Liao, Y; Zhen, H; Turian, J; Chu, J [Rush University Medical Center, Chicago, IL (United States)

    2016-06-15

    Purpose: To develop a comprehensive Monte Carlo-based model for the acquisition of scatter images of patient anatomy in real-time, during lung SBRT treatment. Methods: During SBRT treatment, images of patient anatomy can be acquired from scattered radiation. To rigorously examine the utility of scatter images for image guidance, a model is developed using MCNP code to simulate scatter images of phantoms and lung cancer patients. The model is validated by comparing experimental and simulated images of phantoms of different complexity. The differentiation between tissue types is investigated by imaging objects of known compositions (water, lung, and bone equivalent). A lung tumor phantom, simulating materials and geometry encountered during lung SBRT treatments, is used to investigate image noise properties for various quantities of delivered radiation (monitor units(MU)). Patient scatter images are simulated using the validated simulation model. 4DCT patient data is converted to an MCNP input geometry accounting for different tissue composition and densities. Lung tumor phantom images acquired with decreasing imaging time (decreasing MU) are used to model the expected noise amplitude in patient scatter images, producing realistic simulated patient scatter images with varying temporal resolution. Results: Image intensity in simulated and experimental scatter images of tissue equivalent objects (water, lung, bone) match within the uncertainty (∼3%). Lung tumor phantom images agree as well. Specifically, tumor-to-lung contrast matches within the uncertainty. The addition of random noise approximating quantum noise in experimental images to simulated patient images shows that scatter images of lung tumors can provide images in as fast as 0.5 seconds with CNR∼2.7. Conclusions: A scatter imaging simulation model is developed and validated using experimental phantom scatter images. Following validation, lung cancer patient scatter images are simulated. These simulated

  3. Instruments for radiation measurement in life sciences (5). Development of imaging technology in life science. 4. Real-time bioradiography

    International Nuclear Information System (INIS)

    Sasaki, Toru; Iwamoto, Akinori; Tsuboi, Hisashi; Katoh, Toru; Kudo, Hiroyuki; Kazawa, Erito; Watanabe, Yasuyoshi

    2006-01-01

    Real-time bioradiography, a new bioradiography method, can capture and produce images of cellular metabolism and function in real time. The principles of the instrumentation, the development process, and application examples in neuroscience and biomedical gerontology are presented. The bioradiography method, the gas-tissue live-cell autoradiography method, and the real-time bioradiography method are explained. As application examples, the molecular mechanism of oxidative stress in brain ischemia and the analysis of SOD-gene-knockout animals are reported. Also reported are a comparison between FDG-PET of the epileptic brain and FDG-bioradiography images of live cells of brain tissue, the real-time bioradiography system, improvement of images by surface treatment, the detection limit of β{sup +} rays from {sup 18}F, imaging of living brain-tissue slices by FDG real-time bioradiography and radioluminography, continuous FDG imaging of living slices of rat brain tissue, and analysis of the carbohydrate metabolism of living brain-tissue slices of mice lacking the SOD gene during oxygen deprivation and reoxygenation. (S.Y.)

  4. In vivo real-time multiphoton imaging of T lymphocytes in the mouse brain after experimental stroke

    DEFF Research Database (Denmark)

    Fumagalli, Stefano; Coles, Jonathan A; Ejlerskov, Patrick

    2011-01-01

    To gain a better understanding of T cell behavior after stroke, we have developed real-time in vivo brain imaging of T cells by multiphoton microscopy after middle cerebral artery occlusion.

  5. Development and Performance Evaluation of Image-Based Robotic Waxing System for Detailing Automobiles.

    Science.gov (United States)

    Lin, Chi-Ying; Hsu, Bing-Cheng

    2018-05-14

    Waxing is an important aspect of automobile detailing, aimed at protecting the finish of the car and preventing rust. At present, this delicate work is conducted manually due to the need for iterative adjustments to achieve acceptable quality. This paper presents a robotic waxing system in which surface images are used to evaluate the quality of the finish. An RGB-D camera is used to build a point cloud that details the sheet metal components to enable path planning for a robot manipulator. The robot is equipped with a multi-axis force sensor to measure and control the forces involved in the application and buffing of wax. Images of sheet metal components that were waxed by experienced car detailers were analyzed using image processing algorithms. A Gaussian distribution function and its parameterized values were obtained from the images for use as a performance criterion in evaluating the quality of surfaces prepared by the robotic waxing system. Waxing force and dwell time were optimized using a mathematical model based on the image-based criterion used to measure waxing performance. Experimental results demonstrate the feasibility of the proposed robotic waxing system and image-based performance evaluation scheme.

  6. Real-time imaging of {sup 35}S-sulfate uptake in a rapeseed plant

    Energy Technology Data Exchange (ETDEWEB)

    Nakanishi, T.M.; Yamawaki, M.; Ishibashi, H.; Tanoi, K. [Tokyo Univ. (Japan). Lab. of Radioisotope Plant Physiology

    2011-07-01

    We present real-time images of {sup 35}S-sulfate uptake in a rapeseed plant visualized by the system we developed. In the leaves of rapeseed plants, {sup 35}S accumulated in higher amounts and more rapidly in the more developed leaves. This real-time imaging system can be used to visualize the movement of both {sup 35}S and {sup 32}P in the same plant. In the pods of rapeseed, images of {sup 35}S show that {sup 35}S accumulated mostly in the terminal parts; on the other hand {sup 32}P, when applied as {sup 32}P-phosphoric acid, accumulated in the middle part of the pods. (orig.)

  7. SU-F-BRE-05: Development and Evaluation of a Real-Time Robotic 6D Quality Assurance Phantom

    Energy Technology Data Exchange (ETDEWEB)

    Belcher, AH; Liu, X; Grelewicz, Z; Wiersma, RD [The University of Chicago, Chicago, IL (United States)

    2014-06-15

    Purpose: A 6 degree-of-freedom robotic phantom capable of reproducing dynamic tumor motion in 6D was designed to more effectively match solid tumor movements throughout pre-treatment scanning and radiation therapy. With the abundance of optical and x-ray 6D real-time tumor tracking methodologies clinically available, and the substantial dosimetric consequences of failing to consider tumor rotation as well as translation, this work presents the development and evaluation of a 6D instrument with the facility to improve quality assurance. Methods: An in-house designed and built 6D robotic motion phantom was constructed following the so-called Stewart-Gough parallel kinematics platform archetype. The device was then controlled using an inverse kinematics formulation, and precise movements in all six degrees of freedom (X, Y, Z, pitch, roll, and yaw) as well as previously obtained cranial motion, were effectively executed. The robotic phantom movements were verified using a 15 fps 6D infrared marker tracking system (Polaris, NDI), and quantitatively compared to the input trajectory. Thus, the accuracy and repeatability of 6D motion was investigated and the phantom performance was characterized. Results: Evaluation of the 6D platform demonstrated translational RMSE values of 0.196 mm, 0.260 mm, and 0.101 mm over 20 mm in X and Y and 10 mm in Z, respectively, and rotational RMSE values of 0.068 degrees, 0.0611 degrees, and 0.095 degrees over 10 degrees of pitch, roll, and yaw, respectively. The robotic stage also effectively performed controlled 6D motions, as well as reproduced cranial trajectories over 15 minutes, with a maximal RMSE of 0.044 mm translationally and 0.036 degrees rotationally. Conclusion: This 6D robotic phantom has proven to be accurate under clinical standards and capable of reproducing tumor motion in 6D. Consequently, such a robotics device has the potential to serve as a more effective system for IGRT QA that involves both translational and

  8. SU-F-BRE-05: Development and Evaluation of a Real-Time Robotic 6D Quality Assurance Phantom

    International Nuclear Information System (INIS)

    Belcher, AH; Liu, X; Grelewicz, Z; Wiersma, RD

    2014-01-01

    Purpose: A 6 degree-of-freedom robotic phantom capable of reproducing dynamic tumor motion in 6D was designed to more effectively match solid tumor movements throughout pre-treatment scanning and radiation therapy. With the abundance of optical and x-ray 6D real-time tumor tracking methodologies clinically available, and the substantial dosimetric consequences of failing to consider tumor rotation as well as translation, this work presents the development and evaluation of a 6D instrument with the facility to improve quality assurance. Methods: An in-house designed and built 6D robotic motion phantom was constructed following the so-called Stewart-Gough parallel kinematics platform archetype. The device was then controlled using an inverse kinematics formulation, and precise movements in all six degrees of freedom (X, Y, Z, pitch, roll, and yaw) as well as previously obtained cranial motion, were effectively executed. The robotic phantom movements were verified using a 15 fps 6D infrared marker tracking system (Polaris, NDI), and quantitatively compared to the input trajectory. Thus, the accuracy and repeatability of 6D motion was investigated and the phantom performance was characterized. Results: Evaluation of the 6D platform demonstrated translational RMSE values of 0.196 mm, 0.260 mm, and 0.101 mm over 20 mm in X and Y and 10 mm in Z, respectively, and rotational RMSE values of 0.068 degrees, 0.0611 degrees, and 0.095 degrees over 10 degrees of pitch, roll, and yaw, respectively. The robotic stage also effectively performed controlled 6D motions, as well as reproduced cranial trajectories over 15 minutes, with a maximal RMSE of 0.044 mm translationally and 0.036 degrees rotationally. Conclusion: This 6D robotic phantom has proven to be accurate under clinical standards and capable of reproducing tumor motion in 6D. Consequently, such a robotics device has the potential to serve as a more effective system for IGRT QA that involves both translational and
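
    Both records note that the Stewart-Gough platform is driven through an inverse-kinematics formulation. For this parallel architecture the inverse kinematics is closed-form: each actuator length is simply the distance between its base anchor and the pose-transformed platform anchor, so no iteration is needed. The sketch below uses illustrative anchor geometry, not the phantom's actual dimensions.

```python
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Rotation matrix from Z-Y-X Euler angles (radians)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def leg_lengths(base_pts, plat_pts, t, R):
    """Closed-form inverse kinematics of a Stewart-Gough platform: for a
    commanded 6D pose (translation t, rotation R), actuator i must have
    length ||t + R @ p_i - b_i||.  base_pts, plat_pts: (6, 3) anchor arrays."""
    return np.linalg.norm(t + plat_pts @ R.T - base_pts, axis=1)
```

    Commanding a 6D trajectory then reduces to evaluating `leg_lengths` at each time step and streaming the six lengths to the actuators.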

  9. Real-Time Ultrasound/MRI Fusion for Suprasacral Parallel Shift Approach to Lumbosacral Plexus Blockade and Analysis of Injectate Spread

    DEFF Research Database (Denmark)

    Strid, Jennie Maria Christin; Pedersen, Erik Morre; Al-Karradi, Sinan Naseer Hussain

    2017-01-01

    Fused real-time ultrasound and magnetic resonance imaging (MRI) may be used to improve the accuracy of advanced image guided procedures. However, its use in regional anesthesia is practically nonexistent. In this randomized controlled crossover trial, we aim to explore effectiveness, procedure-re...

  10. Visual Trajectory-Tracking Model-Based Control for Mobile Robots

    Directory of Open Access Journals (Sweden)

    Andrej Zdešar

    2013-09-01

    Full Text Available In this paper we present a visual-control algorithm for driving a mobile robot along a reference trajectory. The system consists of a two-wheeled, differentially driven mobile robot observed by an overhead camera, which can be placed at an arbitrary, but reasonable, inclination with respect to the ground plane. The controller must be capable of generating appropriate tangential and angular control velocities for the trajectory-tracking problem, based on the information about the robot position obtained from the image. To track the position of the robot through a sequence of images in real time, the robot is marked with an artificial marker that can be distinguishably recognized by the image recognition subsystem. Using the property of differential flatness, a dynamic feedback compensator can be designed for the system, thereby extending the system into a linear form. The presented control algorithm for reference tracking combines a feedforward and a feedback loop, a structure also known as a two-DOF control scheme. The feedforward part should drive the system to the vicinity of the reference trajectory, and the feedback part should eliminate any errors that occur due to noise and other disturbances. Feedforward control alone can never achieve accurate reference following, but this deficiency can be eliminated with the introduction of the feedback loop. The design of the model predictive control is based on the linear error model. The model predictive control is given in analytical form, so the computational burden is kept at a reasonable level for real-time implementation. The control algorithm requires that the reference trajectory be at least a twice-differentiable function. A suitable approach to designing such a trajectory is to exploit some useful properties of Bernstein-Bézier parametric curves. The simulation experiments as well as real system experiments on a robot normally used in the
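
    The twice-differentiable reference requirement is naturally met by Bernstein-Bézier curves, since a Bézier curve is smooth and its derivatives are again Bézier curves of the difference ("hodograph") control points. A minimal sketch (not the paper's code; the feedforward formulas are the standard flatness-based ones for a differentially driven robot, and the curve degree is assumed to be at least 2):

```python
import numpy as np
from math import comb

def bezier(ctrl, t):
    """Evaluate a planar Bernstein-Bézier curve and its first two derivatives
    at t in [0, 1]. ctrl: sequence of (x, y) control points, degree >= 2."""
    ctrl = np.asarray(ctrl, float)
    n = len(ctrl) - 1
    B = lambda pts: sum(comb(len(pts) - 1, i) * (1 - t) ** (len(pts) - 1 - i) * t ** i * p
                        for i, p in enumerate(pts))
    d1 = n * np.diff(ctrl, axis=0)        # control points of the derivative curve
    d2 = (n - 1) * np.diff(d1, axis=0)    # control points of the second derivative
    return B(ctrl), B(d1), B(d2)

def feedforward(ctrl, t):
    """Feedforward tangential (v) and angular (w) velocity along the curve,
    via the standard flatness-based formulas for a differential drive."""
    _, (dx, dy), (ddx, ddy) = bezier(ctrl, t)
    v = np.hypot(dx, dy)
    w = (dx * ddy - dy * ddx) / (dx ** 2 + dy ** 2)
    return v, w
```

    Sampling `feedforward` along t yields the open-loop velocity profile; the feedback loop then only has to correct residual tracking errors.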

  11. Real-time synthetic aperture imaging: opportunities and challenges

    DEFF Research Database (Denmark)

    Nikolov, Svetoslav; Tomov, Borislav Gueorguiev; Jensen, Jørgen Arendt

    2006-01-01

    the development and implementation of the signal processing stages employed in SA imaging: compression of received data acquired using codes, and beamforming. The goal was to implement the system using commercially available field programmable gate arrays. The compression filter operates on frequency modulated...... pulses with duration of up to 50 μs sampled at 70 MHz. The beamformer can process data from 256 channels at a pulse repetition frequency of 5000 Hz and produces 192 lines of 1024 complex samples in real time. The lines are described by their origin, direction, length and distance between two samples...
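
    The compression stage described above is, in essence, a matched filter applied to the received coded (frequency-modulated) pulses. A small numpy sketch of that operation, using the 70 MHz sampling rate from the abstract; the chirp band, pulse duration, delay, and noise level are invented for illustration, and the paper's FPGA implementation is of course fixed-point and streaming:

    ```python
    import numpy as np

    fs = 70e6                       # sampling rate from the abstract
    T = 20e-6                       # example pulse duration (system supports up to 50 us)
    f0, f1 = 3e6, 7e6               # assumed chirp band
    t = np.arange(int(T * fs)) / fs
    chirp = np.sin(2 * np.pi * (f0 * t + (f1 - f0) / (2 * T) * t**2))

    # Received trace: the chirp buried in noise at a known delay
    rng = np.random.default_rng(0)
    trace = np.zeros(8192)
    delay = 3000
    trace[delay:delay + len(chirp)] += chirp
    trace += 0.3 * rng.standard_normal(len(trace))

    # Matched filter = cross-correlation with the pulse, i.e. convolution
    # with the time-reversed pulse; the peak recovers the injected delay.
    compressed = np.convolve(trace, chirp[::-1], mode='valid')
    peak = int(np.argmax(np.abs(compressed)))
    ```

    The compression gain of the chirp (time-bandwidth product) is what lets the real system use long coded pulses without sacrificing axial resolution.
    
    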

  12. Real-time imaging of radioisotope labeled compounds in a living plant

    International Nuclear Information System (INIS)

    Kanno, S.; Ohya, T.; Hayashi, Y.; Tanoi, K.; Nakanishi, T.M.

    2007-01-01

    We developed a quantitative, real-time imaging system for labeled compounds in a living plant. The system was composed of a CsI scintillator to convert β-rays to visible light and an image intensifier unit (composed of a GaAsP semiconductor and an MCP; micro-channel plate) to detect extremely weak light. When the sensitivity and resolution of our system were compared with those of an imaging plate (IP), the sensitivity of our system (with 20 minutes) was higher than that of an IP, with similar image quality. Using this system, the translocation of 32P in soybean plant tissue was shown in successive images. (author)

  13. STELLA: 10 years of robotic observations on Tenerife

    Science.gov (United States)

    Weber, Michael; Granzer, Thomas; Strassmeier, Klaus G.

    2016-07-01

    STELLA is a robotic observatory on Tenerife housing two 1.2m robotic telescopes. One telescope is fibre-feeding a high-resolution (R=55,000) échelle spectrograph (SES), while the other telescope is equipped with a visible wide-field (FOV=22' x 22') imaging instrument (WiFSIP). Robotic observations started mid 2006, and the primary scientific driver is monitoring of stellar-activity related phenomena. The STELLA Control System (SCS) software package was originally tailored to the STELLA roll-off style building and high-resolution spectroscopy, but was extended over the years to support the wide-field imager, an off-axis guider for the imager, separate acquisition telescopes, classical domes, and targets-of-opportunity. The SCS allows for unattended, off-line operation of the observatory; targets can be uploaded at any time and are selected based on merit-functions in real-time (dispatch scheduling). We report on the current status of the observatory and the current capabilities of the SCS.

  14. qF-SSOP: real-time optical property corrected fluorescence imaging

    Science.gov (United States)

    Valdes, Pablo A.; Angelo, Joseph P.; Choi, Hak Soo; Gioux, Sylvain

    2017-01-01

    Fluorescence imaging is well suited to provide image guidance during resections in oncologic and vascular surgery. However, the distorting effects of tissue optical properties on the emitted fluorescence are poorly compensated for on even the most advanced fluorescence image guidance systems, leading to subjective and inaccurate estimates of tissue fluorophore concentrations. Here we present a novel fluorescence imaging technique that performs real-time (i.e., video rate) optical property corrected fluorescence imaging. We perform full field of view simultaneous imaging of tissue optical properties using Single Snapshot of Optical Properties (SSOP) and fluorescence detection. The estimated optical properties are used to correct the emitted fluorescence with a quantitative fluorescence model to provide quantitative fluorescence-Single Snapshot of Optical Properties (qF-SSOP) images with less than 5% error. The technique is rigorous, fast, and quantitative, enabling ease of integration into the surgical workflow with the potential to improve molecular guidance intraoperatively. PMID:28856038

  15. SU-F-303-17: Real Time Dose Calculation of MRI Guided Co-60 Radiotherapy Treatments On Free Breathing Patients, Using a Motion Model and Fast Monte Carlo Dose Calculation

    International Nuclear Information System (INIS)

    Thomas, D; O’Connell, D; Lamb, J; Cao, M; Yang, Y; Agazaryan, N; Lee, P; Low, D

    2015-01-01

    Purpose: To demonstrate real-time dose calculation of free-breathing MRI-guided Co-60 treatments, using a motion model and Monte Carlo dose calculation to accurately account for the interplay between irregular breathing motion and an IMRT delivery. Methods: ViewRay Co-60 dose distributions were optimized on ITVs contoured from free-breathing CT images of lung cancer patients. Each treatment plan was separated into 0.25 s segments, accounting for the MLC positions and beam angles at each time point. A voxel-specific motion model derived from multiple fast-helical free-breathing CTs and deformable registration was calculated for each patient. 3D images for every 0.25 s of a simulated treatment were generated in real time, here using a bellows signal as a surrogate to accurately account for breathing irregularities. Monte Carlo dose calculation was performed for every 0.25 s of the treatment, with the number of histories in each calculation scaled to give an overall 1% statistical uncertainty. Each dose calculation was deformed back to the reference image using the motion model and accumulated. The static and real-time dose calculations were compared. Results: Image generation was performed in real time at 4 frames per second (GPU). Monte Carlo dose calculation was performed at approximately 1 frame per second (CPU), giving a total calculation time of approximately 30 minutes per treatment. Results show both cold and hot spots in and around the ITV, and increased dose to the contralateral lung as the tumor moves in and out of the beam during treatment. Conclusion: An accurate motion model combined with a fast Monte Carlo dose calculation allows almost real-time dose calculation of a free-breathing treatment. When combined with sagittal 2D cine-mode MRI during treatment to update the motion model in real time, this will allow the true delivered dose of a treatment to be calculated, providing a useful tool for adaptive planning and for assessing the effectiveness of gated treatments.
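
    The accumulation step described in the Methods, deforming each 0.25 s segment dose back to the reference image before summing, can be illustrated with a deliberately simplified 1-D sketch. A rigid integer voxel shift stands in for the deformable motion model, and nothing here reproduces the actual ViewRay geometry or Monte Carlo transport:

    ```python
    import numpy as np

    # Toy 1-D sketch: each segment deposits dose on the instantaneous (shifted)
    # anatomy; the motion model supplies the inverse mapping back to the
    # reference image, where the per-segment doses are accumulated.
    n_vox, n_segments = 100, 8
    reference_dose = np.zeros(n_vox)
    beam = np.exp(-0.5 * ((np.arange(n_vox) - 50) / 5.0) ** 2)  # static beam profile

    rng = np.random.default_rng(0)
    for _ in range(n_segments):
        shift = int(rng.integers(-3, 4))        # surrogate-driven displacement (voxels)
        segment_dose = np.roll(beam, shift)     # dose as deposited on the moving anatomy
        mapped = np.roll(segment_dose, -shift)  # motion model maps it back to reference
        reference_dose += mapped / n_segments

    # With a perfectly invertible motion model the accumulated dose matches the
    # static plan; interplay effects appear when the mapping is imperfect or the
    # beam aperture itself changes between segments.
    ```

    In the real system the "shift" is a full 3-D deformation field evaluated from the patient-specific motion model at each 0.25 s time point.
    
    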

  16. Objective specific beam generation for image guided robotic radiosurgery

    International Nuclear Information System (INIS)

    Schlaefer, A.; Jungmann, O.; Schweikard, A.; Kilby, W.

    2007-01-01

    Robotic radiosurgery enables precise dose delivery throughout the body. Planning for robotic radiosurgery comprises finding a suitable set of beams and beam weights. The problem can be addressed by generating a large set of candidate beams and selecting beams with nonzero weight by mathematical programming. We propose to use different randomized beam generation methods depending on the type of lesion and the clinical objective. Results for three patient cases indicate that this can improve the plan quality. (orig.)
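
    The "selection of beams with nonzero weight by mathematical programming" can be sketched as a small linear program. Everything below is an illustrative assumption, not the planning system's actual formulation: random dose-deposition matrices stand in for computed beam doses, and scipy's `linprog` stands in for the clinical solver.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(1)
    n_beams, n_target, n_oar = 40, 25, 15

    # Hypothetical dose per unit weight from each candidate beam to each voxel
    D_t = rng.uniform(0.0, 1.0, (n_target, n_beams))   # target voxels
    D_o = rng.uniform(0.0, 0.3, (n_oar, n_beams))      # organ-at-risk voxels

    # Minimise total beam weight subject to
    #   D_t @ w >= 1.0  (target coverage)   and   D_o @ w <= 0.6  (OAR sparing)
    res = linprog(c=np.ones(n_beams),
                  A_ub=np.vstack([-D_t, D_o]),
                  b_ub=np.concatenate([-np.ones(n_target), 0.6 * np.ones(n_oar)]),
                  bounds=[(0, None)] * n_beams)
    w = res.x
    selected = np.flatnonzero(w > 1e-6)   # beams that receive nonzero weight
    ```

    An LP solver naturally returns a sparse set of active beams, which is why the candidate-generation strategy studied in the paper matters: only candidates that exist can end up with nonzero weight.
    
    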

  17. Objective specific beam generation for image guided robotic radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Schlaefer, A.; Jungmann, O.; Schweikard, A. [Inst. for Robotics and Cognitive Systems, Univ. of Luebeck (Germany); Kilby, W. [Accuray Inc., Sunnyvale, CA (United States)

    2007-06-15

    Robotic radiosurgery enables precise dose delivery throughout the body. Planning for robotic radiosurgery comprises finding a suitable set of beams and beam weights. The problem can be addressed by generating a large set of candidate beams and selecting beams with nonzero weight by mathematical programming. We propose to use different randomized beam generation methods depending on the type of lesion and the clinical objective. Results for three patient cases indicate that this can improve the plan quality. (orig.)

  18. Applications of Near Real-Time Image and Fire Products from MODIS

    Science.gov (United States)

    Schmaltz, J. E.; Ilavajhala, S.; Teague, M.; Ye, G.; Masuoka, E.; Davies, D.; Murphy, K. J.; Michael, K.

    2010-12-01

    NASA’s MODIS Rapid Response Project (http://rapidfire.sci.gsfc.nasa.gov/) has been providing MODIS fire detections and imagery in near real-time since 2001. The Rapid Response system is part of the Land, Atmosphere Near real-time Capability for EOS (LANCE-MODIS) system. Current capabilities include providing MODIS imagery in true-color and false-color band combinations, a vegetation index, and temperature, in both uncorrected swath format and geographically corrected subset regions. The geographically corrected subset images cover the world's land areas and adjoining waters, as well as the entire Arctic and Antarctic. These data are available within a few hours of acquisition. The images are accessed by a large number of user communities to obtain a rapid, 250-meter-resolution overview of ground conditions for fire management, crop and famine monitoring and forecasting, disaster response (fires, oil spills, floods, storms), dust and aerosol monitoring, aviation (tracking volcanic ash), monitoring sea-ice conditions, environmental monitoring, and more. In addition, the scientific community uses the imagery to locate phenomena of interest prior to ordering and processing data and to support the day-to-day planning of field campaigns. The MODIS Rapid Response project has also been providing a near real-time data feed of fire locations and MODIS imagery subsets to the Fire Information for Resource Management System (FIRMS) project (http://maps.geog.umd.edu/firms). FIRMS provides timely fire location information, which is essential in preventing and fighting large forest and wild fires. Products are available through a WebGIS for visualizing MODIS hotspots and MCD45 Burned Area images, an email alerting tool to deliver fire data on a daily/weekly/near real-time basis, and active data downloads in formats such as shapefile, KML, CSV, WMS, etc., along with MODIS imagery subsets. FIRMS’ user base covers more than 100 countries and territories. A recent user

  19. Effect of residual patient motion on dose distribution during image-guided robotic radiosurgery for skull tracking based on log file analysis

    International Nuclear Information System (INIS)

    Inoue, Mitsuhiro; Shiomi, Hiroya; Sato, Kengo

    2014-01-01

    The present study aimed to assess the effect of residual patient motion on dose distribution during intracranial image-guided robotic radiosurgery by analyzing the system log files. The dosimetric effect was analyzed according to the difference between the original and estimated dose distributions, including targeting error, caused by residual patient motion between two successive image acquisitions. One hundred twenty-eight treatments were analyzed. Forty-two patients were treated using the isocentric plan, and 86 patients were treated using the conformal (non-isocentric) plan. The median distance from the imaging center to the target was 55 mm, and the median interval between the acquisitions of sequential images was 79 s. The median translational residual patient motion was 0.1 mm for each axis, and the rotational residual patient motion was 0.1 deg for Δpitch and Δroll and 0.2 deg for Δyaw. The dose error for D95 was within 1% in more than 95% of cases. The maximum dose error for D10 to D90 was within 2%. None of the studied parameters, including the interval between the acquisitions of sequential images, was significantly related to the dosimetric effect. The effect of residual patient motion on dose distribution was minimal. (author)

  20. Real-Time Control System for Improved Precision and Throughput in an Ultrafast Carbon Fiber Placement Robot Using a SoC FPGA Extended Processing Platform

    Directory of Open Access Journals (Sweden)

    Gilberto Ochoa-Ruiz

    2017-01-01

    Full Text Available We present an architecture for accelerating the processing and execution of control commands in an ultrafast fiber placement robot. The system consists of a robotic arm designed by Coriolis Composites whose purpose is to move along a surface, on which composite fibers are deposited, via an independently controlled head. In the first system implementation, the control commands were sent via Profibus by a PLC, limiting the reaction time and thus both the precision of the fiber placement and the maximum throughput. Therefore, a custom real-time solution was imperative in order to improve the performance and to meet the stringent requirements of the target industry (avionics, aeronautical systems). The solution presented in this paper is based on the use of a SoC FPGA processing platform running a real-time operating system (FreeRTOS), which has enabled an improved command retrieval mechanism. The system’s placement precision was improved by a factor of 20 (from 1 mm to 0.05 mm), while the maximum achievable throughput was 1 m/s, compared to the average 30 cm/s provided by the original solution, enabling the fabrication of more complex and larger pieces in a fraction of the time.

  1. Interlaced photoacoustic and ultrasound imaging system with real-time coregistration for ovarian tissue characterization

    Science.gov (United States)

    Alqasemi, Umar; Li, Hai; Yuan, Guangqian; Kumavor, Patrick; Zanganeh, Saeid; Zhu, Quing

    2014-07-01

    Coregistered ultrasound (US) and photoacoustic imaging are emerging techniques for mapping the echogenic anatomical structure of tissue and its corresponding optical absorption. We report a 128-channel imaging system with real-time coregistration of the two modalities, which provides up to 15 coregistered frames per second, limited by the laser pulse repetition rate. In addition, the system integrates a compact transvaginal imaging probe with a custom-designed fiber-optic assembly for in vivo detection and characterization of human ovarian tissue. We present the structure of the coregistered US and photoacoustic imaging system, the optimal design of the PC interfacing software, and the reconfigurable field-programmable gate array operation and optimization. Phantom experiments evaluating lateral resolution and axial sensitivity, examples of real-time scanning of a tumor-bearing mouse, and ex vivo studies of human ovaries are demonstrated.

  2. Developing stereo image based robot control system

    Energy Technology Data Exchange (ETDEWEB)

    Suprijadi,; Pambudi, I. R.; Woran, M.; Naa, C. F; Srigutomo, W. [Department of Physics, FMIPA, InstitutTeknologi Bandung Jl. Ganesha No. 10. Bandung 40132, Indonesia supri@fi.itb.ac.id (Indonesia)

    2015-04-16

    Image processing is applied in various fields and for various purposes. In the last decade, image-based systems have advanced rapidly with the increasing performance of hardware and microprocessors. Many fields of science and technology use these methods, especially medicine and instrumentation. New stereovision techniques that yield a 3-dimensional image or movie are very interesting, but they have few applications in control systems. A stereo image contains pixel-disparity information that does not exist in a single image. In this research, we propose a new method for a wheeled-robot control system using stereovision. The results show that the robot moves automatically based on stereovision captures.

  3. Augmented reality based real-time subcutaneous vein imaging system.

    Science.gov (United States)

    Ai, Danni; Yang, Jian; Fan, Jingfan; Zhao, Yitian; Song, Xianzheng; Shen, Jianbing; Shao, Ling; Wang, Yongtian

    2016-07-01

    A novel 3D reconstruction and fast imaging system for subcutaneous veins based on augmented reality is presented. The study was performed to reduce the failure rate and the time required for intravenous injection by providing augmented vein structures that back-project superimposed veins onto the skin surface of the hand. Images of the subcutaneous veins are captured by two industrial cameras with additional reflective near-infrared lighting. The veins are then segmented by a multiple-feature clustering method. Vein structures captured by the two cameras are matched and reconstructed based on the epipolar constraint and homographic property. The skin surface is reconstructed by active structured light with spatial encoding values and fused for display with the reconstructed veins. The veins and skin surface are both reconstructed in 3D space. Results show that the structures can be precisely back-projected onto the back of the hand for further augmented display and visualization. The overall system performance is evaluated in terms of vein segmentation, vein-matching accuracy, feature-point distance error, duration times, skin-reconstruction accuracy, and augmented display. All experiments are validated with sets of real vein data. The system produces good imaging and augmented-reality results at high speed.
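
    The matching-and-reconstruction step from two calibrated cameras can be sketched with standard linear (DLT) triangulation; the camera parameters and test point below are invented, and the paper's actual epipolar matching and structured-light pipeline is considerably more involved:

    ```python
    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of one point seen in two views.
        P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v)."""
        A = np.stack([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)     # null vector of A is the homogeneous point
        X = Vt[-1]
        return X[:3] / X[3]

    # Two toy cameras: identical intrinsics, second camera translated along x
    K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([np.eye(3), np.array([[-100.0], [0.0], [0.0]])])

    X_true = np.array([30.0, -20.0, 500.0])
    project = lambda P, X: (P @ np.append(X, 1))[:2] / (P @ np.append(X, 1))[2]
    X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
    ```

    With noiseless correspondences the recovered point matches the true 3-D position exactly; with real segmented vein points, the epipolar constraint is what restricts the candidate matches before triangulation.
    
    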

  4. Thoughts turned into high-level commands: Proof-of-concept study of a vision-guided robot arm driven by functional MRI (fMRI) signals.

    Science.gov (United States)

    Minati, Ludovico; Nigri, Anna; Rosazza, Cristina; Bruzzone, Maria Grazia

    2012-06-01

    Previous studies have demonstrated the possibility of using functional MRI to control a robot arm through a brain-machine interface by directly coupling haemodynamic activity in the sensory-motor cortex to the position of two axes. Here, we extend this work by implementing interaction at a more abstract level, whereby imagined actions deliver structured commands to a robot arm guided by a machine vision system. Rather than extracting signals from a small number of pre-selected regions, the proposed system adaptively determines at the individual level how to map representative brain areas to the input nodes of a classifier network. In this initial study, a median action recognition accuracy of 90% was attained on five volunteers performing a game consisting of collecting randomly positioned coloured pawns and placing them into cups. The "pawn" and "cup" instructions were imparted through four mental imagery tasks, linked to robot arm actions by a state machine. With the current implementation in MATLAB, the median action recognition time was 24.3 s and the robot execution time was 17.7 s. We demonstrate the notion of combining haemodynamic brain-machine interfacing with computer vision to implement interaction at the level of high-level commands rather than individual movements, which may find application in future fMRI approaches relevant to brain-lesioned patients, and provide source code supporting further work on larger command sets and real-time processing. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
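
    The state machine that links decoded imagery commands to robot-arm actions can be sketched as a small transition table. The state and command names below are invented for illustration; the paper's actual command set and sequencing are not reproduced here:

    ```python
    # Hypothetical pick-and-place interface: each decoded mental-imagery label
    # advances the interface state; anything unrecognized leaves it unchanged.
    TRANSITIONS = {
        ("idle",        "select_pawn"): "pawn_chosen",
        ("pawn_chosen", "select_cup"):  "cup_chosen",
        ("cup_chosen",  "confirm"):     "executing",   # vision-guided arm takes over
        ("executing",   "done"):        "idle",
    }

    def step(state, label):
        """Advance the interface state for one decoded command."""
        return TRANSITIONS.get((state, label), state)

    state = "idle"
    for decoded in ["select_pawn", "noise", "select_cup", "confirm", "done"]:
        state = step(state, decoded)
    ```

    Ignoring unknown labels makes the interface robust to misclassified imagery blocks, which matters when each decoded command takes tens of seconds of fMRI data, as in the reported recognition times.
    
    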

  5. Mobile robot navigation in unknown static environments using ANFIS controller

    Directory of Open Access Journals (Sweden)

    Anish Pandey

    2016-09-01

    Full Text Available Navigation and obstacle avoidance are the most important tasks for any mobile robot. This article presents an Adaptive Neuro-Fuzzy Inference System (ANFIS) controller for mobile robot navigation and obstacle avoidance in unknown static environments. Different sensors, such as an ultrasonic range finder and a Sharp infrared range sensor, are used to detect forward obstacles in the environment. The inputs of the ANFIS controller are the obstacle distances obtained from the sensors, and the controller output is the robot steering angle. The primary objective of the present work is to use the ANFIS controller to guide the mobile robot in the given environments. Computer simulations are conducted in MATLAB and implemented in real time on an Arduino microcontroller-based mobile robot programmed in C/C++. Moreover, the successful experimental results on the actual mobile robot demonstrate the effectiveness and efficiency of the proposed controller.
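
    ANFIS tunes the parameters of a Sugeno-type fuzzy inference system. A minimal hand-parameterized sketch of that model family for the distances-in, steering-angle-out task described above; the membership centres, widths, and rule consequents are invented, not trained values from the paper:

    ```python
    import numpy as np

    def gauss(x, c, s):
        """Gaussian membership function."""
        return np.exp(-0.5 * ((x - c) / s) ** 2)

    def steering_angle(left, front, right):
        """Zero-order Sugeno fuzzy inference: rule firing strengths weight crisp
        consequents (degrees, positive = turn left). Distances in cm."""
        near_l = gauss(left, 0, 30)
        near_f = gauss(front, 0, 30)
        near_r = gauss(right, 0, 30)
        far_f = 1 - near_f
        rules = [
            (near_f * (1 - near_l),                   +45.0),  # blocked ahead, left clear
            (near_f * near_l,                         -45.0),  # blocked ahead and left
            (near_r * far_f,                          +15.0),  # obstacle on right only
            (near_l * far_f,                          -15.0),  # obstacle on left only
            (far_f * (1 - near_l) * (1 - near_r),       0.0),  # all clear: go straight
        ]
        w = np.array([r[0] for r in rules])
        z = np.array([r[1] for r in rules])
        return float(w @ z / w.sum())    # weighted-average defuzzification
    ```

    ANFIS would learn the Gaussian centres/widths and the consequent values from data instead of the hand-picked numbers used here; the inference structure is the same.
    
    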

  6. High-accuracy drilling with an image guided light weight robot: autonomous versus intuitive feed control.

    Science.gov (United States)

    Tauscher, Sebastian; Fuchs, Alexander; Baier, Fabian; Kahrs, Lüder A; Ortmaier, Tobias

    2017-10-01

    Assistance of robotic systems in the operating room promises higher accuracy, making demanding surgical interventions such as direct cochlear access realisable. Additionally, an intuitive user interface is crucial for the use of robots in surgery. Torque sensors in the joints can be employed for intuitive interaction concepts. Regarding accuracy, however, they lead to lower structural stiffness and thus to an additional error source. The aim of this contribution is to examine whether the accuracy needed for demanding interventions can be achieved by such a system. The achievable accuracy of the robot-assisted process depends on each workflow step. This work focuses on the determination of the tool coordinate frame. A method for drill-axis definition is implemented and analysed. Furthermore, a concept of admittance feed control is developed, which allows the user to control feeding along the planned path by applying a force to the robot's structure. The accuracy is investigated through drilling experiments with a PMMA phantom and artificial bone blocks. The described drill-axis estimation process results in high angular repeatability ([Formula: see text]). In the first set of drilling results, an accuracy of [Formula: see text] at the entrance and [Formula: see text] at the target point, excluding imaging, was achieved. With admittance feed control, an accuracy of [Formula: see text] at the target point was realised. In a third set, twelve holes were drilled in artificial temporal bone phantoms, including imaging. In this set-up, errors of [Formula: see text] and [Formula: see text] were achieved. The results of the conducted experiments show that the accuracy requirements for demanding procedures such as direct cochlear access can be fulfilled with compliant systems. Furthermore, it was shown that with the presented admittance feed control an accuracy of less than [Formula: see text] is achievable.

  7. Real-time terahertz imaging through self-mixing in a quantum-cascade laser

    Energy Technology Data Exchange (ETDEWEB)

    Wienold, M., E-mail: martin.wienold@dlr.de; Rothbart, N.; Hübers, H.-W. [Institute of Optical Sensor Systems, German Aerospace Center (DLR), Rutherfordstr. 2, 12489 Berlin (Germany); Department of Physics, Humboldt-Universität zu Berlin, Newtonstr. 15, 12489 Berlin (Germany); Hagelschuer, T. [Institute of Optical Sensor Systems, German Aerospace Center (DLR), Rutherfordstr. 2, 12489 Berlin (Germany); Schrottke, L.; Biermann, K.; Grahn, H. T. [Paul-Drude-Institut für Festkörperelektronik, Leibniz-Institut im Forschungsverbund Berlin e. V., Hausvogteiplatz 5-7, 10117 Berlin (Germany)

    2016-07-04

    We report on a fast self-mixing approach for real-time, coherent terahertz imaging based on a quantum-cascade laser and a scanning mirror. Due to a fast deflection of the terahertz beam, images with frame rates up to several Hz are obtained, eventually limited by the mechanical inertia of the employed scanning mirror. A phase modulation technique allows for the separation of the amplitude and phase information without the necessity of parameter fitting routines. We further demonstrate the potential for transmission imaging.

  8. MO-FG-BRD-01: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: Introduction and KV Tracking

    International Nuclear Information System (INIS)

    Fahimian, B.

    2015-01-01

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow

  9. MO-FG-BRD-01: Real-Time Imaging and Tracking Techniques for Intrafractional Motion Management: Introduction and KV Tracking

    Energy Technology Data Exchange (ETDEWEB)

    Fahimian, B. [Stanford University (United States)

    2015-06-15

    Intrafraction target motion is a prominent complicating factor in the accurate targeting of radiation within the body. Methods compensating for target motion during treatment, such as gating and dynamic tumor tracking, depend on the delineation of target location as a function of time during delivery. A variety of techniques for target localization have been explored and are under active development; these include beam-level imaging of radio-opaque fiducials, fiducial-less tracking of anatomical landmarks, tracking of electromagnetic transponders, optical imaging of correlated surrogates, and volumetric imaging within treatment delivery. The Joint Imaging and Therapy Symposium will provide an overview of the techniques for real-time imaging and tracking, with special focus on emerging modes of implementation across different modalities. In particular, the symposium will explore developments in 1) Beam-level kilovoltage X-ray imaging techniques, 2) EPID-based megavoltage X-ray tracking, 3) Dynamic tracking using electromagnetic transponders, and 4) MRI-based soft-tissue tracking during radiation delivery. Learning Objectives: Understand the fundamentals of real-time imaging and tracking techniques Learn about emerging techniques in the field of real-time tracking Distinguish between the advantages and disadvantages of different tracking modalities Understand the role of real-time tracking techniques within the clinical delivery work-flow.

  10. An optical super-microscope for far-field, real-time imaging beyond the diffraction limit.

    Science.gov (United States)

    Wong, Alex M H; Eleftheriades, George V

    2013-01-01

    Optical microscopy suffers from a fundamental resolution limitation arising from the diffractive nature of light. While current solutions to sub-diffraction optical microscopy involve combinations of near-field, non-linear and fine scanning operations, we hereby propose and demonstrate the optical super-microscope (OSM) - a superoscillation-based linear imaging system with far-field working and observation distances - which can image an object in real-time and with sub-diffraction resolution. With our proof-of-principle prototype we report a point spread function with a spot size clearly reduced from the diffraction limit, and demonstrate corresponding improvements in two-point resolution experiments. Harnessing a new understanding of superoscillations, based on antenna array theory, our OSM achieves far-field, sub-diffraction optical imaging of an object without the need for fine scanning, data post-processing or object pre-treatment. Hence the OSM can be used in a wide variety of imaging applications beyond the diffraction limit, including real-time imaging of moving objects.

  11. Vintage meets contemporary: Use of rigid TBNA in the era of real-time imaging - first report from India.

    Science.gov (United States)

    Khan, Ajmal; Nath, Alok; Lal, Hira; Krishnani, Narendra; Agarwal, Aarti

    2018-01-01

    In the modern era, real-time imaging-guided transbronchial needle aspiration (TBNA) has completely replaced the traditional surgical approaches to sampling mediastinal lesions for diagnosis and cancer staging. However, these innovations have a limited role in the presence of critical airway narrowing, because the scope's outer diameter further reduces the cross-sectional area of the airway. Rigid TBNA, with airway control by rigid bronchoscopy, is an alternative that can be used for mediastinal sampling when the modern technique is impracticable. Herein, we report the use of rigid TBNA, an underutilized older method, to sample a mediastinal lesion in a patient with severe orthopnea secondary to tracheal compression by a mediastinal mass.

  12. Apparatus and method for modifying the operation of a robotic vehicle in a real environment, to emulate the operation of the robotic vehicle operating in a mixed reality environment

    Science.gov (United States)

    Garretson, Justin R [Albuquerque, NM; Parker, Eric P [Albuquerque, NM; Gladwell, T Scott [Albuquerque, NM; Rigdon, J Brian [Edgewood, NM; Oppel, III, Fred J.

    2012-05-29

    Apparatus and methods for modifying the operation of a robotic vehicle in a real environment to emulate its operation in a mixed-reality environment include a vehicle sensing system with a communications module attached to the robotic vehicle. The module communicates operating parameters of the robotic vehicle in the real environment to a simulation controller, which simulates the operation of the vehicle in a mixed (live, virtual and constructive) environment, wherein the effects of virtual and constructive entities on the operation of the robotic vehicle (and vice versa) are simulated. These effects are communicated to the vehicle sensing system, which generates a modified control command for the robotic vehicle that includes the effects of the virtual and constructive entities, causing the robot in the real environment to behave as if those entities existed in the real environment.

  13. Real-time intravital imaging of pH variation associated with osteoclast activity.

    Science.gov (United States)

    Maeda, Hiroki; Kowada, Toshiyuki; Kikuta, Junichi; Furuya, Masayuki; Shirazaki, Mai; Mizukami, Shin; Ishii, Masaru; Kikuchi, Kazuya

    2016-08-01

    Intravital imaging by two-photon excitation microscopy (TPEM) has been widely used to visualize cell functions. However, small molecular probes (SMPs), commonly used for cell imaging, cannot be simply applied to intravital imaging because of the challenge of delivering them into target tissues, as well as their undesirable physicochemical properties for TPEM imaging. Here, we designed and developed a functional SMP with an active-targeting moiety, higher photostability, and a fluorescence switch and then imaged target cell activity by injecting the SMP into living mice. The combination of the rationally designed SMP with a fluorescent protein as a reporter of cell localization enabled quantitation of osteoclast activity and time-lapse imaging of its in vivo function associated with changes in cell deformation and membrane fluctuations. Real-time imaging revealed heterogenic behaviors of osteoclasts in vivo and provided insights into the mechanism of bone resorption.

  14. Real-time MR diffusion tensor and Q-ball imaging using Kalman filtering

    International Nuclear Information System (INIS)

    Poupon, C.; Roche, A.; Dubois, J.; Mangin, J.F.; Poupon, F.

    2008-01-01

    Diffusion magnetic resonance imaging (dMRI) has become an established research tool for the investigation of tissue structure and orientation. In this paper, we present a method for real-time processing of diffusion tensor and Q-ball imaging. The basic idea is to use the Kalman filtering framework to fit either the linear tensor or the Q-ball model. Because the Kalman filter is designed to be an incremental algorithm, it naturally enables updating the model estimate after the acquisition of any new diffusion-weighted volume. Processing diffusion models and maps during ongoing scans provides a useful new tool for clinicians, especially when it is not possible to predict how long a subject may remain still in the magnet. First, we introduce the general linear models corresponding to the two diffusion models of interest, the diffusion tensor and the analytical Q-ball. Then, we present the Kalman filtering framework and focus on optimizing the diffusion orientation sets in order to speed up the convergence of the online processing. Finally, we give results on a healthy volunteer for the online tensor and Q-ball models, and we make comparisons with the conventional offline techniques used in the literature. We could achieve full real-time processing for diffusion tensor imaging and deferred time for Q-ball imaging, using a single workstation. (authors)
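    The incremental fitting idea can be illustrated with a generic Kalman (recursive least-squares) update for a linear model, the same machinery used for the log-linear tensor fit. The two-parameter toy model, noise levels, and prior covariance below are illustrative assumptions, not the paper's actual settings:

    ```python
    import numpy as np

    def kalman_update(x, P, h, y, r=1.0):
        """One Kalman/RLS update for a linear model y = h @ x + noise.
        x: current parameter estimate, P: its covariance,
        h: design row for the new measurement, r: measurement noise variance."""
        h = np.asarray(h, dtype=float)
        s = h @ P @ h + r              # innovation variance (scalar)
        k = (P @ h) / s                # Kalman gain
        x = x + k * (y - h @ x)        # state update from the new volume
        P = P - np.outer(k, h @ P)     # covariance update
        return x, P

    # Fit a 2-parameter linear model incrementally, as if one
    # diffusion-weighted "volume" arrived at a time.
    rng = np.random.default_rng(0)
    true = np.array([2.0, -1.0])
    x = np.zeros(2)
    P = np.eye(2) * 1e3                # large prior covariance: weak prior
    for _ in range(200):
        h = rng.normal(size=2)
        y = h @ true + rng.normal(scale=0.01)
        x, P = kalman_update(x, P, h, y, r=0.01 ** 2)
    ```

    After each update, `x` is the best estimate given all volumes so far, which is what allows maps to be refreshed during an ongoing scan.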

  15. [Digital imaging and robotics in endoscopic surgery].

    Science.gov (United States)

    Go, P M

    1998-05-23

    The introduction of endoscopic surgery has, among other things, influenced technical developments in surgery. Owing to digitalisation, major progress will be made in imaging and in the sophisticated technology sometimes called robotics. Digital storage makes the results of imaging diagnostics (e.g. the results of radiological examination) suitable for transmission via video-conference systems for telediagnostic purposes. Digital video technology also makes it possible to process, store, and retrieve moving images. During endoscopic operations, use may be made of a robot arm that replaces the camera operator. The arm does not tire and provides a stable image. The surgeon can operate or address the arm directly, and it can memorize fixed image positions to which it returns on command. The next step is to carry out surgical manipulations via a robot arm, which may make operations more patient-friendly. A robot arm can also be controlled remotely: telerobotics. At the Internet site of this journal a number of supplements to this article can be found, for instance three-dimensional (3D) illustrations (the purpose of the 3D spectacles enclosed with this issue) and a quiz (http://appendix.niwi.knaw.nl).

  16. Adaptive digital image processing in real time: First clinical experiences

    International Nuclear Information System (INIS)

    Andre, M.P.; Baily, N.A.; Hier, R.G.; Edwards, D.K.; Tainer, L.B.; Sartoris, D.J.

    1986-01-01

    The promise of computer image processing has generally not been realized in radiology, partly because the methods advanced to date have been expensive, time-consuming, or inconvenient for clinical use. The authors describe a low-cost system which performs complex image processing operations on-line at video rates. The method uses a combination of unsharp mask subtraction (for low-frequency suppression) and statistical differencing (which adjusts the gain at each point of the image on the basis of its variation from a local mean). The operator interactively adjusts aperture size, contrast gain, background subtraction, and spatial noise reduction. The system is being evaluated for on-line fluoroscopic enhancement, for which phantom measurements and clinical results, including lithotripsy, are presented. When used with a video camera, postprocessing of radiographs was advantageous in a variety of studies, including neonatal chest studies. Real-time speed allows use of the system in the reading room as a "variable view box."
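    The two operations named above, unsharp mask subtraction and statistical differencing, can be sketched as follows. The window size, gain, target contrast, and gain clamp are illustrative assumptions; the actual system performs these operations in hardware at video rates rather than in NumPy:

    ```python
    import numpy as np

    def box_blur(img, k):
        """Local mean over a (2k+1) x (2k+1) window, with edge padding."""
        pad = np.pad(img, k, mode="edge")
        out = np.zeros_like(img, dtype=float)
        for dy in range(-k, k + 1):
            for dx in range(-k, k + 1):
                out += pad[k + dy : k + dy + img.shape[0],
                           k + dx : k + dx + img.shape[1]]
        return out / (2 * k + 1) ** 2

    def enhance(img, k=3, gain=1.0, target_std=40.0, eps=1e-6):
        """Unsharp masking followed by statistical differencing."""
        img = img.astype(float)
        mean = box_blur(img, k)
        sharp = img + gain * (img - mean)        # unsharp mask: boost detail
        var = box_blur((img - mean) ** 2, k)     # local variance
        local_std = np.sqrt(var) + eps
        # statistical differencing: push local contrast toward target_std,
        # with the per-pixel gain clamped to avoid amplifying flat-region noise
        return mean + (sharp - mean) * np.minimum(target_std / local_std, 4.0)
    ```

    The interactive controls mentioned in the abstract correspond roughly to `k` (aperture size), `gain` (contrast gain), and the clamp on the per-pixel gain (noise reduction).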

  17. Development of a real-time imaging system for hypoxic cell apoptosis

    Directory of Open Access Journals (Sweden)

    Go Kagiya

    2016-01-01

    Hypoxic regions form within tumors owing to imbalances between cell proliferation and angiogenesis, specifically temporary vessel closure or reduced flow due to abnormal vasculature. They create environments in which cancer cells acquire resistance to therapies. Therefore, the development of therapeutic approaches targeting hypoxic cells is one of the most crucial challenges for cancer regression, and screening potential candidates for effective diagnostic modalities even under a hypoxic environment is an important first step. In this study, we describe the development of a real-time imaging system to monitor hypoxic cell apoptosis for such screening. The imaging system is composed of a cyclic luciferase (luc) gene under the control of an improved hypoxia-responsive promoter. The cyclic luc gene product works as a caspase-3 (cas-3) monitor, as it gains luc activity in response to cas-3 activation. The promoter, composed of six hypoxia-responsive elements and the CMV IE1 core promoter, drives effective expression of the cyclic luc gene under hypoxic conditions, enhancing visualization of hypoxic cell apoptosis. We also confirmed real-time imaging of hypoxic cell apoptosis in the spheroid, which shares properties with the tumor. Thus, the constructed system could be a powerful tool for the development of effective anticancer diagnostic modalities.

  18. SU-G-BRA-01: A Real-Time Tumor Localization and Guidance Platform for Radiotherapy Using US and MRI

    International Nuclear Information System (INIS)

    Bednarz, B; Culberson, W; Bassetti, M; McMillan, A; Matrosic, C; Shepard, A; Zagzebski, J; Smith, S; Lee, W; Mills, D; Cao, K; Wang, B; Fiveland, E; Darrow, R; Foo, T

    2016-01-01

    Purpose: To develop and validate a real-time motion management platform for radiotherapy that directly tracks tumor motion using ultrasound and MRI. This will be a cost-effective and non-invasive real-time platform combining the excellent temporal resolution of ultrasound with the excellent soft-tissue contrast of MRI. Methods: A 4D planar ultrasound acquisition during treatment is coupled to a pre-treatment calibration training image set consisting of a simultaneous 4D ultrasound and 4D MRI acquisition. The image sets will be rapidly matched using advanced image and signal processing algorithms, allowing the display of virtual MR images of the tumor/organ motion in real time from an ultrasound acquisition. Results: The completion of this work will result in several innovations including: a (2D) patch-like, MR- and LINAC-compatible 4D planar ultrasound transducer that is electronically steerable for hands-free operation to provide real-time virtual MR and ultrasound imaging for motion management during radiation therapy; a multi-modal tumor localization strategy that uses ultrasound and MRI; and fast and accurate image processing algorithms that provide real-time information about the motion and location of the tumor or related soft-tissue structures within the patient. Conclusion: If successful, the proposed approach will provide real-time guidance for radiation therapy without degrading image or treatment plan quality. The approach would be equally suitable for image-guided proton-beam or heavy-ion-beam therapy. This work is partially funded by NIH grant R01CA190298.

  19. SU-G-BRA-01: A Real-Time Tumor Localization and Guidance Platform for Radiotherapy Using US and MRI

    Energy Technology Data Exchange (ETDEWEB)

    Bednarz, B; Culberson, W; Bassetti, M; McMillan, A; Matrosic, C; Shepard, A; Zagzebski, J [University of Wisconsin, Madison, WI (United States); Smith, S; Lee, W; Mills, D; Cao, K; Wang, B; Fiveland, E; Darrow, R; Foo, T [GE Global Research Center, Niskayuna, NY (United States)

    2016-06-15

    Purpose: To develop and validate a real-time motion management platform for radiotherapy that directly tracks tumor motion using ultrasound and MRI. This will be a cost-effective and non-invasive real-time platform combining the excellent temporal resolution of ultrasound with the excellent soft-tissue contrast of MRI. Methods: A 4D planar ultrasound acquisition during treatment is coupled to a pre-treatment calibration training image set consisting of a simultaneous 4D ultrasound and 4D MRI acquisition. The image sets will be rapidly matched using advanced image and signal processing algorithms, allowing the display of virtual MR images of the tumor/organ motion in real time from an ultrasound acquisition. Results: The completion of this work will result in several innovations including: a (2D) patch-like, MR- and LINAC-compatible 4D planar ultrasound transducer that is electronically steerable for hands-free operation to provide real-time virtual MR and ultrasound imaging for motion management during radiation therapy; a multi-modal tumor localization strategy that uses ultrasound and MRI; and fast and accurate image processing algorithms that provide real-time information about the motion and location of the tumor or related soft-tissue structures within the patient. Conclusion: If successful, the proposed approach will provide real-time guidance for radiation therapy without degrading image or treatment plan quality. The approach would be equally suitable for image-guided proton-beam or heavy-ion-beam therapy. This work is partially funded by NIH grant R01CA190298.
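    The matching step, pairing a live ultrasound frame with its closest pre-treatment calibration frame so the paired MR frame can be displayed as a "virtual MR" image, can be sketched with a simple sum-of-squared-differences search. The toy frames below, which encode a breathing phase as a constant intensity, are purely illustrative; the actual platform uses far more advanced image and signal processing algorithms:

    ```python
    import numpy as np

    def best_match_index(live_us, calib_us_frames):
        """Return the index of the calibration ultrasound frame closest to
        the live frame (sum of squared differences). The MR frame paired
        with that index in the training set is then shown as 'virtual MR'."""
        errs = [np.sum((live_us - f) ** 2) for f in calib_us_frames]
        return int(np.argmin(errs))

    # Toy calibration set: 8 breathing phases, one "frame" per phase.
    phases = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    calib = [p * np.ones((4, 4)) for p in phases]

    # A live frame acquired slightly after phase 3 should match frame 3.
    live = (phases[3] + 0.05) * np.ones((4, 4))
    idx = best_match_index(live, calib)
    ```

    In the real platform this lookup would run at the ultrasound frame rate, which is what makes the virtual MR display real-time.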

  20. Multiprocessor development for robot control

    International Nuclear Information System (INIS)

    Lee, Jong Min; Kim, Byung Soo; Kim, Chang Hoi; Hwang, Suk Yong; Sohn, Surg Won; Yoon, Tae Seob; Lee, Yong Bum; Kim, Woong Ki

    1988-02-01

    A multiprocessor system essential to A.I. (Artificial Intelligence) robot control was developed. A.I. robot control requires very complex real-time control. A multiprocessor system interconnecting many SBCs (Single Board Computers) is much faster and more accurate than a single SBC. Various multiprocessor systems and their applications were compared and discussed. The multiprocessor architecture is specially designed for use in nuclear environments. Its main functions are job distribution, multitasking, and intelligent remote control over optical fiber using the SDLC protocol. The system can be applied to position control for locomotion and manipulation, to data fusion, and to image processing. (Author)
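    The job-distribution function can be sketched with a worker pool. Threads stand in here for the interconnected SBCs, and the job names and payloads are purely illustrative:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def run_job(job):
        """Placeholder for one control task (position control, data fusion,
        image processing, ...); here it just doubles the payload."""
        name, payload = job
        return (name, payload * 2)

    def distribute(jobs, n_workers=4):
        """Distribute independent control jobs across worker threads,
        standing in for the SBCs of the multiprocessor system."""
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            return list(pool.map(run_job, jobs))

    jobs = [("servo", 10), ("vision", 21), ("fusion", 7)]
    results = distribute(jobs)
    ```

    `pool.map` preserves job order, so results can be matched back to the subsystems that submitted them.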