WorldWideScience

Sample records for real-time 3d virtual

  1. Real-Time 3D Motion capture by monocular vision and virtual rendering

    OpenAIRE

    Gomez Jauregui , David Antonio; Horain , Patrick

    2012-01-01

    International audience; Avatars in networked 3D virtual environments allow users to interact over the Internet and to get some feeling of virtual telepresence. However, avatar control may be tedious. Motion capture systems based on 3D sensors have recently reached the consumer market, but webcams and camera-phones are more widespread and cheaper. The proposed demonstration aims at animating a user's avatar from real time 3D motion capture by monoscopic computer vision, thus allowing virtual t...

  2. Integration of virtual and real scenes within an integral 3D imaging environment

    Science.gov (United States)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television avoiding adverse psychological effects. To create truly engaging three-dimensional television programs, a virtual studio that performs the task of generating, editing and integrating 3D content involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The depth calculation method from disparity and the multiple-baseline method used to improve the precision of depth estimation are also presented. The concept of colour SSD and a further improvement in its precision are proposed and verified.
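The depth-from-disparity step described above can be pictured with a minimal sketch: a plain SSD block match along one scanline (standing in for the paper's colour SSD and multiple-baseline refinement, which are omitted here), followed by the classic pinhole relation Z = f·B/d. All names and values below are invented for illustration, not taken from the paper.

```python
def ssd(block_a, block_b):
    """Sum of squared differences between two equal-length pixel blocks."""
    return sum((a - b) ** 2 for a, b in zip(block_a, block_b))

def best_disparity(left_row, right_row, x, half, max_d):
    """Find the disparity minimising SSD for the block centred at x in the left row."""
    ref = left_row[x - half:x + half + 1]
    best_d, best_cost = 0, float("inf")
    for d in range(0, max_d + 1):
        if x - d - half < 0:
            break  # candidate block would fall off the image
        cand = right_row[x - d - half:x - d + half + 1]
        cost = ssd(ref, cand)
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d

def depth_from_disparity(f_px, baseline_m, d_px):
    """Pinhole relation Z = f * B / d (focal length in pixels, baseline in metres)."""
    return f_px * baseline_m / d_px if d_px > 0 else float("inf")
```

With a synthetic pair where the right scanline is the left one shifted by four pixels, `best_disparity` recovers d = 4, and a hypothetical 800 px focal length with a 0.1 m baseline gives Z = 800 × 0.1 / 4 = 20 m.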

  3. Real-time 3-dimensional virtual reality navigation system with open MRI for breast-conserving surgery

    International Nuclear Information System (INIS)

    Tomikawa, Morimasa; Konishi, Kozo; Ieiri, Satoshi; Hong, Jaesung; Uemura, Munenori; Hashizume, Makoto; Shiotani, Satoko; Tokunaga, Eriko; Maehara, Yoshihiko

    2011-01-01

    We report here the early experiences using a real-time three-dimensional (3D) virtual reality navigation system with open magnetic resonance imaging (MRI) for breast-conserving surgery (BCS). Two patients with a non-palpable MRI-detected breast tumor underwent BCS under the guidance of the navigation system. An initial MRI for the breast tumor using skin-affixed markers was performed immediately prior to excision. A percutaneous intramammary dye marker was applied to delineate an excision line, and the computer software '3D Slicer' generated a real-time 3D virtual reality model of the tumor and the puncture needle in the breast. Under guidance by the navigation system, marking procedures were performed without any difficulties. Fiducial registration errors were 3.00 mm for patient no.1, and 4.07 mm for patient no.2. The real-time 3D virtual reality navigation system with open MRI is feasible for safe and accurate excision of non-palpable MRI-detected breast tumors. (author)

  4. Development of real-time motion capture system for 3D on-line games linked with virtual character

    Science.gov (United States)

    Kim, Jong Hyeong; Ryu, Young Kee; Cho, Hyung Suck

    2004-10-01

    Motion tracking is becoming an essential part of entertainment, medical, sports, education and industrial applications with the development of 3-D virtual reality. Virtual human characters in digital animation and game applications have been controlled through interface devices such as mice, joysticks and MIDI sliders. These devices cannot make a virtual human character move smoothly and naturally. Furthermore, high-end human motion capture systems on the commercial market are expensive and complicated. In this paper, we propose a practical and fast motion capture system consisting of optical sensors, and link the captured data to a 3-D game character in real time. The prototype experimental setup was successfully applied to a boxing game, which requires very fast movement of the human character.

  5. PRIMAS: a real-time 3D motion-analysis system

    Science.gov (United States)

    Sabel, Jan C.; van Veenendaal, Hans L. J.; Furnee, E. Hans

    1994-03-01

    The paper describes a CCD TV-camera-based system for real-time multicamera 2D detection of retro-reflective targets and software for accurate and fast 3D reconstruction. Applications of this system can be found in sports, biomechanics, rehabilitation research, and various other areas of science and industry. The new real-time 3D capability opens an even broader perspective of application areas: animations in virtual reality are an interesting example. After presenting an overview of the hardware and the camera calibration method, the paper focuses on the real-time algorithms used for matching the images and the subsequent 3D reconstruction of marker positions. Using a calibrated setup of two cameras, it is now possible to track at least ten markers at 100 Hz. Performance is limited by the visibility of the markers, which could be improved by adding a third camera.
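For a calibrated two-camera setup like the one described, the 3D reconstruction of a matched marker can be illustrated with the classic midpoint method: each camera's 2D detection defines a viewing ray, and the 3D position is taken as the midpoint of the closest points on the two rays. This is a hedged sketch of the general technique, not PRIMAS's actual algorithm; all coordinates are invented.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint of the closest points on two viewing rays.

    Each ray is camera centre c + s * direction d. With noise-free,
    perfectly calibrated rays the midpoint is the exact 3D marker
    position; with noise it is the least-squares compromise.
    """
    w = [a - b for a, b in zip(c1, c2)]          # c1 - c2
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    p, q = -dot(d1, w), -dot(d2, w)
    den = a * c - b * b                          # zero only for parallel rays
    s = (c * p - b * q) / den
    t = (b * p - a * q) / den
    p1 = [ci + s * di for ci, di in zip(c1, d1)]
    p2 = [ci + t * di for ci, di in zip(c2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]
```

For two cameras at (0,0,0) and (1,0,0) whose rays both pass through the point (1,2,5), the function returns exactly that point.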

  6. 2D virtual texture on 3D real object with coded structured light

    Science.gov (United States)

    Molinier, Thierry; Fofi, David; Salvi, Joaquim; Gorria, Patrick

    2008-02-01

    Augmented reality can be used to improve color segmentation on the human body or on precious artifacts that must not be touched. We propose a technique to project a synthesized texture onto a real object without contact; it can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and capturing images with a camera, a large number of correspondences can be found and the 3D points can be reconstructed. We aim to determine these points of correspondence between cameras and projector from a scene without explicit points and normals. We then project an adjusted texture onto the real object's surface. We propose a global and automatic method to virtually texture a 3D real object.

  7. Real-Time 3D Visualization

    Science.gov (United States)

    1997-01-01

    Butler Hine, former director of the Intelligent Mechanism Group (IMG) at Ames Research Center, and five others partnered to start Fourth Planet, Inc., a visualization company that specializes in the intuitive visual representation of dynamic, real-time data over the Internet and Intranet. Over a five-year period, the then NASA researchers performed ten robotic field missions in harsh climes to mimic the end-to-end operations of automated vehicles trekking across another world under control from Earth. The core software technology for these missions was the Virtual Environment Vehicle Interface (VEVI). Fourth Planet has released VEVI4, the fourth generation of the VEVI software, and NetVision. VEVI4 is a cutting-edge computer graphics simulation and remote control applications tool. The NetVision package allows large companies to view and analyze in virtual 3D space such things as the health or performance of their computer network or locate a trouble spot on an electric power grid. Other products are forthcoming. Fourth Planet is currently part of the NASA/Ames Technology Commercialization Center, a business incubator for start-up companies.

  8. V-Man Generation for 3-D Real Time Animation. Chapter 5

    Science.gov (United States)

    Nebel, Jean-Christophe; Sibiryakov, Alexander; Ju, Xiangyang

    2007-01-01

    The V-Man project has developed an intuitive authoring and intelligent system to create, animate, control and interact in real time with a new generation of 3D virtual characters: the V-Men. It combines several innovative algorithms from Virtual Reality, Physical Simulation, Computer Vision, Robotics and Artificial Intelligence. Given a high-level task like "walk to that spot" or "get that object", a V-Man generates the complete animation required to accomplish the task. V-Men synthesise motion at runtime according to their environment, their task and their physical parameters, drawing upon their unique set of skills established during character creation. The key to the system is the automated creation of realistic V-Men without requiring the expertise of an animator. It is based on real human data captured by 3D static and dynamic body scanners, which is then processed to generate firstly animatable body meshes, secondly 3D garments and finally skinned body meshes.

  9. [Real time 3D echocardiography]

    Science.gov (United States)

    Bauer, F.; Shiota, T.; Thomas, J. D.

    2001-01-01

    Three-dimensional representation of the heart is a long-standing concern. Usually, 3D reconstruction of the cardiac mass is made by successive acquisition of 2D sections, whose spatial localisation and orientation require complex guiding systems. More recently, the concept of volumetric acquisition has been introduced. A matrix emitter-receiver probe with parallel data processing provides instantaneous acquisition of a pyramidal 64 degrees x 64 degrees volume. The image is rendered in real time and is composed of 3 planes (including the B and C planes), which can be displaced in any spatial direction at any time during acquisition. The flexibility of this acquisition system allows volume and mass measurement with greater accuracy and reproducibility, limiting inter-observer variability. Free navigation of the planes of investigation allows reconstruction for qualitative and quantitative analysis of valvular heart disease and other pathologies. Although real-time 3D echocardiography is ready for clinical use, some improvements are still necessary to make it more user-friendly. Real-time 3D echocardiography could then become an essential tool for the understanding, diagnosis and management of patients.

  10. Integration of real-time 3D capture, reconstruction, and light-field display

    Science.gov (United States)

    Zhang, Zhaoxing; Geng, Zheng; Li, Tuotuo; Pei, Renjing; Liu, Yongchun; Zhang, Xiao

    2015-03-01

    Effective integration of 3D acquisition, reconstruction (modeling) and display technologies into a seamless system provides an augmented experience of visualizing and analyzing real objects and scenes with realistic 3D sensation. Applications can be found in medical imaging, gaming, virtual or augmented reality and hybrid simulations. Although 3D acquisition, reconstruction, and display technologies have gained significant momentum in recent years, there seems to be a lack of attention on synergistically combining these components into an "end-to-end" 3D visualization system. We designed, built and tested an integrated 3D visualization system that is able to capture 3D light-field images in real time, perform 3D reconstruction to build a 3D model of the objects, and display the model on a large autostereoscopic screen. In this article, we present our system architecture and component designs, hardware/software implementations, and experimental results. We elaborate on our recent progress on sparse-camera-array light-field 3D acquisition, real-time dense 3D reconstruction, and autostereoscopic multi-view 3D display. A prototype is finally presented with test results to illustrate the effectiveness of the proposed integrated 3D visualization system.

  11. Augmented Virtuality: A Real-time Process for Presenting Real-world Visual Sensory Information in an Immersive Virtual Environment for Planetary Exploration

    Science.gov (United States)

    McFadden, D.; Tavakkoli, A.; Regenbrecht, J.; Wilson, B.

    2017-12-01

    Virtual Reality (VR) and Augmented Reality (AR) applications have recently seen impressive growth, thanks to the advent of commercial Head Mounted Displays (HMDs). This new visualization era has opened the possibility of presenting researchers from multiple disciplines with data visualization techniques not possible via traditional 2D screens. In a purely VR environment, researchers are presented with the visual data in a virtual environment, whereas in a purely AR application a virtual object is projected into the real world, with which researchers can interact. There are several limitations to purely VR or AR applications in the context of remote planetary exploration. For example, in a purely VR environment, the contents of the planet surface (e.g. rocks, terrain, or other features) must be created off-line from a multitude of images using image processing techniques to generate the 3D mesh data that will populate the virtual surface of the planet. This process usually takes a tremendous amount of computational resources and cannot be delivered in real time. As an alternative, video frames may be superimposed on the virtual environment to save processing time. However, such rendered video frames lack 3D visual information, i.e. depth. In this paper, we present a technique that utilizes a remotely situated robot's stereoscopic cameras to provide a live visual feed from the real world into the virtual environment in which planetary scientists are immersed. Moreover, the proposed technique blends the virtual environment with the real world in such a way as to preserve both the depth and visual information from the real world while allowing for the sensation of immersion when the entire sequence is viewed via an HMD such as the Oculus Rift. The figure shows the virtual environment with an overlay of the real-world stereoscopic video presented in real time into the virtual environment; notice the preservation of the object

  12. 3D Hand Gesture Analysis through a Real-Time Gesture Search Engine

    Directory of Open Access Journals (Sweden)

    Shahrouz Yousefi

    2015-06-01

    Full Text Available 3D gesture recognition and tracking are highly desired features of interaction design in future mobile and smart environments. Specifically, in virtual/augmented reality applications, intuitive interaction with the physical space seems unavoidable, and 3D gestural interaction might be the most effective alternative to current input facilities such as touchscreens. In this paper, we introduce a novel solution for real-time 3D gesture-based interaction by finding the best match from an extremely large gesture database. This database includes images of various articulated hand gestures with annotated 3D position/orientation parameters of the hand joints. Our matching algorithm is based on hierarchical scoring of low-level edge-orientation features between the query frames and the database, retrieving the best match. Once the best match is found in the database at each moment, the pre-recorded 3D motion parameters can instantly be used for natural interaction. The proposed bare-hand interaction technology performs in real time with high accuracy using an ordinary camera.
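The retrieval step can be pictured as a nearest-neighbour search over annotated database entries. The sketch below flattens the paper's hierarchical edge-orientation scoring into a single L1 comparison of toy orientation histograms; all data, field names and gesture labels are invented for illustration.

```python
def score(query_hist, db_hist):
    """Lower is better: L1 distance between edge-orientation histograms."""
    return sum(abs(a - b) for a, b in zip(query_hist, db_hist))

def best_match(query_hist, database):
    """database: list of (histogram, hand_pose_params) tuples.

    Returns the entry whose histogram best matches the query; its
    pre-recorded pose parameters can then drive the interaction.
    """
    return min(database, key=lambda entry: score(query_hist, entry[0]))

# Toy database: each histogram is annotated with hypothetical 3D hand
# parameters (here just a label and a yaw angle).
db = [
    ([8, 1, 0, 2], {"gesture": "fist",  "yaw": 0.0}),
    ([1, 7, 3, 0], {"gesture": "point", "yaw": 0.4}),
    ([0, 2, 6, 5], {"gesture": "open",  "yaw": -0.2}),
]

hist, pose = best_match([1, 6, 4, 0], db)  # query resembles "point"
```

A real system would index millions of entries with a hierarchical or approximate search rather than the linear scan shown here.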

  13. Real-Time 3D Face Acquisition Using Reconfigurable Hybrid Architecture

    Directory of Open Access Journals (Sweden)

    Mitéran Johel

    2007-01-01

    Full Text Available Acquiring 3D data of the human face is a general problem with applications in face recognition, virtual reality, and many other areas. It can be solved using stereovision, a technique that acquires three-dimensional data from two cameras. The aim is to implement an algorithmic chain that reconstructs a three-dimensional space from two two-dimensional images coming from the two cameras. Several implementations have already been considered. We propose a new, simple real-time implementation based on a hybrid architecture (FPGA-DSP), allowing for embedded, reconfigurable processing. We then show that our method provides a dense and reliable depth map of the face and can be implemented on an embedded architecture. A study of various architectures led us to a judicious choice that obtains the desired result. The real-time data processing is implemented in an embedded architecture. We obtain a dense face disparity map, precise enough for the considered applications (multimedia, virtual worlds, biometrics), using a reliable method.

  14. Real-time virtual EAST physical experiment system

    Energy Technology Data Exchange (ETDEWEB)

    Li, Dan, E-mail: lidan@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Xiao, B.J., E-mail: bjxiao@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); School of Nuclear Science and Technology, University of Science and Technology of China, Hefei, Anhui (China); Xia, J.Y., E-mail: jyxia@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Yang, Fei, E-mail: fyang@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Department of Computer Science, Anhui Medical University, Hefei, Anhui (China)

    2014-05-15

    Graphical abstract: - Highlights: • A 3D model of the experimental advanced superconducting tokamak is established. • Interaction behaviour is created so that users can get information from the database. • The system integrates data acquisition, plasma shape visualization and simulation. • The browser-oriented system is web-based and more interactive, immersive and convenient. • The system provides the framework for a virtual physical experiment environment. - Abstract: As a large fusion device, the experimental advanced superconducting tokamak (EAST) has a complicated internal structure that is not easily accessible. Moreover, its various diagnostic systems and complicated configuration are inconvenient for scientists who are unfamiliar with the system but interested in the data. We propose a virtual system to display a 3D model of the EAST facility and enable people to view its inner structure and access information on its components from various viewpoints. We also provide most of the diagnostic configuration details together with their signal names and physical properties. Compared to the previous ways of viewing information by reference to collected drawings and videos, the virtual EAST system is more interactive and immersive. We constructed the browser-oriented virtual EAST physical experiment system, integrating real-time experiment data acquisition, plasma shape visualization and experiment result simulation in order to reproduce physical experiments in a web browser. The system uses a B/S (Browser/Server) structure in combination with virtual reality technology: VRML (Virtual Reality Modeling Language) and Java 3D. In order to avoid bandwidth limits across the internet, we balanced the rendering speed and the precision of the virtual model components. Any registered user can view the experimental information visually and efficiently by logging into the system through a web browser. The establishment of the system provides the

  15. Real-time virtual EAST physical experiment system

    International Nuclear Information System (INIS)

    Li, Dan; Xiao, B.J.; Xia, J.Y.; Yang, Fei

    2014-01-01

    Graphical abstract: - Highlights: • A 3D model of the experimental advanced superconducting tokamak is established. • Interaction behaviour is created so that users can get information from the database. • The system integrates data acquisition, plasma shape visualization and simulation. • The browser-oriented system is web-based and more interactive, immersive and convenient. • The system provides the framework for a virtual physical experiment environment. - Abstract: As a large fusion device, the experimental advanced superconducting tokamak (EAST) has a complicated internal structure that is not easily accessible. Moreover, its various diagnostic systems and complicated configuration are inconvenient for scientists who are unfamiliar with the system but interested in the data. We propose a virtual system to display a 3D model of the EAST facility and enable people to view its inner structure and access information on its components from various viewpoints. We also provide most of the diagnostic configuration details together with their signal names and physical properties. Compared to the previous ways of viewing information by reference to collected drawings and videos, the virtual EAST system is more interactive and immersive. We constructed the browser-oriented virtual EAST physical experiment system, integrating real-time experiment data acquisition, plasma shape visualization and experiment result simulation in order to reproduce physical experiments in a web browser. The system uses a B/S (Browser/Server) structure in combination with virtual reality technology: VRML (Virtual Reality Modeling Language) and Java 3D. In order to avoid bandwidth limits across the internet, we balanced the rendering speed and the precision of the virtual model components. Any registered user can view the experimental information visually and efficiently by logging into the system through a web browser. The establishment of the system provides the

  16. Application of advanced virtual reality and 3D computer assisted technologies in tele-3D-computer assisted surgery in rhinology.

    Science.gov (United States)

    Klapan, Ivica; Vranjes, Zeljko; Prgomet, Drago; Lukinović, Juraj

    2008-03-01

    The real-time requirement means that the simulation should be able to follow the actions of a user who may be moving in the virtual environment. The computer system should also store in its memory a three-dimensional (3D) model of the virtual environment. A real-time virtual reality system will then update the 3D graphic visualization as the user moves, so that an up-to-date visualization is always shown on the computer screen. Upon completion of the tele-operation, the surgeon compares the preoperative and postoperative images and models of the operative field, and studies video records of the procedure itself. Using intraoperative records, animated images of the real tele-procedure performed can be created. Virtual surgery offers the possibility of preoperative planning in rhinology. The intraoperative use of computers in real time requires the development of appropriate hardware and software to connect the medical instrumentarium to the computer, and to operate the computer through the connected instruments and sophisticated multimedia interfaces.

  17. VERSE - Virtual Equivalent Real-time Simulation

    Science.gov (United States)

    Zheng, Yang; Martin, Bryan J.; Villaume, Nathaniel

    2005-01-01

    Distributed real-time simulations provide important timing validation and hardware-in-the-loop results for the spacecraft flight software development cycle. Occasionally, the need for higher-fidelity modeling and more comprehensive debugging capabilities, combined with a limited amount of computational resources, calls for a non-real-time simulation environment that mimics the real-time environment. By creating a non-real-time environment that accommodates simulations and flight software designed for a multi-CPU real-time system, we can save development time, cut mission costs, and reduce the likelihood of errors. This paper presents such a solution: the Virtual Equivalent Real-time Simulation Environment (VERSE). VERSE turns the real-time operating system RTAI (Real-time Application Interface) into an event-driven simulator that runs in virtual real time. Designed to keep the original RTAI architecture as intact as possible, and therefore inheriting RTAI's many capabilities, VERSE was implemented with remarkably little change to the RTAI source code. This small footprint, together with use of the same API, allows users to easily run the same application in both real-time and virtual-time environments. VERSE has been used to build a workstation testbed for NASA's Space Interferometry Mission (SIM PlanetQuest) instrument flight software. With its flexible simulation controls and inexpensive setup and replication costs, VERSE will become an invaluable tool in future mission development.

  18. A LOW-COST AND LIGHTWEIGHT 3D INTERACTIVE REAL ESTATE-PURPOSED INDOOR VIRTUAL REALITY APPLICATION

    Directory of Open Access Journals (Sweden)

    K. Ozacar

    2017-11-01

    Full Text Available Interactive 3D architectural indoor design has become more popular since it began to benefit from Virtual Reality (VR) technologies. VR brings computer-generated 3D content to real-life scale and enables users to observe immersive indoor environments and modify them directly. This opportunity enables buyers to purchase a property off-the-plan more cheaply through virtual models. Instead of showing the property through 2D plans or renders, the visualized interior architecture of an unbuilt, on-sale property is demonstrated beforehand so that investors get an impression as if they were in the physical building. However, current applications either use highly resource-consuming software, are non-interactive, or require a specialist to create such environments. In this study, we have created a low-cost, high-quality, fully interactive real-estate-purposed VR application that provides a realistic interior architecture of the property by using free and lightweight software: Sweet Home 3D and Unity. A preliminary study showed that participants generally liked the proposed real-estate-purposed VR application, and that it satisfied the expectations of property buyers.

  19. a Low-Cost and Lightweight 3d Interactive Real Estate-Purposed Indoor Virtual Reality Application

    Science.gov (United States)

    Ozacar, K.; Ortakci, Y.; Kahraman, I.; Durgut, R.; Karas, I. R.

    2017-11-01

    Interactive 3D architectural indoor design has become more popular since it began to benefit from Virtual Reality (VR) technologies. VR brings computer-generated 3D content to real-life scale and enables users to observe immersive indoor environments and modify them directly. This opportunity enables buyers to purchase a property off-the-plan more cheaply through virtual models. Instead of showing the property through 2D plans or renders, the visualized interior architecture of an unbuilt, on-sale property is demonstrated beforehand so that investors get an impression as if they were in the physical building. However, current applications either use highly resource-consuming software, are non-interactive, or require a specialist to create such environments. In this study, we have created a low-cost, high-quality, fully interactive real-estate-purposed VR application that provides a realistic interior architecture of the property by using free and lightweight software: Sweet Home 3D and Unity. A preliminary study showed that participants generally liked the proposed real-estate-purposed VR application, and that it satisfied the expectations of property buyers.

  20. Real-time markerless tracking for augmented reality: the virtual visual servoing framework.

    Science.gov (United States)

    Comport, Andrew I; Marchand, Eric; Pressigout, Muriel; Chaumette, François

    2006-01-01

    Tracking is a very important research subject in a real-time augmented reality context. The main requirements for trackers are high accuracy and low latency at a reasonable cost. In order to address these issues, a real-time, robust, and efficient 3D model-based tracking algorithm is proposed for a "video see-through" monocular vision system. Tracking objects in the scene amounts to calculating the pose between the camera and the objects; virtual objects can then be projected into the scene using this pose. Here, nonlinear pose estimation is formulated by means of a virtual visual servoing approach. In this context, the derivation of point-to-curve interaction matrices is given for different 3D geometrical primitives including straight lines, circles, cylinders, and spheres. A local moving-edges tracker is used to provide real-time tracking of points normal to the object contours. Robustness is obtained by integrating an M-estimator into the visual control law via an iteratively reweighted least-squares implementation. This approach is then extended to address the 3D model-free augmented reality problem. The method presented in this paper has been validated on several complex image sequences including outdoor environments. Results show the method to be robust to occlusion, changes in illumination, and mistracking.
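The M-estimator / iteratively reweighted least-squares (IRLS) idea behind such a robust control law can be illustrated on a toy problem: estimating a robust scalar location in the presence of an outlier. The Huber weight function below is one common choice and stands in for the tracker's actual estimator; it is a sketch of the general technique, not the paper's implementation.

```python
def huber_weight(residual, k=1.345):
    """Huber influence weight: 1 for small residuals, downweighted beyond k."""
    r = abs(residual)
    return 1.0 if r <= k else k / r

def irls_mean(measurements, iters=20):
    """Robust location estimate via IRLS.

    Start from the plain least-squares mean, then repeatedly reweight
    each measurement by how well it agrees with the current estimate.
    Outliers receive ever smaller weights and stop dominating the fit.
    """
    est = sum(measurements) / len(measurements)  # plain LS initialisation
    for _ in range(iters):
        w = [huber_weight(m - est) for m in measurements]
        est = sum(wi * m for wi, m in zip(w, measurements)) / sum(w)
    return est
```

On data clustered near 1.0 with a single gross outlier at 10.0, the plain mean is pulled to 2.5 while the IRLS estimate stays close to the inlier cluster.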

  1. Interactive Scientific Visualization in 3D Virtual Reality Model

    Directory of Open Access Journals (Sweden)

    Filip Popovski

    2016-11-01

    Full Text Available Scientific visualization in virtual reality technology is the graphical representation of a virtual environment in the form of images or animation, which can be displayed with various devices such as a Head Mounted Display (HMD) or monitors that can show a three-dimensional world. Real-time interaction is a desirable capability for scientific visualization and virtual reality in which we are immersed, and it makes the research process easier. In this scientific paper, the interactions between the user and objects in the virtual environment occur in real time, which gives the user a sense of reality. The Quest3D VR software package is used, and the movement of the user through the virtual environment, the impossibility of walking through solid objects, and methods for grabbing and displacing objects are programmed so that all interactions between them are possible. Finally, critical analyses of all these techniques were made on various computer systems, and excellent results were obtained.

  2. Research on 3D virtual campus scene modeling based on 3ds Max and VRML

    Science.gov (United States)

    Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue

    2015-12-01

    With the rapid development of modern technology, digital information management and virtual reality simulation technology have become research hotspots. A 3D virtual campus model can not only represent real-world objects naturally, realistically and vividly, but can also expand the campus in time and space, combining the school environment with information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special terrain, etc. Dynamic interactive functionality is then realized by programming the object models from 3ds Max in VRML. This research focuses on virtual campus scene modeling technology and VRML scene design, as well as optimization strategies for the various real-time processing techniques used in the scene design process, preserving texture-map image quality while improving the running speed of image texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.

  3. 3D VISUALIZATION FOR VIRTUAL MUSEUM DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    M. Skamantzari

    2016-06-01

    Full Text Available Interest in the development of virtual museums is nowadays rising rapidly. During the last decades there have been numerous efforts concerning the 3D digitization of cultural heritage and the development of virtual museums, digital libraries and serious games. A realistic result has always been the main concern, and a real challenge when it comes to 3D modelling of monuments, artifacts and especially sculptures. This paper implements, investigates and evaluates the results of the photogrammetric methods and 3D surveys that were used for the development of a virtual museum. Moreover, the decisions, actions, methodology and main elements that this kind of application should include and take into consideration are described and analysed. It is believed that the outcomes of this application will be useful to researchers who plan to develop and further improve virtual museums and the mass production of 3D models.

  4. ArtifactVis2: Managing real-time archaeological data in immersive 3D environments

    KAUST Repository

    Smith, Neil

    2013-10-01

    In this paper, we present a stereoscopic research and training environment for archaeologists called ArtifactVis2. This application enables the management and visualization of diverse types of cultural datasets within a collaborative virtual 3D system. The archaeologist is fully immersed in a large-scale visualization of on-going excavations. Massive 3D datasets are seamlessly rendered in real-time with field recorded GIS data, 3D artifact scans and digital photography. Dynamic content can be visualized and cultural analytics can be performed on archaeological datasets collected through a rigorous digital archaeological methodology. The virtual collaborative environment provides a menu driven query system and the ability to annotate, markup, measure, and manipulate any of the datasets. These features enable researchers to re-experience and analyze the minute details of an archaeological site's excavation. It enhances their visual capacity to recognize deep patterns and structures and perceive changes and reoccurrences. As a complement and development from previous work in the field of 3D immersive archaeological environments, ArtifactVis2 provides a GIS based immersive environment that taps directly into archaeological datasets to investigate cultural and historical issues of ancient societies and cultural heritage in ways not possible before. © 2013 IEEE.

  5. Simulation Study of Real Time 3-D Synthetic Aperture Sequential Beamforming for Ultrasound Imaging

    DEFF Research Database (Denmark)

    Hemmsen, Martin Christian; Rasmussen, Morten Fischer; Stuart, Matthias Bo

    2014-01-01

    This paper presents a new beamforming method for real-time three-dimensional (3-D) ultrasound imaging using a 2-D matrix transducer. To obtain images with sufficient resolution and contrast, several thousand elements are needed. The proposed method reduces the required channel count from … in the main system. The real-time imaging capability is achieved using a synthetic aperture beamforming technique, utilizing the transmit events to generate a set of virtual elements that in combination can generate an image. The two core capabilities in combination are named Synthetic Aperture Sequential Beamforming (SASB). Simulations are performed to evaluate the image quality of the presented method in comparison to parallel beamforming utilizing 16 receive beamformers. As indicators for image quality, the detail resolution and cystic resolution are determined for a set of scatterers at a depth of 90 mm …

  6. Real-time tracking for virtual environments using scaat kalman filtering and unsynchronised cameras

    DEFF Research Database (Denmark)

    Rasmussen, Niels Tjørnly; Störring, Morritz; Moeslund, Thomas B.

    2006-01-01

    This paper presents a real-time outside-in camera-based tracking system for wireless 3D pose tracking of a user’s head and hand in a virtual environment. The system uses four unsynchronised cameras as sensors and passive retroreflective markers arranged in rigid bodies as targets. In order to ach...
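
    The SCAAT (single-constraint-at-a-time) Kalman filtering named in the title fuses one scalar measurement per update instead of waiting for a complete pose observation, which is what makes unsynchronised cameras usable. Below is a minimal sketch of one such update; the dimensions, noise levels and constant-position motion model are chosen purely for illustration and are not taken from the paper.

```python
import numpy as np

def scaat_update(x, P, z, H, R, Q, dt):
    """One SCAAT step: predict with a constant-position model, then
    fuse a single scalar constraint z = H @ x + noise (e.g. one image
    coordinate from one camera) rather than a full pose measurement."""
    P = P + Q * dt                      # uncertainty grows with elapsed time
    S = float(H @ P @ H + R)            # innovation variance (scalar)
    K = (P @ H) / S                     # Kalman gain
    x = x + K * float(z - H @ x)
    P = (np.eye(len(x)) - np.outer(K, H)) @ P
    return x, P

# Toy run: estimate a 2-D marker position from alternating single-axis
# observations, as if two unsynchronised cameras each constrained one
# coordinate at a time.
np.random.seed(0)
x, P = np.zeros(2), np.eye(2)
Q, R = 0.01 * np.eye(2), 0.05
true_pos = np.array([1.0, -0.5])
for k in range(200):
    H = np.array([1.0, 0.0]) if k % 2 == 0 else np.array([0.0, 1.0])
    z = float(H @ true_pos) + np.random.normal(0.0, 0.05)
    x, P = scaat_update(x, P, z, H, R, Q, dt=0.01)
```

    Alternating single-axis constraints are enough to pull the 2-D estimate toward the true position, without the cameras ever delivering a simultaneous measurement.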

  7. Novel methods for real-time 3D facial recognition

    OpenAIRE

    Rodrigues, Marcos; Robinson, Alan

    2010-01-01

    In this paper we discuss our approach to real-time 3D face recognition. We argue the need for real time operation in a realistic scenario and highlight the required pre- and post-processing operations for effective 3D facial recognition. We focus attention to some operations including face and eye detection, and fast post-processing operations such as hole filling, mesh smoothing and noise removal. We consider strategies for hole filling such as bilinear and polynomial interpolation and Lapla...

  8. Virtual timers in hierarchical real-time systems

    NARCIS (Netherlands)

    Heuvel, van den M.M.H.P.; Holenderski, M.J.; Cools, W.A.; Bril, R.J.; Lukkien, J.J.; Zhu, D.

    2009-01-01

    Hierarchical scheduling frameworks (HSFs) provide means for composing complex real-time systems from welldefined subsystems. This paper describes an approach to provide hierarchically scheduled real-time applications with virtual event timers, motivated by the need for integrating priority

  9. LivePhantom: Retrieving Virtual World Light Data to Real Environments.

    Directory of Open Access Journals (Sweden)

    Hoshang Kolivand

    Full Text Available To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real-time depth detection to cast virtual shadows on virtual and real environments. A Kinect camera was used to produce a depth map for the physical scene, mixed into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene, enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown and the findings are assessed drawing upon qualitative and quantitative methods, making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.

  10. Handheld real-time volumetric 3-D gamma-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Haefner, Andrew, E-mail: ahaefner@lbl.gov [Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Barnowski, Ross [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720 (United States); Luke, Paul; Amman, Mark [Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States); Vetter, Kai [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720 (United States); Lawrence Berkeley National Lab – Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720 (United States)

    2017-06-11

    This paper presents the concept of real-time fusion of gamma-ray imaging and visual scene data for a hand-held mobile Compton imaging system in 3-D. The ability to obtain and integrate both gamma-ray and scene data from a mobile platform enables improved capabilities in the localization and mapping of radioactive materials. This not only enhances the ability to localize these materials, but it also provides important contextual information of the scene which once acquired can be reviewed and further analyzed subsequently. To demonstrate these concepts, the high-efficiency multimode imager (HEMI) is used in a hand-portable implementation in combination with a Microsoft Kinect sensor. This sensor, in conjunction with open-source software, provides the ability to create a 3-D model of the scene and to track the position and orientation of HEMI in real-time. By combining the gamma-ray data and visual data, accurate 3-D maps of gamma-ray sources are produced in real-time. This approach is extended to map the location of radioactive materials within objects with unknown geometry.
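
    The fusion step described above hinges on expressing gamma-ray observations in the world frame of the reconstructed scene using the pose tracked for the imager. A minimal sketch of that coordinate transform follows; the pose values and point are invented for illustration.

```python
import numpy as np

def detector_to_world(points_det, R, t):
    """Map 3-D points (e.g. back-projected source estimates) from the
    detector frame into the world frame of the reconstructed scene,
    given the tracked pose (rotation R, translation t) of the imager."""
    return points_det @ R.T + t

# Hypothetical pose: imager rotated 90 degrees about z and shifted.
theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.5, 0.0, 1.0])
src_det = np.array([[1.0, 0.0, 0.0]])   # source seen 1 m ahead of the detector
world = detector_to_world(src_det, R, t)   # -> approximately [[0.5, 1.0, 1.0]]
```

    Applying this transform to every gamma-ray event as the pose updates is what lets the source map stay registered to the 3-D scene model while the imager moves.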

  11. A real-time virtual delivery system for photon radiotherapy delivery monitoring

    Directory of Open Access Journals (Sweden)

    Feng Shi

    2014-03-01

    Full Text Available Purpose: Treatment delivery monitoring is important for radiotherapy, as it enables catching dosimetric errors at the earliest possible opportunity. This project develops a virtual delivery system to monitor the dose delivery process of photon radiotherapy in real time using a GPU-based Monte Carlo (MC) method. Methods: The simulation process consists of 3 parallel CPU threads. A thread T1 is responsible for communication with a linac, acquiring a set of linac status parameters, e.g. gantry angles, MLC configurations, and beam MUs, every 20 ms. Since linac vendors currently do not offer an interface to acquire data in real time, we mimic this process by fetching information from a linac dynalog file at the set frequency. The instantaneous beam fluence map (FM) is calculated based on these parameters. An FM buffer is also created in T1 and the instantaneous FM is accumulated into it. This process continues until a ready signal is received from thread T2, on which an in-house developed MC dose engine executes on the GPU. At that moment, the accumulated FM is transferred to T2 for dose calculation, and the FM buffer in T1 is cleared. Once the dose calculation finishes, the resulting 3D dose distribution is directed to thread T3, which displays it in three orthogonal planes in color wash overlaid on the CT image. This process continues to monitor the 3D dose distribution in real time. Results: An IMRT and a VMAT case used in our patient-specific QA are studied. Maximum dose differences between our system and the treatment planning system are 0.98% and 1.58% for the IMRT and VMAT cases, respectively. The update frequency is >10 Hz and the relative uncertainty level is 2%. Conclusion: By embedding a GPU-based MC code in a novel data/work flow, it is possible to achieve real-time MC dose calculations to monitor the delivery process.
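
    The T1/T2 hand-off described in the Methods section is a producer-consumer pattern: T1 keeps accumulating instantaneous fluence maps while the GPU dose engine is busy, and the buffer is swapped out atomically when T2 signals ready. A minimal sketch of that buffering logic (map size and update counts are arbitrary, not from the paper):

```python
import threading
import numpy as np

class FluenceAccumulator:
    """Accumulate instantaneous fluence maps (producer thread) and hand
    the accumulated buffer to the dose engine (consumer thread) atomically."""
    def __init__(self, shape):
        self.buf = np.zeros(shape)
        self.lock = threading.Lock()

    def accumulate(self, fm):           # called by T1 every ~20 ms
        with self.lock:
            self.buf += fm

    def fetch_and_clear(self):          # called by T2 when it is ready
        with self.lock:
            out, self.buf = self.buf, np.zeros_like(self.buf)
            return out

acc = FluenceAccumulator(shape=(2, 2))
for _ in range(5):                      # five instantaneous maps arrive
    acc.accumulate(np.ones((2, 2)))
snapshot = acc.fetch_and_clear()        # T2 receives the sum of all five
```

    Swapping the buffer under the lock means T1 never blocks on the (comparatively slow) dose calculation, which is what keeps the acquisition loop at its fixed rate.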

  12. 3D Elevation Program—Virtual USA in 3D

    Science.gov (United States)

    Lukas, Vicki; Stoker, J.M.

    2016-04-14

    The U.S. Geological Survey (USGS) 3D Elevation Program (3DEP) uses a laser system called ‘lidar’ (light detection and ranging) to create a virtual reality map of the Nation that is very accurate. 3D maps have many uses with new uses being discovered all the time.  

  13. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    3-D city models are very useful for various kinds of applications, such as planning in navigation, tourism, disaster management, transportation, municipal administration, urban environmental management and the real-estate industry. The construction of virtual 3-D city models has therefore been a most interesting research topic in recent years.

  14. A 3D virtual reality simulator for training of minimally invasive surgery.

    Science.gov (United States)

    Mi, Shao-Hua; Hou, Zeng-Gunag; Yang, Fan; Xie, Xiao-Liang; Bian, Gui-Bin

    2014-01-01

    For the last decade, remarkable progress has been made in the field of cardiovascular disease treatment. However, these complex medical procedures require a combination of rich experience and technical skills. In this paper, a 3D virtual reality simulator for core skills training in minimally invasive surgery is presented. The system can generate realistic 3D vascular models segmented from patient datasets, including a beating heart, and provides real-time force computation and a force feedback module for surgical simulation. Instruments, such as a catheter or guide wire, are represented by a multi-body mass-spring model. In addition, a realistic user interface with multiple windows and real-time 3D views has been developed. Moreover, the simulator is provided with a human-machine interaction module that gives doctors the sense of touch during surgery training and enables them to control the motion of a virtual catheter/guide wire inside a complex vascular model. Experimental results show that the simulator is suitable for minimally invasive surgery training.

  15. Real-time 3D human capture system for mixed-reality art and entertainment.

    Science.gov (United States)

    Nguyen, Ta Huynh Duy; Qui, Tran Cong Thien; Xu, Ke; Cheok, Adrian David; Teo, Sze Lee; Zhou, ZhiYing; Mallawaarachchi, Asitha; Lee, Shang Ping; Liu, Wei; Teo, Hui Siang; Thang, Le Nam; Li, Yu; Kato, Hirokazu

    2005-01-01

    A real-time system for capturing humans in 3D and placing them into a mixed reality environment is presented in this paper. The subject is captured by nine cameras surrounding her. Looking through a head-mounted-display with a camera in front pointing at a marker, the user can see the 3D image of this subject overlaid onto a mixed reality scene. The 3D images of the subject viewed from this viewpoint are constructed using a robust and fast shape-from-silhouette algorithm. The paper also presents several techniques to produce good quality and speed up the whole system. The frame rate of our system is around 25 fps using only standard Intel processor-based personal computers. Besides a remote live 3D conferencing and collaborating system, we also describe an application of the system in art and entertainment, named Magic Land, which is a mixed reality environment where captured avatars of human and 3D computer generated virtual animations can form an interactive story and play with each other. This system demonstrates many technologies in human computer interaction: mixed reality, tangible interaction, and 3D communication. The result of the user study not only emphasizes the benefits, but also addresses some issues of these technologies.
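
    The shape-from-silhouette algorithm mentioned above reconstructs the subject's visual hull: a voxel survives only if it projects inside every camera's silhouette mask. A toy sketch of that carving step, with two orthographic views standing in for the nine calibrated cameras of the real system:

```python
import numpy as np

def carve(voxels, silhouettes, project):
    """Keep only the voxels whose projection falls inside every
    camera's silhouette mask -- the visual hull at this resolution."""
    keep = np.ones(len(voxels), dtype=bool)
    for cam, sil in silhouettes.items():
        uv = project(voxels, cam)                 # (N, 2) pixel coordinates
        keep &= sil[uv[:, 1], uv[:, 0]]           # inside-mask test per voxel
    return voxels[keep]

# Toy setup: a 4x4x4 voxel grid carved by two orthographic views.
grid = np.array([[x, y, z] for x in range(4) for y in range(4) for z in range(4)])
sil_top = np.zeros((4, 4), dtype=bool);   sil_top[1:3, 1:3] = True   # x-y view
sil_front = np.zeros((4, 4), dtype=bool); sil_front[:, 1:3] = True   # x-z view

def project(v, cam):                              # orthographic stand-ins
    return v[:, :2] if cam == "top" else v[:, [0, 2]]

hull = carve(grid, {"top": sil_top, "front": sil_front}, project)
```

    Real systems use calibrated perspective projections and much finer grids, but the intersection-of-silhouette-cones logic is the same, which is why the method parallelizes well enough for 25 fps on commodity PCs.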

  16. Virtual Reality Cerebral Aneurysm Clipping Simulation With Real-time Haptic Feedback

    Science.gov (United States)

    Alaraj, Ali; Luciano, Cristian J.; Bailey, Daniel P.; Elsenousi, Abdussalam; Roitberg, Ben Z.; Bernardo, Antonio; Banerjee, P. Pat; Charbel, Fady T.

    2014-01-01

    Background With the decrease in the number of cerebral aneurysms treated surgically and the increase of complexity of those treated surgically, there is a need for simulation-based tools to teach future neurosurgeons the operative techniques of aneurysm clipping. Objective To develop and evaluate the usefulness of a new haptic-based virtual reality (VR) simulator in the training of neurosurgical residents. Methods A real-time sensory haptic feedback virtual reality aneurysm clipping simulator was developed using the Immersive Touch platform. A prototype middle cerebral artery aneurysm simulation was created from a computed tomography angiogram. Aneurysm and vessel volume deformation and haptic feedback are provided in a 3-D immersive VR environment. Intraoperative aneurysm rupture was also simulated. Seventeen neurosurgery residents from three residency programs tested the simulator and provided feedback on its usefulness and resemblance to real aneurysm clipping surgery. Results Residents felt that the simulation would be useful in preparing for real-life surgery. About two thirds of the residents felt that the 3-D immersive anatomical details provided a very close resemblance to real operative anatomy and accurate guidance for deciding surgical approaches. They believed the simulation is useful for preoperative surgical rehearsal and neurosurgical training. One third of the residents felt that the technology in its current form provided very realistic haptic feedback for aneurysm surgery. Conclusion Neurosurgical residents felt that the novel immersive VR simulator is helpful in their training especially since they do not get a chance to perform aneurysm clippings until very late in their residency programs. PMID:25599200

  17. Planning and Management of Real-Time Geospatialuas Missions Within a Virtual Globe Environment

    Science.gov (United States)

    Nebiker, S.; Eugster, H.; Flückiger, K.; Christen, M.

    2011-09-01

    This paper presents the design and development of a hardware and software framework supporting all phases of typical monitoring and mapping missions with mini and micro UAVs (unmanned aerial vehicles). The developed solution combines state-of-the art collaborative virtual globe technologies with advanced geospatial imaging techniques and wireless data link technologies supporting the combined and highly reliable transmission of digital video, high-resolution still imagery and mission control data over extended operational ranges. The framework enables the planning, simulation, control and real-time monitoring of UAS missions in application areas such as monitoring of forest fires, agronomical research, border patrol or pipeline inspection. The geospatial components of the project are based on the Virtual Globe Technology i3D OpenWebGlobe of the Institute of Geomatics Engineering at the University of Applied Sciences Northwestern Switzerland (FHNW). i3D OpenWebGlobe is a high-performance 3D geovisualisation engine supporting the web-based streaming of very large amounts of terrain and POI data.

  18. Toward real-time virtual biopsy of oral lesions using confocal laser endomicroscopy interfaced with embedded computing.

    Science.gov (United States)

    Thong, Patricia S P; Tandjung, Stephanus S; Movania, Muhammad Mobeen; Chiew, Wei-Ming; Olivo, Malini; Bhuvaneswari, Ramaswamy; Seah, Hock-Soon; Lin, Feng; Qian, Kemao; Soo, Khee-Chee

    2012-05-01

    Oral lesions are conventionally diagnosed using white light endoscopy and histopathology. This can pose a challenge because the lesions may be difficult to visualise under white light illumination. Confocal laser endomicroscopy can be used for confocal fluorescence imaging of surface and subsurface cellular and tissue structures. To move toward real-time "virtual" biopsy of oral lesions, we interfaced an embedded computing system to a confocal laser endomicroscope to achieve a prototype three-dimensional (3-D) fluorescence imaging system. A field-programmable gated array computing platform was programmed to enable synchronization of cross-sectional image grabbing and Z-depth scanning, automate the acquisition of confocal image stacks and perform volume rendering. Fluorescence imaging of the human and murine oral cavities was carried out using the fluorescent dyes fluorescein sodium and hypericin. Volume rendering of cellular and tissue structures from the oral cavity demonstrate the potential of the system for 3-D fluorescence visualization of the oral cavity in real-time. We aim toward achieving a real-time virtual biopsy technique that can complement current diagnostic techniques and aid in targeted biopsy for better clinical outcomes.

  19. Real time determination of dose radiation through artificial intelligence and virtual reality

    International Nuclear Information System (INIS)

    Freitas, Victor G.G.; Mol, Antonio C.A.; Pereira, Claudio M.N.A.

    2009-01-01

    In the last years, a virtual environment of Argonauta research reactor, sited in the Instituto de Engenharia Nuclear (Brazil), has been developed. Such environment, called here Argonauta Virtual (AV), is a 3D model of the reactor hall, in which virtual people (avatar) can navigate. In AV, simulations of nuclear sources and doses are possible. In a recent work, a real time monitoring system (RTMS) was developed to provide (by means of Ethernet TCP/IP) the information of area detectors situated in the reactor hall. Extending the scope of AV, this work is intended to provide a continuous determination of gamma radiation dose in the reactor hall, based in several monitored parameters. To accomplish that a module based in artificial neural network (ANN) was developed. The ANN module is able to predict gamma radiation doses using as inputs: the avatar position (from virtual environment), the reactor power (from RTMS) and information of fixed area detectors (from RTMS). The ANN training data has been obtained by measurements of gamma radiation doses in a mesh of points, with previously defined positions, for different power levels. Through the use of ANN it is possible to estimate, in real time, the dose received by a person at any position in Argonauta reactor hall. Such approach allows tasks simulations and training of people inside the AV system, without exposing them to radiation effects. (author)
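
    At inference time, the ANN module described above is just a forward pass from (avatar position, reactor power, area-detector readings) to a dose estimate. A sketch with random placeholder weights; in the real system the weights come from training on the measured dose mesh, and the network size below is an assumption since the abstract does not give it:

```python
import numpy as np

def mlp_dose(inputs, W1, b1, W2, b2):
    """Forward pass of a small feed-forward network: inputs are the
    avatar position, reactor power and fixed-detector readings; the
    output is the predicted gamma dose rate. Weights are placeholders."""
    h = np.tanh(inputs @ W1 + b1)       # hidden layer
    return h @ W2 + b2                  # linear output

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(6, 8)), np.zeros(8)   # 6 inputs, 8 hidden units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # single dose output
x = np.array([1.0, 2.0, 0.5, 0.8, 0.1, 0.2])    # [x, y, z, power, det1, det2]
dose = mlp_dose(x, W1, b1, W2, b2)
```

    Because the forward pass is a handful of small matrix products, evaluating it at every avatar position each frame is cheap enough for the real-time requirement.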

  20. Real time determination of dose radiation through artificial intelligence and virtual reality

    International Nuclear Information System (INIS)

    Freitas, Victor Goncalves Gloria

    2009-01-01

    In the last years, a virtual environment of Argonauta research reactor, sited in the Instituto de Engenharia Nuclear (Brazil), has been developed. Such environment, called here Argonauta Virtual (AV), is a 3D model of the reactor hall, in which virtual people (avatar) can navigate. In AV, simulations of nuclear sources and doses are possible. In a recent work, a real time monitoring system (RTMS) was developed to provide (by means of Ethernet TCP/IP) the information of area detectors situated in the reactor hall. Extending the scope of AV, this work is intended to provide a continuous determination of gamma radiation dose in the reactor hall, based in several monitored parameters. To accomplish that a module based in artificial neural network (ANN) was developed. The ANN module is able to predict gamma radiation doses using as inputs: the avatar position (from virtual environment), the reactor power (from RTMS) and information of fixed area detectors (from RTMS). The ANN training data has been obtained by measurements of gamma radiation doses in a mesh of points, with previously defined positions, for different power levels. Through the use of ANN it is possible to estimate, in real time, the dose received by a person at any position in Argonauta reactor hall. Such approach allows tasks simulations and training of people inside the AV system, without exposing them to radiation effects. (author)

  1. VR-Planets : a 3D immersive application for real-time flythrough images of planetary surfaces

    Science.gov (United States)

    Civet, François; Le Mouélic, Stéphane

    2015-04-01

    During the last two decades, a fleet of planetary probes has acquired several hundred gigabytes of images of planetary surfaces. Mars has been particularly well covered thanks to the Mars Global Surveyor, Mars Express and Mars Reconnaissance Orbiter spacecraft. The HRSC, CTX and HiRISE instruments have allowed the computation of Digital Elevation Models with a resolution from hundreds of meters up to 1 meter per pixel, and corresponding orthoimages with a resolution from a few hundred meters up to 25 centimeters per pixel. The integration of such huge data sets into a system allowing user-friendly manipulation, either for scientific investigation or for public outreach, can represent a real challenge. We are investigating how innovative tools can be used to freely fly over reconstructed landscapes in real time, using technologies derived from the game industry and virtual reality. We have developed an application based on a game engine, using planetary data, to immerse users in real Martian landscapes. The user can freely navigate in each scene at full spatial resolution using a game controller. The actual rendering is compatible with several visualization devices such as 3D active screens, virtual reality headsets (Oculus Rift), and Android devices.

  2. Real-Time 3D Profile Measurement Using Structured Light

    International Nuclear Information System (INIS)

    Xu, L; Zhang, Z J; Ma, H; Yu, Y J

    2006-01-01

    The paper builds a real-time system for 3D profile measurement using structured-light imaging. It allows a hand-held object to rotate freely in the space-time coded light field projected by the projector. The surface of the measured object with the projected coded light is imaged, and the system shows surface reconstruction results online. This feedback helps the user adjust the object's pose in the light field according to missing or erroneous data, which improves the completeness of the data used in reconstruction. The method can acquire a denser data cloud and achieve higher reconstruction accuracy and efficiency. According to the real-time requirements, the paper presents a non-restricted light-plane model suited to stripe structured-light systems, designs a three-frame stripe space-time coded pattern, and uses an advanced ICP algorithm to align 3D data from multiple views
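
    The light-plane model mentioned above reduces depth recovery to a ray-plane intersection: each pixel on a decoded stripe defines a camera ray, and the calibrated plane of that stripe fixes where along the ray the surface lies. A minimal sketch of this core computation, with invented calibration values:

```python
import numpy as np

def triangulate(ray_dir, plane_n, plane_d):
    """Intersect a camera ray through the origin with a calibrated
    stripe light plane n . X = d -- the basic depth computation behind
    stripe structured-light systems."""
    ray_dir = ray_dir / np.linalg.norm(ray_dir)
    s = plane_d / (plane_n @ ray_dir)   # distance along the ray to the plane
    return s * ray_dir

# Hypothetical calibration: stripe plane x = 0.2 m, pixel ray slightly
# off the optical axis.
p = triangulate(np.array([0.1, 0.0, 1.0]),
                np.array([1.0, 0.0, 0.0]), 0.2)   # -> [0.2, 0.0, 2.0]
```

    Decoding the three-frame space-time pattern tells the system which calibrated plane illuminated each pixel, so every stripe pixel yields one such 3D point per frame.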

  3. A Flattened Hierarchical Scheduler for Real-Time Virtual Machines

    OpenAIRE

    Drescher, Michael Stuart

    2015-01-01

    The recent trend of migrating legacy computer systems to a virtualized, cloud-based environment has expanded to real-time systems. Unfortunately, modern hypervisors have no mechanism in place to guarantee the real-time performance of applications running on virtual machines. Past solutions to this problem rely on either spatial or temporal resource partitioning, both of which under-utilize the processing capacity of the host system. Paravirtualized solutions in which the guest communicates it...

  4. Computer Tool for Automatically Generated 3D Illustration in Real Time from Archaeological Scanned Pieces

    Directory of Open Access Journals (Sweden)

    Luis López

    2012-11-01

    Full Text Available The graphical documentation process of archaeological pieces requires the active involvement of a professional artist to recreate beautiful illustrations using a wide variety of expressive techniques. Frequently, the artist's work is limited by the inconvenience of working only with photographs of the pieces to be illustrated. This paper presents a software tool that allows the easy generation of illustrations in real time from 3D scanned models. The developed interface allows the user to simulate very elaborate artistic styles through the creation of diagrams using the available virtual lights. The software processes the diagrams to render an illustration from any given angle or position. Among the available virtual lights are well-known techniques such as silhouette enhancement, hatching and toon shading.

  5. Augmented Reality versus Virtual Reality for 3D Object Manipulation.

    Science.gov (United States)

    Krichenbauer, Max; Yamamoto, Goshiro; Taketom, Takafumi; Sandor, Christian; Kato, Hirokazu

    2018-02-01

    Virtual Reality (VR) Head-Mounted Displays (HMDs) are on the verge of becoming commodity hardware available to the average user and feasible to use as a tool for 3D work. Some HMDs include front-facing cameras, enabling Augmented Reality (AR) functionality. Apart from avoiding collisions with the environment, interaction with virtual objects may also be affected by seeing the real environment. However, whether these effects are positive or negative has not yet been studied extensively. For most tasks it is unknown whether AR has any advantage over VR. In this work we present the results of a user study in which we compared user performance measured in task completion time on a 9 degrees of freedom object selection and transformation task performed either in AR or VR, both with a 3D input device and a mouse. Our results show faster task completion time in AR over VR. When using a 3D input device, a purely VR environment increased task completion time by 22.5 percent on average compared to AR ( ). Surprisingly, a similar effect occurred when using a mouse: users were about 17.3 percent slower in VR than in AR ( ). Mouse and 3D input device produced similar task completion times in each condition (AR or VR) respectively. We further found no differences in reported comfort.

  6. An Indoor Scene Recognition-Based 3D Registration Mechanism for Real-Time AR-GIS Visualization in Mobile Applications

    Directory of Open Access Journals (Sweden)

    Wei Ma

    2018-03-01

    Full Text Available Mobile Augmented Reality (MAR systems are becoming ideal platforms for visualization, permitting users to better comprehend and interact with spatial information. Subsequently, this technological development, in turn, has prompted efforts to enhance mechanisms for registering virtual objects in real world contexts. Most existing AR 3D Registration techniques lack the scene recognition capabilities needed to describe accurately the positioning of virtual objects in scenes representing reality. Moreover, the application of such registration methods in indoor AR-GIS systems is further impeded by the limited capacity of these systems to detect the geometry and semantic information in indoor environments. In this paper, we propose a novel method for fusing virtual objects and indoor scenes, based on indoor scene recognition technology. To accomplish scene fusion in AR-GIS, we first detect key points in reference images. Then, we perform interior layout extraction using a Fully Connected Networks (FCN algorithm to acquire layout coordinate points for the tracking targets. We detect and recognize the target scene in a video frame image to track targets and estimate the camera pose. In this method, virtual 3D objects are fused precisely to a real scene, according to the camera pose and the previously extracted layout coordinate points. Our results demonstrate that this approach enables accurate fusion of virtual objects with representations of real world indoor environments. Based on this fusion technique, users can better grasp virtual three-dimensional representations on an AR-GIS platform.

  7. Towards real-time 3D ultrasound planning and personalized 3D printing for breast HDR brachytherapy treatment

    International Nuclear Information System (INIS)

    Poulin, Eric; Gardi, Lori; Fenster, Aaron; Pouliot, Jean; Beaulieu, Luc

    2015-01-01

    Two different end-to-end procedures were tested for real-time planning in breast HDR brachytherapy treatment. Both methods use a 3D ultrasound (3DUS) system and a freehand catheter optimization algorithm, and both were found to be fast and efficient. We demonstrated a proof-of-concept approach for personalized real-time guidance and planning of breast HDR brachytherapy treatments

  8. Monitoring tumor motion by real time 2D/3D registration during radiotherapy.

    Science.gov (United States)

    Gendrin, Christelle; Furtado, Hugo; Weber, Christoph; Bloch, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Bergmann, Helmar; Stock, Markus; Fichtinger, Gabor; Georg, Dietmar; Birkfellner, Wolfgang

    2012-02-01

    In this paper, we investigate the possibility to use X-ray based real time 2D/3D registration for non-invasive tumor motion monitoring during radiotherapy. The 2D/3D registration scheme is implemented using general purpose computation on graphics hardware (GPGPU) programming techniques and several algorithmic refinements in the registration process. Validation is conducted off-line using a phantom and five clinical patient data sets. The registration is performed on a region of interest (ROI) centered around the planned target volume (PTV). The phantom motion is measured with an rms error of 2.56 mm. For the patient data sets, a sinusoidal movement that clearly correlates to the breathing cycle is shown. Videos show a good match between X-ray and digitally reconstructed radiographs (DRR) displacement. Mean registration time is 0.5 s. We have demonstrated that real-time organ motion monitoring using image based markerless registration is feasible. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
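
    Intensity-based 2D/3D registration of this kind repeatedly compares the live X-ray with a DRR under a similarity measure. As a much-simplified stand-in for the GPU pipeline in the paper, the sketch below runs a brute-force 2-D translation search under normalised cross-correlation; a real 2D/3D registration would instead re-render the DRR for each candidate rigid pose.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation, an intensity-based similarity
    measure of the kind used to compare an X-ray with a DRR."""
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_shift(xray, drr, search=3):
    """Exhaustive 2-D translation search maximising NCC (toy version)."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(drr, dy, axis=0), dx, axis=1)
            score = ncc(xray, shifted)
            if score > best_score:
                best_score, best = score, (dy, dx)
    return best

rng = np.random.default_rng(1)
drr = rng.random((32, 32))
xray = np.roll(np.roll(drr, 2, axis=0), -1, axis=1)  # ground-truth shift
shift = best_shift(xray, drr)                        # recovers (2, -1)
```

    The expensive parts in practice are DRR rendering and the similarity evaluation over a full region of interest, which is why the paper moves them to the GPU to reach the reported 0.5 s per registration.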

  9. Real-time tracking with a 3D-flow processor array

    International Nuclear Information System (INIS)

    Crosetto, D.

    1993-01-01

    The problem of real-time track-finding has been addressed to date with CAMs (Content Addressable Memories) or with fast coincidence logic, because the computing scheme was thought to have much slower performance. Advances in technology, together with a new architectural approach, make it feasible to also explore the computing technique for real-time track finding, thus giving the advantage over the CAM approach of implementing algorithms that can find more parameters, such as calculating the sagitta, curvature, pt, etc. This report describes real-time track finding using a new computing approach based on the 3D-Flow array processor system. This system consists of a fixed interconnection architecture scheme, allowing flexible algorithm implementation on a scalable platform. The 3D-Flow parallel processing system for track finding is scalable in size and performance by increasing the number of processors, the processor speed, or the number of pipelined stages. The present article describes the conceptual idea and the design stage of the project.

  10. Real-time tracking with a 3D-Flow processor array

    International Nuclear Information System (INIS)

    Crosetto, D.

    1993-06-01

    The problem of real-time track-finding has been addressed to date with CAMs (Content Addressable Memories) or with fast coincidence logic, because the computing approach was thought to have much slower performance. Advances in technology together with a new architectural approach make it feasible to also explore the computing technique for real-time track finding, giving the advantage over the CAM approach of implementing algorithms that can extract more parameters, such as the sagitta, curvature, and pt. The report describes real-time track finding using a new computing technique based on the 3D-Flow array processor system. This system consists of a fixed interconnection architecture scheme, allowing flexible algorithm implementation on a scalable platform. The 3D-Flow parallel processing system for track finding is scalable in size and performance by increasing the number of processors, the processor speed, or the number of pipelined stages. The present article describes the conceptual idea and the design stage of the project.

  11. The 3D virtual environment online for real shopping

    OpenAIRE

    Khalil, Nahla

    2015-01-01

    The development of information technology and the Internet has led to rapid progress in e-commerce and online shopping, due to the convenience they provide consumers. Even so, e-commerce and online shopping are still not able to fully replace onsite shopping: conventional online shopping websites often cannot provide enough information about a product for the customer to make an informed decision before checkout. 3D virtual shopping environments show great potential for enhancing e-co...

  12. Real-time 3-D space numerical shake prediction for earthquake early warning

    Science.gov (United States)

    Wang, Tianyun; Jin, Xing; Huang, Yandan; Wei, Yongxiang

    2017-12-01

    In earthquake early warning systems, real-time shake prediction through wave propagation simulation is a promising approach. Compared with traditional methods, it does not suffer from inaccurate estimation of source parameters. For computational efficiency, these methods assume the wave propagates on the 2-D surface of the earth. In fact, since seismic waves propagate through the 3-D sphere of the earth, 2-D modeling of wave propagation yields inaccurate wave estimates. In this paper, we propose a 3-D numerical shake prediction method, which simulates wave propagation in 3-D space using radiative transfer theory and incorporates a data assimilation technique to estimate the distribution of wave energy. The 2011 Tohoku earthquake is studied as an example to show the validity of the proposed model. The 2-D and 3-D space models are compared in this article, and the prediction results show that numerical shake prediction based on the 3-D model estimates real-time ground motion more precisely and alleviates overprediction.

  13. Real Time 3D Facial Movement Tracking Using a Monocular Camera

    Directory of Open Access Journals (Sweden)

    Yanchao Dong

    2016-07-01

    Full Text Available The paper proposes a robust framework for 3D facial movement tracking in real time using a monocular camera. It is designed to estimate the 3D face pose and local facial animation such as eyelid and mouth movement. The framework first utilizes the Discriminative Shape Regression method to locate the facial feature points on the 2D image, then fuses the 2D data with a 3D face model using an Extended Kalman Filter to yield 3D facial movement information. An alternating optimization strategy is adopted to adapt to different persons automatically. Experiments show that the proposed framework can track 3D facial movement across various poses and illumination conditions. Given the real face scale, the framework tracks the eyelid with an error of 1 mm and the mouth with an error of 2 mm. The tracking result is reliable enough for expression analysis or mental state inference.
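
    The core of such 2D/3D fusion is an Extended Kalman Filter correction step: 2-D feature observations are compared against the projection of the 3-D state, and the state is pulled toward agreement. The sketch below is a minimal single-point version under a toy pinhole camera with an assumed 500 px focal length; the paper's actual state (pose plus animation units) is much larger.

```python
import numpy as np

def project(X, f=500.0):
    """Pinhole projection of a 3-D point to 2-D pixel coordinates (toy camera)."""
    x, y, z = X
    return np.array([f * x / z, f * y / z])

def jacobian(X, f=500.0):
    """Analytic Jacobian of the projection with respect to the 3-D point."""
    x, y, z = X
    return np.array([[f / z, 0.0, -f * x / z**2],
                     [0.0, f / z, -f * y / z**2]])

def ekf_update(X, P, z_meas, R, f=500.0):
    """One EKF correction: fuse a 2-D feature observation into a 3-D state."""
    H = jacobian(X, f)
    y = z_meas - project(X, f)          # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    X_new = X + K @ y
    P_new = (np.eye(3) - K @ H) @ P
    return X_new, P_new
```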

  14. Real-time 3D-surface-guided head refixation useful for fractionated stereotactic radiotherapy

    International Nuclear Information System (INIS)

    Li Shidong; Liu Dezhi; Yin Gongjie; Zhuang Ping; Geng, Jason

    2006-01-01

    Accurate and precise head refixation in fractionated stereotactic radiotherapy has been achieved through alignment of real-time 3D-surface images with a reference surface image. The reference surface image is either a 3D optical surface image taken at simulation with the desired treatment position, or a CT/MRI surface rendering in the treatment plan with corrections for patient motion during CT/MRI scans and partial volume effects. The real-time 3D surface images are rapidly captured using a 3D video camera mounted on the ceiling of the treatment vault. Any facial expression, such as mouth opening, that affects surface shape and location can be avoided using a new facial monitoring technique. Image artifacts on the real-time surface can generally be removed by setting a threshold on jumps between neighboring points while preserving detailed features of the surface of interest. Such a real-time surface image, registered in the treatment machine coordinate system, provides a reliable representation of the patient head position during treatment. A fast automatic alignment between the real-time surface and the reference surface using a modified iterative-closest-point method leads to efficient and robust surface-guided target refixation. Experimental and clinical results demonstrate excellent efficacy: <2 min set-up time, the desired accuracy and precision of <1 mm in isocenter shifts, and <1 deg. in rotation.
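
    The iterative-closest-point alignment at the heart of this approach can be sketched as follows. This is plain point-to-point ICP with an SVD (Kabsch) rigid fit and brute-force correspondence search on synthetic data; the paper's modifications to ICP are not reproduced here.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation/translation aligning paired points (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:       # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP with brute-force closest-point search."""
    cur = src.copy()
    for _ in range(iters):
        # O(N*M) correspondence search; a k-d tree would be used in practice.
        d = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(cur, matched)
        cur = cur @ R.T + t
    return cur
```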

  15. 2D and 3D Traveling Salesman Problem

    Science.gov (United States)

    Haxhimusa, Yll; Carpenter, Edward; Catrambone, Joseph; Foldes, David; Stefanov, Emil; Arns, Laura; Pizlo, Zygmunt

    2011-01-01

    When a two-dimensional (2D) traveling salesman problem (TSP) is presented on a computer screen, human subjects can produce near-optimal tours in linear time. In this study we tested human performance on a real and virtual floor, as well as in a three-dimensional (3D) virtual space. Human performance on the real floor is as good as that on a…

  16. A multi-frequency electrical impedance tomography system for real-time 2D and 3D imaging

    Science.gov (United States)

    Yang, Yunjie; Jia, Jiabin

    2017-08-01

    This paper presents the design and evaluation of a configurable, fast multi-frequency Electrical Impedance Tomography (mfEIT) system for real-time 2D and 3D imaging, particularly for biomedical imaging. The system integrates 32 electrode interfaces, with current frequencies ranging from 10 kHz to 1 MHz. The system incorporates the following novel features. First, a fully adjustable multi-frequency current source with a current monitoring function is designed. Second, a flexible switching scheme is developed for arbitrary sensing configurations, and a semi-parallel data acquisition architecture is implemented for high-frame-rate data acquisition. Furthermore, multi-frequency digital quadrature demodulation is performed in a high-capacity Field Programmable Gate Array. Finally, 3D imaging software, Visual Tomography, is developed for real-time 2D and 3D image reconstruction, data analysis, and visualization. The mfEIT system is systematically evaluated in terms of signal-to-noise ratio (SNR), frame rate, and 2D and 3D multi-frequency phantom imaging. The highest SNR is 82.82 dB on a 16-electrode sensor. The frame rate is up to 546 fps in serial mode and 1014 fps in semi-parallel mode. The evaluation results indicate that the presented mfEIT system is a powerful tool for real-time 2D and 3D imaging.
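
    Digital quadrature demodulation, which the system performs per frequency in the FPGA, recovers the amplitude and phase of a measured carrier by multiplying with in-phase and quadrature references and low-pass filtering. A software sketch for a single carrier (the FPGA implementation is pipelined and multi-frequency):

```python
import numpy as np

def quadrature_demodulate(signal, f_carrier, fs):
    """Recover amplitude and phase of a carrier by I/Q demodulation.

    Averaging over an integer number of carrier cycles acts as the
    low-pass filter that rejects the double-frequency term.
    """
    n = np.arange(len(signal))
    ref_i = np.cos(2 * np.pi * f_carrier * n / fs)
    ref_q = np.sin(2 * np.pi * f_carrier * n / fs)
    i = 2.0 * np.mean(signal * ref_i)
    q = 2.0 * np.mean(signal * ref_q)
    return np.hypot(i, q), np.arctan2(q, i)
```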

  17. Real time determination of dose radiation through artificial intelligence and virtual reality; Determinacao de dose de radiacao, em tempo real, atraves de inteligencia artificial e realidade virtual

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, Victor Goncalves Gloria

    2009-07-01

    In recent years, a virtual environment of the Argonauta research reactor, located at the Instituto de Engenharia Nuclear (Brazil), has been developed. This environment, called here Argonauta Virtual (AV), is a 3D model of the reactor hall in which virtual people (avatars) can navigate. In AV, simulations of nuclear sources and doses are possible. In a recent work, a real-time monitoring system (RTMS) was developed to provide (by means of Ethernet TCP/IP) the information from area detectors situated in the reactor hall. Extending the scope of AV, this work is intended to provide a continuous determination of gamma radiation dose in the reactor hall, based on several monitored parameters. To accomplish this, a module based on an artificial neural network (ANN) was developed. The ANN module is able to predict gamma radiation doses using as inputs: the avatar position (from the virtual environment), the reactor power (from RTMS), and information from fixed area detectors (from RTMS). The ANN training data were obtained by measuring gamma radiation doses on a mesh of points, with previously defined positions, for different power levels. Through the use of the ANN it is possible to estimate, in real time, the dose received by a person at any position in the Argonauta reactor hall. This approach allows task simulations and training of people inside the AV system, without exposing them to radiation. (author)
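
    The approach maps (position, power) inputs to a measured dose. As an illustration only, the sketch below trains a one-hidden-layer network with plain gradient descent on a synthetic inverse-square dose field; the source location, fall-off law, and network size are invented for the example and are not Argonauta data.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the measured training mesh: dose falls off with
# distance to a fixed (hypothetical) source and scales with reactor power.
def synthetic_dose(pos, power):
    d2 = ((pos - np.array([2.0, 1.0, 1.5])) ** 2).sum(axis=1)
    return power / (1.0 + d2)

X_pos = rng.uniform(0, 4, size=(400, 3))
X_pow = rng.uniform(0.1, 1.0, size=(400, 1))
X = np.hstack([X_pos, X_pow])               # inputs: (x, y, z, power)
y = synthetic_dose(X_pos, X_pow[:, 0])[:, None]

# One-hidden-layer MLP trained by gradient descent on mean squared error.
W1 = rng.normal(0, 0.5, (4, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2, h

losses, lr = [], 0.05
for _ in range(300):
    pred, h = forward(X)
    err = pred - y
    losses.append(float((err ** 2).mean()))
    # Backpropagation of the MSE gradient.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```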

  18. Finite Element Methods for real-time Haptic Feedback of Soft-Tissue Models in Virtual Reality Simulators

    Science.gov (United States)

    Frank, Andreas O.; Twombly, I. Alexander; Barth, Timothy J.; Smith, Jeffrey D.; Dalton, Bonnie P. (Technical Monitor)

    2001-01-01

    We have applied the linear elastic finite element method to compute haptic force feedback and domain deformations of soft tissue models for use in virtual reality simulators. Our results show that, for virtual object models built from high-resolution 3D data (>10,000 nodes), haptic real-time computation (>500 Hz) is not currently possible using traditional methods. Current research efforts are focused on the following areas: 1) efficient implementation of fully adaptive multi-resolution methods and 2) multi-resolution methods with specialized basis functions to capture the singularity at the haptic interface (point loading). To achieve real-time computation, we propose parallel processing of a Jacobi preconditioned conjugate gradient method applied to a reduced system of equations resulting from surface domain decomposition. This can effectively be achieved using reconfigurable computing systems such as field programmable gate arrays (FPGA), thereby providing a flexible solution that allows for new FPGA implementations as improved algorithms become available. The resulting soft tissue simulation system would meet NASA Virtual Glovebox requirements and, at the same time, provide a generalized simulation engine for any immersive environment application, such as biomedical/surgical procedures or interactive scientific applications.
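
    The proposed solver is a Jacobi-preconditioned conjugate gradient iteration on the (symmetric positive-definite) FEM stiffness system. A minimal serial sketch, using a small random SPD matrix in place of an actual stiffness matrix:

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-8, max_iter=200):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner.

    Solves A x = b for symmetric positive-definite A; the preconditioner
    is simply the inverse of diag(A), which parallelizes per unknown.
    """
    Minv = 1.0 / np.diag(A)
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```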

  19. Real-time microscopic 3D shape measurement based on optimized pulse-width-modulation binary fringe projection

    Science.gov (United States)

    Hu, Yan; Chen, Qian; Feng, Shijie; Tao, Tianyang; Li, Hui; Zuo, Chao

    2017-07-01

    In recent years, tremendous progress has been made in 3D measurement techniques, contributing to faster and more accurate 3D measurement. As a representative of these techniques, fringe projection profilometry (FPP) has become a commonly used method for real-time 3D measurement, such as real-time quality control and online inspection. To date, most related research has been concerned with macroscopic 3D measurement, but microscopic 3D measurement, especially real-time microscopic 3D measurement, is rarely reported. Microscopic 3D measurement nevertheless plays an important role in 3D metrology and is indispensable when measuring micro-scale objects, for example in the accurate metrology of MEMS components to ensure proper performance of the final devices. In this paper, we propose a method which combines optimized binary structured patterns with a number-theoretical phase unwrapping algorithm to realize real-time microscopic 3D measurement. A slight defocusing of our optimized binary patterns considerably alleviates the measurement error of four-step phase-shifting FPP, giving the binary patterns a performance comparable to ideal sinusoidal patterns. The static measurement accuracy reaches 8 μm, and the experimental results on a vibrating earphone diaphragm show that our system can realize real-time 3D measurement at 120 frames per second (FPS) with a measurement range of 8 mm × 6 mm laterally and 8 mm in depth.
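
    Four-step phase-shifting FPP recovers the wrapped phase from four patterns shifted by 90°: with I_k = A + B·cos(φ + kπ/2), the phase is φ = atan2(I₃ − I₁, I₀ − I₂). The sketch below shows just this demodulation step; the number-theoretical phase unwrapping that resolves the 2π ambiguity is a separate stage not reproduced here.

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped phase from four fringe images shifted by 0, 90, 180, 270 degrees.

    With I_k = A + B*cos(phi + k*pi/2):
        I3 - I1 = 2B sin(phi),  I0 - I2 = 2B cos(phi)
    so phi = atan2(I3 - I1, I0 - I2), independent of background A and modulation B.
    """
    return np.arctan2(i3 - i1, i0 - i2)
```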

  20. REAL-TIME CAMERA GUIDANCE FOR 3D SCENE RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    F. Schindler

    2012-07-01

    Full Text Available We propose a framework for operator guidance during the image acquisition process for reliable multi-view stereo reconstruction. The goal is to achieve full coverage of the object and sufficient overlap. Multi-view stereo is a commonly used method to reconstruct both the camera trajectory and 3D object shape. After determining an initial solution, a globally optimal reconstruction is usually obtained by executing a bundle adjustment involving all images. Acquiring suitable images, however, still requires an experienced operator to ensure accuracy and completeness of the final solution. We propose an interactive framework for guiding inexperienced users or possibly an autonomous robot. Using approximate camera orientations and object points, we estimate point uncertainties within a sliding bundle adjustment and suggest appropriate camera movements. A visual feedback system communicates the decisions to the user in an intuitive way. We demonstrate the suitability of our system with a virtual image acquisition simulation as well as in real-world scenarios. We show that when following the camera movements suggested by our system, the proposed framework is able to generate good approximate values for the bundle adjustment, leading to accurate results compared to ground truth after few iterations. Possible applications are non-professional 3D acquisition systems on low-cost platforms like mobile phones, autonomously navigating robots, and online flight planning of unmanned aerial vehicles.

  1. Virtual reality cerebral aneurysm clipping simulation with real-time haptic feedback.

    Science.gov (United States)

    Alaraj, Ali; Luciano, Cristian J; Bailey, Daniel P; Elsenousi, Abdussalam; Roitberg, Ben Z; Bernardo, Antonio; Banerjee, P Pat; Charbel, Fady T

    2015-03-01

    With the decrease in the number of cerebral aneurysms treated surgically and the increase of complexity of those treated surgically, there is a need for simulation-based tools to teach future neurosurgeons the operative techniques of aneurysm clipping. To develop and evaluate the usefulness of a new haptic-based virtual reality simulator in the training of neurosurgical residents. A real-time sensory haptic feedback virtual reality aneurysm clipping simulator was developed using the ImmersiveTouch platform. A prototype middle cerebral artery aneurysm simulation was created from a computed tomographic angiogram. Aneurysm and vessel volume deformation and haptic feedback are provided in a 3-dimensional immersive virtual reality environment. Intraoperative aneurysm rupture was also simulated. Seventeen neurosurgery residents from 3 residency programs tested the simulator and provided feedback on its usefulness and resemblance to real aneurysm clipping surgery. Residents thought that the simulation would be useful in preparing for real-life surgery. About two-thirds of the residents thought that the 3-dimensional immersive anatomic details provided a close resemblance to real operative anatomy and accurate guidance for deciding surgical approaches. They thought the simulation was useful for preoperative surgical rehearsal and neurosurgical training. A third of the residents thought that the technology in its current form provided realistic haptic feedback for aneurysm surgery. Neurosurgical residents thought that the novel immersive VR simulator is helpful in their training, especially because they do not get a chance to perform aneurysm clippings until late in their residency programs.

  2. Vision-based overlay of a virtual object into real scene for designing room interior

    Science.gov (United States)

    Harasaki, Shunsuke; Saito, Hideo

    2001-10-01

    In this paper, we introduce a geometric registration method for augmented reality (AR) and an application system, interior simulator, in which a virtual (CG) object can be overlaid onto a real-world space. Interior simulator is developed as an example AR application of the proposed method. Using interior simulator, users can visually simulate the placement of virtual furniture and articles in a living room, viewing it from many different locations and orientations in real time, so that they can easily design the room interior without placing real furniture and articles. In our system, two base images of the real-world space are captured from two different views to define a projective coordinate frame of the 3D object space. Each projective view of a virtual object in the base images is then registered interactively. After this coordinate determination, an image sequence of the real-world space is captured by a hand-held camera while tracking non-metric feature points for overlaying the virtual object. Virtual objects can be overlaid onto the image sequence by exploiting the relationships between the images. With the proposed system, 3D position tracking devices, such as magnetic trackers, are not required for the overlay of virtual objects. Experimental results demonstrate that 3D virtual furniture can be overlaid onto an image sequence of a living room scene at nearly video rate (20 frames per second).
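
    Registering views without metric calibration of this kind rests on projective relations between image planes. As a self-contained illustration (not the paper's exact formulation), the sketch below estimates a plane-to-plane homography from point correspondences using the Direct Linear Transform; all point data are invented.

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct Linear Transform: 3x3 homography from >= 4 point correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # u*(h3.p) - h1.p = 0  and  v*(h3.p) - h2.p = 0
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, float))
    H = Vt[-1].reshape(3, 3)          # null vector = flattened homography
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply a homography to Nx2 points (homogeneous divide included)."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]
```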

  3. Evaluation of the cognitive effects of travel technique in complex real and virtual environments.

    Science.gov (United States)

    Suma, Evan A; Finkelstein, Samantha L; Reid, Myra; V Babu, Sabarish; Ulinski, Amy C; Hodges, Larry F

    2010-01-01

    We report a series of experiments conducted to investigate the effects of travel technique on information gathering and cognition in complex virtual environments. In the first experiment, participants completed a non-branching multilevel 3D maze at their own pace using either real walking or one of two virtual travel techniques. In the second experiment, we constructed a real-world maze with branching pathways and modeled an identical virtual environment. Participants explored either the real or virtual maze for a predetermined amount of time using real walking or a virtual travel technique. Our results across experiments suggest that for complex environments requiring a large number of turns, virtual travel is an acceptable substitute for real walking if the goal of the application involves learning or reasoning based on information presented in the virtual world. However, for applications that require fast, efficient navigation or travel that closely resembles real-world behavior, real walking has advantages over common joystick-based virtual travel techniques.

  4. Evaluation of two 3D virtual computer reconstructions for comparison of cleft lip and palate to normal fetal microanatomy.

    Science.gov (United States)

    Landes, Constantin A; Weichert, Frank; Geis, Philipp; Helga, Fritsch; Wagner, Mathias

    2006-03-01

    Cleft lip and palate reconstructive surgery requires thorough knowledge of normal and pathological labial, palatal, and velopharyngeal anatomy. This study compared two software algorithms and their 3D virtual anatomical reconstructions, because exact 3D micromorphological reconstruction may improve learning, reveal spatial relationships, and provide data for mathematical modeling. Transverse and frontal serial sections of the midface of 18 fetal specimens (11th to 32nd gestational week) were used for two manual segmentation approaches. The first approach used bitmap images and either Windows-based or Mac-based SURFdriver commercial software, which allowed manual contour matching, surface generation with average slice thickness, 3D triangulation, and real-time interactive viewing of the virtual 3D reconstruction. The second approach used tagged image format files and platform-independent prototypical SeViSe software developed by one of the authors (F.W.). Distended or compressed structures were dynamically transformed. Registration was automatic but allowed manual correction, as well as individual section thickness, surface generation, and interactive virtual 3D real-time viewing. SURFdriver permitted intuitive segmentation and easy manual offset correction, and the reconstruction showed complex spatial relationships in real time. However, frequent software crashes and erroneous landmarks appearing "out of the blue" required tedious manual correction. Individual section thickness, defined smoothing, and an unlimited number of structures could not be integrated. The reconstruction remained underdimensioned and not sufficiently accurate for this study's reconstruction problem. SeViSe permitted an unlimited number of structures, late addition of extra sections, quantified smoothing, and individual slice thickness; SeViSe required more elaborate work-up compared to SURFdriver, yet detailed and exact 3D reconstructions were created.

  5. Novel real-time 3D radiological mapping solution for ALARA maximization, D and D assessments and radiological management

    Energy Technology Data Exchange (ETDEWEB)

    Dubart, Philippe; Hautot, Felix [AREVA Group, 1 route de la Noue, Gif sur Yvette (France); Morichi, Massimo; Abou-Khalil, Roger [AREVA Group, Tour AREVA-1, place Jean Millier, Paris (France)

    2015-07-01

    Good management of dismantling and decontamination (D and D) operations and activities requires safety, time saving, and thorough radiological knowledge of the contaminated environment, as well as optimization of personnel dose and minimization of waste volume. At the same time, the Fukushima accident has stretched the nuclear measurement operational approach, requiring in such emergency situations fast deployment and intervention, quick analysis, and fast scenario definition. AREVA, drawing on experience from its activities carried out at Fukushima and at D and D sites, has developed a novel multi-sensor solution as part of its D and D research, approach, and method: a system providing real-time 3D photo-realistic spatial radiation distribution cartography of contaminated premises. The system may be hand-held or mounted on a mobile device (e.g., robot or drone). In this paper, we present our current development, based on SLAM (Simultaneous Localization And Mapping) technology and integrated sensors and detectors allowing simultaneous topographic and radiological (dose rate and/or spectroscopy) data acquisition. This enabling technology permits 3D gamma activity cartography in real time. (authors)

  6. Novel real-time 3D radiological mapping solution for ALARA maximization, D and D assessments and radiological management

    International Nuclear Information System (INIS)

    Dubart, Philippe; Hautot, Felix; Morichi, Massimo; Abou-Khalil, Roger

    2015-01-01

    Good management of dismantling and decontamination (D and D) operations and activities requires safety, time saving, and thorough radiological knowledge of the contaminated environment, as well as optimization of personnel dose and minimization of waste volume. At the same time, the Fukushima accident has stretched the nuclear measurement operational approach, requiring in such emergency situations fast deployment and intervention, quick analysis, and fast scenario definition. AREVA, drawing on experience from its activities carried out at Fukushima and at D and D sites, has developed a novel multi-sensor solution as part of its D and D research, approach, and method: a system providing real-time 3D photo-realistic spatial radiation distribution cartography of contaminated premises. The system may be hand-held or mounted on a mobile device (e.g., robot or drone). In this paper, we present our current development, based on SLAM (Simultaneous Localization And Mapping) technology and integrated sensors and detectors allowing simultaneous topographic and radiological (dose rate and/or spectroscopy) data acquisition. This enabling technology permits 3D gamma activity cartography in real time. (authors)

  7. A real-time 3D scanning system for pavement distortion inspection

    International Nuclear Information System (INIS)

    Li, Qingguang; Yao, Ming; Yao, Xun; Xu, Bugao

    2010-01-01

    Pavement distortions, such as rutting and shoving, are common pavement distress problems that need to be inspected and repaired in a timely manner to ensure ride quality and traffic safety. This paper introduces a real-time, low-cost inspection system devoted to detecting these distress features using high-speed 3D transverse scanning techniques. The detection principle is the dynamic generation and characterization of the 3D pavement profile based on structured-light triangulation. To improve the accuracy of the system, a multi-view coplanar scheme is employed in the calibration procedure so that more feature points can be used and distributed across the field of view of the camera. A sub-pixel line extraction method is applied for laser stripe location, comprising filtering, edge detection, and spline interpolation. The pavement transverse profile is then generated from the laser stripe curve and approximated by line segments. The second-order derivatives at the segment endpoints are used to identify feature points of possible distortions. The system can output real-time measurements and 3D visualization of rutting and shoving distress in a scanned pavement.
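
    The sub-pixel stripe localization step can be illustrated simply: for each image column, the laser line center is estimated with sub-pixel precision from the intensity distribution. The sketch below uses an intensity-weighted centroid per column on a synthetic Gaussian stripe; the paper's filtering, edge detection, and spline interpolation stages are omitted.

```python
import numpy as np

def stripe_subpixel_centers(img):
    """Per-column sub-pixel laser-stripe row via intensity-weighted centroid."""
    rows = np.arange(img.shape[0])[:, None]
    w = img.sum(axis=0)
    # Guard against empty columns to avoid division by zero.
    return (img * rows).sum(axis=0) / np.where(w > 0, w, 1.0)
```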

  8. A Comparison of Iterative 2D-3D Pose Estimation Methods for Real-Time Applications

    DEFF Research Database (Denmark)

    Grest, Daniel; Krüger, Volker; Petersen, Thomas

    2009-01-01

    This work compares iterative 2D-3D pose estimation methods for use in real-time applications. The compared methods are publicly available as C++ code. One method is part of the openCV library, namely POSIT. Because POSIT is not applicable to planar 3D point configurations, we include the planar P...

  9. Perceptual Real-Time 2D-to-3D Conversion Using Cue Fusion.

    Science.gov (United States)

    Leimkuhler, Thomas; Kellnhofer, Petr; Ritschel, Tobias; Myszkowski, Karol; Seidel, Hans-Peter

    2018-06-01

    We propose a system to infer binocular disparity from a monocular video stream in real time. Different from classic reconstruction of physical depth in computer vision, we compute perceptually plausible disparity that is numerically inaccurate but results in a very similar overall depth impression, with plausible layout, sharp edges, fine details, and agreement between luminance and disparity. We use several simple monocular cues to estimate disparity maps and confidence maps of low spatial and temporal resolution in real time. These are complemented by spatially varying, appearance-dependent, and class-specific disparity prior maps, learned from example stereo images. Scene classification selects this prior at runtime. Fusion of prior and cues is done by means of robust MAP inference on a dense spatio-temporal conditional random field with high spatial and temporal resolution. Using normal distributions allows this in constant-time, parallel per-pixel work. We compare our approach to previous 2D-to-3D conversion systems in terms of different metrics, as well as a user study, and validate our notion of perceptually plausible disparity.
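
    The "normal distributions, constant-time per pixel" observation can be made concrete: when each cue and the prior are modeled as independent Gaussians, the MAP fusion reduces to a closed-form precision-weighted average per pixel. A simplified per-pixel sketch (the paper's full CRF also couples neighboring pixels, which is not shown here):

```python
import numpy as np

def fuse_gaussian(means, variances):
    """Fuse independent Gaussian estimates (cue maps plus prior) per pixel.

    With Gaussian likelihoods, the MAP estimate is the precision-weighted
    mean, computable in constant time per pixel.
    """
    means = np.asarray(means, float)
    prec = 1.0 / np.asarray(variances, float)
    fused_var = 1.0 / prec.sum(axis=0)
    fused_mean = fused_var * (prec * means).sum(axis=0)
    return fused_mean, fused_var
```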

  10. Real-time Stereoscopic 3D for E-Robotics Learning

    Directory of Open Access Journals (Sweden)

    Richard Y. Chiou

    2011-02-01

    Full Text Available Following the design and testing of a successful 3-dimensional surveillance system, this 3D scheme has been implemented into online robotics learning at Drexel University. A real-time application, utilizing robot controllers, programmable logic controllers, and sensors, has been developed in the "MET 205 Robotics and Mechatronics" class to provide the students with a better robotic education. The integration of the 3D system allows the students to precisely program the robot and execute functions remotely. Upon the students' recommendation, polarization has been chosen as the main platform behind the 3D robotic system. Stereoscopic calculations are carried out for calibration purposes to display the images with the highest possible comfort level and 3D effect. The calculations are further validated by comparing the results with student evaluations. Due to the Internet-based feature, multiple clients have the opportunity to perform online automation development. In the future, students in different universities will be able to cross-control robotic components of different types around the world. With the development of this 3D E-Robotics interface, automation resources and robotic learning can be shared and enriched regardless of location.

  11. Rapid prototyping 3D virtual world interfaces within a virtual factory environment

    Science.gov (United States)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a software bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol, we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.

  12. 3D vision in a virtual reality robotics environment

    Science.gov (United States)

    Schutz, Christian L.; Natonek, Emerico; Baur, Charles; Hugli, Heinz

    1996-12-01

    Virtual reality robotics (VRR) needs sensing feedback from the real environment. To show how advanced 3D vision provides new perspectives to fulfill these needs, this paper presents an architecture and system that integrates hybrid 3D vision and VRR and reports about experiments and results. The first section discusses the advantages of virtual reality in robotics, the potential of a 3D vision system in VRR and the contribution of a knowledge database, robust control and the combination of intensity and range imaging to build such a system. Section two presents the different modules of a hybrid 3D vision architecture based on hypothesis generation and verification. Section three addresses the problem of the recognition of complex, free-form 3D objects and shows how and why the newer approaches based on geometric matching solve the problem. This free-form matching can be efficiently integrated in a VRR system as a hypothesis generation knowledge-based 3D vision system. In the fourth part, we introduce the hypothesis verification based on intensity images which checks object pose and texture. Finally, we show how this system has been implemented and operates in a practical VRR environment used for an assembly task.

  13. 3D Interactions between Virtual Worlds and Real Life in an E-Learning Community

    Directory of Open Access Journals (Sweden)

    Ulrike Lucke

    2011-01-01

    Full Text Available Virtual worlds have become an appealing and fascinating component of today's internet. In particular, a growing number of educational providers see potential for e-learning in such new platforms. Unfortunately, most of the environments and processes implemented so far do not go beyond a virtual modelling of real-world scenarios. This paper shows that Second Life can be more than just another learning platform. A flexible and bidirectional link between reality and the virtual world enables synchronous and seamless interaction between users and devices across both worlds. The primary advantages of this interconnection are a spatial extension of face-to-face and online learning scenarios and a closer relationship between virtual learners and the real world.

  14. Mutating the realities in fashion design: virtual clothing for 3D avatars

    OpenAIRE

    Taylor, Andrew; Unver, Ertu

    2007-01-01

    “My fantasy is to be Uma Thurman in Kill Bill…and now I can… I’d pay $10 for her yellow jumpsuit and sword moves and I’m sure other people would too…” Hundreds and thousands of humans living in different time zones around the world are choosing to re-create and express themselves as three-dimensional avatars in 3D virtual online worlds: An avatar is defined as an interactive 3D image or character, representing a user in a multi-user virtual world/virtual reality space. 3D virtual online wo...

  15. 3D virtual table in anatomy education

    DEFF Research Database (Denmark)

    Dahl, Mads Ronald; Simonsen, Eivind Ortind

    The ‘Anatomage’ is a 3D virtual human anatomy table, with touchscreen functionality, where it is possible to upload CT-scans and digital. Learning the human anatomy terminology requires time, a very good memory, an anatomy atlas, books, and lectures. Learning the 3-dimensional structure, connections...

  16. An active robot vision system for real-time 3-D structure recovery

    Energy Technology Data Exchange (ETDEWEB)

    Juvin, D. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. d'Electronique et d'Instrumentation Nucleaire; Boukir, S.; Chaumette, F.; Bouthemy, P. [Rennes-1 Univ., 35 (France)

    1993-10-01

    This paper presents an active approach to the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is achieved by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders; therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board, and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.

  17. An active robot vision system for real-time 3-D structure recovery

    International Nuclear Information System (INIS)

    Juvin, D.

    1993-01-01

    This paper presents an active approach to the task of computing the 3-D structure of a nuclear plant environment from an image sequence, more precisely the recovery of the 3-D structure of cylindrical objects. Active vision is achieved by computing adequate camera motions using image-based control laws. This approach requires real-time tracking of the limbs of the cylinders; therefore, an original matching approach, which relies on an algorithm for determining moving edges, is proposed. This method is distinguished by its robustness and its ease of implementation. It has been implemented on a parallel image processing board, and real-time performance has been achieved. The whole scheme has been successfully validated in an experimental set-up.
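
    The image-based control law mentioned in this record is classically written as v = -λ·L⁺(s - s*), where L is the interaction matrix, s the current image features, and s* the desired ones. A minimal sketch of that standard formulation (the function name and the toy interaction matrix are illustrative, not taken from the paper):

```python
import numpy as np

def visual_servo_velocity(L, s, s_star, lam=0.5):
    """Classic image-based visual servoing law: camera velocity
    v = -lambda * pinv(L) @ (s - s_star), which drives the image
    features s toward the desired configuration s_star."""
    return -lam * np.linalg.pinv(L) @ (np.asarray(s, float) - np.asarray(s_star, float))

# Toy example: with an identity interaction matrix, the commanded
# velocity simply points from the current feature toward the goal.
v = visual_servo_velocity(np.eye(2), [1.0, 0.0], [0.0, 0.0], lam=0.5)
```

In a real system L would be the 2k×6 interaction matrix of the tracked cylinder limbs and v a 6-DOF camera twist; the pseudo-inverse handles the redundant or rank-deficient cases.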

  18. Realistic 3D Terrain Roaming and Real-Time Flight Simulation

    Science.gov (United States)

    Que, Xiang; Liu, Gang; He, Zhenwen; Qi, Guang

    2014-12-01

    This paper presents an integrated method, providing access to the current flight status and a dynamically scanned view of the topography, to enhance interaction during terrain roaming and real-time flight simulation. An algorithm integrating digital elevation model and digital orthophoto map data forms the basis of our approach to building a realistic 3D terrain scene. A new technique using render-to-texture and a head-up display generates the navigation pane. In the flight simulation, in order to eliminate flying "jump", we employ multidimensional linear interpolation to adjust the camera parameters dynamically and smoothly. Meanwhile, based on the principle of scanning laser imaging, we render pseudo-colour figures by scanning the topography in different directions according to the real-time flight status. Simulation results demonstrate that the proposed algorithm is promising for applications and that the method improves the visual effect and enhances dynamic interaction during real-time flight.
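
    The "jump"-elimination step described in this record amounts to linearly interpolating every camera parameter between sampled flight states. A minimal sketch of that idea (the function names and the parameter layout are illustrative, not taken from the paper):

```python
import numpy as np

def interpolate_camera(params_a, params_b, t):
    """Linearly interpolate each camera parameter (e.g. position,
    heading, pitch) between two sampled flight states; t in [0, 1]."""
    a, b = np.asarray(params_a, float), np.asarray(params_b, float)
    return (1.0 - t) * a + t * b

def smooth_path(states, steps_between=4):
    """Densify a discrete flight path by inserting interpolated
    camera states between each pair of recorded states."""
    out = []
    for a, b in zip(states[:-1], states[1:]):
        for k in range(steps_between):
            out.append(interpolate_camera(a, b, k / steps_between))
    out.append(np.asarray(states[-1], float))
    return np.array(out)
```

For example, `smooth_path([[0, 0, 100], [10, 0, 100]], 4)` yields five states whose midpoint sits at x = 5, so the rendered camera glides instead of jumping between telemetry samples.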

  19. Real time 3D structural and Doppler OCT imaging on graphics processing units

    Science.gov (United States)

    Sylwestrzak, Marcin; Szlag, Daniel; Szkulmowski, Maciej; Gorczyńska, Iwona; Bukowska, Danuta; Wojtkowski, Maciej; Targowski, Piotr

    2013-03-01

    In this report, the application of graphics processing unit (GPU) programming for real-time 3D Fourier-domain optical coherence tomography (FdOCT) imaging, with Doppler algorithms implemented for visualization of flow in capillary vessels, is presented. In general, the time needed to process FdOCT data on the computer's main processor (CPU) constitutes the main limitation for real-time imaging, and employing additional algorithms such as Doppler OCT analysis makes this processing even more time consuming. Recently developed GPUs, which offer very high computational power, provide a solution to this problem: exploiting them for massively parallel data processing allows real-time imaging in FdOCT. The presented software for structural and Doppler OCT performs the whole processing and visualization of 2D data consisting of 2000 A-scans generated from 2048-pixel spectra at a frame rate of about 120 fps. 3D imaging in the same mode, with volume data built of 220 × 100 A-scans, is performed at a rate of about 8 frames per second. In this paper the software architecture, the organization of the threads, and the optimizations applied are shown. For illustration, screenshots recorded during real-time imaging of a phantom (a homogeneous water solution of Intralipid in a glass capillary) and of the human eye in vivo are presented.
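
    The Doppler processing this record describes reduces to Fourier-transforming each spectrum into a complex A-scan and taking the phase difference between adjacent A-scans. A CPU-side NumPy sketch of that core step (the paper's GPU/CUDA pipeline is not reproduced here; names are illustrative):

```python
import numpy as np

def doppler_oct(spectra):
    """spectra: (n_ascans, n_pixels) real-valued FdOCT fringe data.
    Returns a structural image (log magnitude of the A-scans) and a
    Doppler phase map (phase difference between adjacent A-scans),
    which is proportional to axial flow velocity."""
    # FFT each spectrum; keep the positive-frequency half (depth axis).
    ascans = np.fft.fft(spectra, axis=1)[:, : spectra.shape[1] // 2]
    structural = 20.0 * np.log10(np.abs(ascans) + 1e-12)
    # Phase of the lag-1 autocorrelation between consecutive A-scans.
    doppler = np.angle(ascans[1:] * np.conj(ascans[:-1]))
    return structural, doppler
```

A moving reflector shows up as a bright pixel in the structural image whose Doppler phase equals the fringe phase shift accumulated between A-scans.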

  20. USE OF AUGMENTED REALITY TO SIMULATE ROOM DECORATION IN REAL TIME

    Directory of Open Access Journals (Sweden)

    Ulva Erida Nur Rochmah

    2016-12-01

    Full Text Available Decorating a room is an activity that takes a lot of time and effort, especially if the objects involved are large and heavy. It would be inconvenient to have to drag every object around to find the right location. This can be avoided by using an Augmented Reality-based application. Augmented Reality is a technology that combines the real world and the virtual world by displaying virtual objects in the real world in real time. The main purpose of this research is to create an Android application using Augmented Reality that can replace real objects with virtual 3D objects, making it easier for the user to simulate the decoration of a room. The application works by scanning a marker printed on a sheet of paper. Once the marker is detected, 3D furniture objects are displayed in the application's camera view. The user can move, rotate, and resize the objects to simulate the room layout. The result of this research is an Android application based on Augmented Reality that can be used to simulate room decoration in real time. Keywords: augmented reality, decoration, interior, marker

  1. Virtual reality myringotomy simulation with real-time deformation: development and validity testing.

    Science.gov (United States)

    Ho, Andrew K; Alsaffar, Hussain; Doyle, Philip C; Ladak, Hanif M; Agrawal, Sumit K

    2012-08-01

    Surgical simulation is becoming an increasingly common training tool in residency programs. The first objective was to implement real-time soft-tissue deformation and cutting into a virtual reality myringotomy simulator. The second objective was to test the various implemented incision algorithms to determine which most accurately represents the tympanic membrane during myringotomy. Descriptive and face-validity testing. A deformable tympanic membrane was developed, and three soft-tissue cutting algorithms were successfully implemented into the virtual reality myringotomy simulator. The algorithms included element removal, direction prediction, and Delaunay cutting. The simulator was stable and capable of running in real time on inexpensive hardware. A face-validity study was then carried out using a validated questionnaire given to eight otolaryngologists and four senior otolaryngology residents. Each participant was given an adaptation period on the simulator, was blinded to the algorithm being used, and was presented the three algorithms in a randomized order. A virtual reality myringotomy simulator with real-time soft-tissue deformation and cutting was successfully developed. The simulator was stable, ran in real time on inexpensive hardware, and incorporated haptic feedback and stereoscopic vision. The Delaunay cutting algorithm was found to be the most realistic algorithm for representing the incision during myringotomy. A virtual reality myringotomy simulator is being developed and now integrates a real-time deformable tympanic membrane that appears to have face validity. Further development and validation studies are necessary before the simulator can be studied with respect to training efficacy and clinical impact. Copyright © 2012 The American Laryngological, Rhinological, and Otological Society, Inc.

  2. Multithreaded real-time 3D image processing software architecture and implementation

    Science.gov (United States)

    Ramachandra, Vikas; Atanassov, Kalin; Aleksic, Milivoje; Goma, Sergio R.

    2011-03-01

    Recently, 3D displays and videos have generated a lot of interest in the consumer electronics industry. To make 3D capture and playback popular and practical, a user-friendly playback interface is desirable. Towards this end, we built a real-time software 3D video player. The 3D video player displays user-captured 3D videos, provides various 3D-specific image processing functions, and ensures a pleasant viewing experience. Moreover, the player enables user interactivity by providing digital zoom and pan functionalities. This real-time 3D player was implemented on the GPU using CUDA and OpenGL. The player provides user-interactive 3D video playback. Stereo images are first read by the player from a fast drive and rectified. Further processing of the images determines the optimal convergence point in the 3D scene to reduce eye strain. The rationale for this convergence point selection takes into account scene depth and display geometry. The first step in this processing chain is identifying keypoints by detecting vertical edges within the left image. Regions surrounding reliable keypoints are then located on the right image through the use of block matching. The difference in position between the corresponding regions in the left and right images is then used to calculate disparity. The extrema of the disparity histogram give the scene disparity range. The left and right images are shifted based upon the calculated range, in order to place the desired region of the 3D scene at convergence. All the above computations are performed on one CPU thread which calls CUDA functions. Image upsampling and shifting is performed in response to user zoom and pan. The player also consists of a CPU display thread, which uses OpenGL rendering (quad buffers). This also gathers user input for digital zoom and pan and sends them to the processing thread.
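
    The convergence-point selection described in this record can be sketched as block matching around keypoints, a disparity histogram, and a shift by the mid-range disparity. A simplified CPU sketch (the paper's CUDA implementation and exact cost function are not reproduced; names and the SAD cost are assumptions):

```python
import numpy as np

def disparity_range_shift(left, right, keypoints, block=8, max_d=32):
    """For each keypoint (row, col) in the left image, find the best
    horizontal match in the right image by SAD block matching, take the
    extrema of the resulting disparities, and return the shift that
    places the mid-range depth of the scene at convergence."""
    disparities = []
    h = block // 2
    for r, c in keypoints:
        patch = left[r - h:r + h, c - h:c + h]
        best_d, best_cost = 0, np.inf
        for d in range(max_d):
            if c - h - d < 0:
                break  # candidate window would leave the image
            cand = right[r - h:r + h, c - h - d:c + h - d]
            cost = np.abs(patch - cand).sum()  # sum of absolute differences
            if cost < best_cost:
                best_cost, best_d = cost, d
        disparities.append(best_d)
    return (min(disparities) + max(disparities)) // 2
```

Shifting both views horizontally by half of this value (in opposite directions) centres the disparity range around zero, which is the eye-strain-reduction step the player performs.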

  3. Real-Time 3d Reconstruction from Images Taken from AN Uav

    Science.gov (United States)

    Zingoni, A.; Diani, M.; Corsini, G.; Masini, A.

    2015-08-01

    We designed a method for creating 3D models of objects and areas from two aerial images acquired from a UAV. The models are generated automatically and in real time, and consist of dense, true-colour reconstructions of the considered areas, which give the operator the impression of being physically present within the scene. The proposed method only needs a cheap compact camera mounted on a small UAV. No additional instrumentation is necessary, so the costs are very limited. The method consists of two main parts: the design of the acquisition system and the 3D reconstruction algorithm. In the first part, the choices for the acquisition geometry and for the camera parameters are optimized in order to yield the best performance. In the second part, a reconstruction algorithm extracts the 3D model from the two acquired images, maximizing the accuracy under the real-time constraint. A test was performed in monitoring a construction yard, obtaining very promising results. Highly realistic and easy-to-interpret 3D models of objects and areas of interest were produced in less than one second, with an accuracy of about 0.5 m. For its characteristics, the designed method is suitable for video surveillance, remote sensing and monitoring, especially in those applications that require intuitive and reliable information quickly, such as disaster monitoring, search and rescue, and area surveillance.
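
    Recovering 3D points from two calibrated views, as in the reconstruction step above, is classically done by linear (DLT) triangulation. A sketch of that standard construction (the paper's actual algorithm may differ; this is the textbook formulation with normalized image coordinates):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one scene point from two views.
    P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) image
    coordinates of the same point in each view. Each observation
    contributes two homogeneous linear constraints on the 3D point;
    the SVD null vector solves the stacked system in least squares."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize
```

With `P1 = [I | 0]` and `P2 = [I | t]` for a 1 m baseline, a point at 5 m depth is recovered exactly from its two projections; a dense model repeats this for every matched pixel.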

  4. NEDE: an open-source scripting suite for developing experiments in 3D virtual environments.

    Science.gov (United States)

    Jangraw, David C; Johri, Ansh; Gribetz, Meron; Sajda, Paul

    2014-09-30

    As neuroscientists endeavor to understand the brain's response to ecologically valid scenarios, many are leaving behind hyper-controlled paradigms in favor of more realistic ones. This movement has made the use of 3D rendering software an increasingly compelling option. However, mastering such software and scripting rigorous experiments requires a daunting amount of time and effort. To reduce these startup costs and make virtual environment studies more accessible to researchers, we demonstrate a naturalistic experimental design environment (NEDE) that allows experimenters to present realistic virtual stimuli while still providing tight control over the subject's experience. NEDE is a suite of open-source scripts built on the widely used Unity3D game development software, giving experimenters access to powerful rendering tools while interfacing with eye tracking and EEG, randomizing stimuli, and providing custom task prompts. Researchers using NEDE can present a dynamic 3D virtual environment in which randomized stimulus objects can be placed, allowing subjects to explore in search of these objects. NEDE interfaces with a research-grade eye tracker in real-time to maintain precise timing records and sync with EEG or other recording modalities. Python offers an alternative for experienced programmers who feel comfortable mastering and integrating the various toolboxes available. NEDE combines many of these capabilities with an easy-to-use interface and, through Unity's extensive user base, a much more substantial body of assets and tutorials. Our flexible, open-source experimental design system lowers the barrier to entry for neuroscientists interested in developing experiments in realistic virtual environments. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. The Idaho Virtualization Laboratory 3D Pipeline

    Directory of Open Access Journals (Sweden)

    Nicholas A. Holmer

    2014-05-01

    Full Text Available Three-dimensional (3D) virtualization and visualization is an important component of industry, art, museum curation and cultural heritage, yet the step-by-step process of 3D virtualization has been little discussed. Here we review the Idaho Virtualization Laboratory’s (IVL) process of virtualizing a cultural heritage item (artifact) from start to finish. Each step is thoroughly explained and illustrated, including how the object and its metadata are digitally preserved and ultimately distributed to the world.

  6. Strain measurement of abdominal aortic aneurysm with real-time 3D ultrasound speckle tracking.

    Science.gov (United States)

    Bihari, P; Shelke, A; Nwe, T H; Mularczyk, M; Nelson, K; Schmandra, T; Knez, P; Schmitz-Rixen, T

    2013-04-01

    Abdominal aortic aneurysm rupture is caused by mechanical vascular tissue failure. Although mechanical properties within the aneurysm vary, currently available ultrasound methods assess only one cross-sectional segment of the aorta. This study aims to establish real-time 3-dimensional (3D) speckle tracking ultrasound to explore local displacement and strain parameters of the whole abdominal aortic aneurysm. Validation was performed on a silicone aneurysm model, perfused in a pulsatile artificial circulatory system. Wall motion of the silicone model was measured simultaneously with a commercial real-time 3D speckle tracking ultrasound system and either with laser-scan micrometry or with video photogrammetry. After validation, 3D ultrasound data were collected from abdominal aortic aneurysms of five patients and displacement and strain parameters were analysed. Displacement parameters measured in vitro by 3D ultrasound and laser scan micrometer or video analysis were significantly correlated at pulse pressures between 40 and 80 mmHg. Strong local differences in displacement and strain were identified within the aortic aneurysms of patients. Local wall strain of the whole abdominal aortic aneurysm can be analysed in vivo with real-time 3D ultrasound speckle tracking imaging, offering the prospect of individual non-invasive rupture risk analysis of abdominal aortic aneurysms. Copyright © 2013 European Society for Vascular Surgery. Published by Elsevier Ltd. All rights reserved.
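
    Local strain from tracked speckle positions can be illustrated with a simple engineering-strain computation on a closed contour of material points. This is a generic sketch of the principle, not the commercial speckle-tracking system's algorithm:

```python
import numpy as np

def circumferential_strain(points_ref, points_def):
    """Engineering strain of a closed contour of tracked speckle
    points: relative change in perimeter between a reference phase
    (e.g. diastole) and a deformed phase (e.g. systole).
    points_*: (n, 3) arrays of the same material points at each phase."""
    def perimeter(p):
        p = np.asarray(p, float)
        # Segment lengths between consecutive points, wrapping around.
        seg = np.linalg.norm(np.roll(p, -1, axis=0) - p, axis=1)
        return seg.sum()
    L0, L = perimeter(points_ref), perimeter(points_def)
    return (L - L0) / L0
```

Evaluating this on sub-contours of the tracked wall, rather than the whole circumference, gives the local strain map used to spot mechanically weak regions.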

  7. 3DPublish: a web solution for creating dynamic 3D virtual museums

    Directory of Open Access Journals (Sweden)

    P. Aguirrezabal

    2012-11-01

    the 3DPublish application, which represents an alternative to these two static solutions, since it offers the possibility of dynamically managing a 3D scenario (real or virtual) and the works of art that make up the exhibition. 3DPublish also provides the user with a realistic experience across different exhibitions, using value-adding methods such as guided virtual tours or storytelling techniques. 3DPublish will ease the daily tasks of museum curators and improve the final result of 3D virtual museum exhibitions. This article also presents the application case of the Sala Kubo in San Sebastián (SPAIN) as an example use case of 3DPublish.

  8. An Overview on Base Real-Time Hard Shadow Techniques in Virtual Environments

    Directory of Open Access Journals (Sweden)

    Mohd Shahrizal Sunar

    2012-03-01

    Full Text Available Shadows are essential for creating a realistic scene in virtual environments, and the variety of shadow techniques motivated us to prepare an overview of all the base techniques. Shadow generation divides broadly into non-real-time and real-time techniques. Among the non-real-time techniques, ray tracing, ray casting and radiosity are well known and are described in depth. Radiosity is used to create very realistic shadows in non-real-time scenes; although the traditional radiosity algorithm is difficult to implement, we have proposed a simple one, whose pseudo code is easier to understand and implement. Ray tracing is used to prevent collisions of moving objects. Projection shadows, shadow volumes and shadow mapping are used to create real-time shadows in virtual environments. We have used projection shadows for static objects casting shadows on flat surfaces, and shadow volumes to create accurate shadows with sharp outlines. Shadow mapping, which is the basis of most recent techniques, is reconstructed; the reconstructed algorithm suggests ideas for proposing further algorithms based on shadow mapping.
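
    The projection-shadow technique surveyed above flattens geometry onto a ground plane with a single 4x4 matrix built from the plane equation and the light position. A sketch of that classic construction (the matrix form is the standard one; the function name is illustrative):

```python
import numpy as np

def planar_shadow_matrix(plane, light):
    """4x4 matrix that projects geometry onto the plane
    a*x + b*y + c*z + d = 0, as seen from a positional light
    (lx, ly, lz, 1): M = (plane . light) * I - outer(light, plane).
    Multiplying a homogeneous vertex by M and dehomogenizing yields
    the point where the light ray through that vertex hits the plane."""
    plane = np.asarray(plane, float)
    light = np.asarray(light, float)
    return (plane @ light) * np.eye(4) - np.outer(light, plane)
```

For the ground plane y = 0 and a light at (0, 10, 0), a vertex at (1, 5, 0) projects to (2, 0, 0), exactly where the ray from the light through the vertex crosses the floor; rendering the flattened geometry in black gives the hard shadow.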

  9. 3D Display of Spacecraft Dynamics Using Real Telemetry

    Directory of Open Access Journals (Sweden)

    Sanguk Lee

    2002-12-01

    Full Text Available 3D display of spacecraft motion using telemetry data received from a satellite in real time is described. Telemetry data are converted into the appropriate form for 3D display by the real-time preprocessor; stored playback telemetry data can also be processed for display. Displaying spacecraft motion in 3D from real telemetry data provides intuitive comprehension of spacecraft dynamics.
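
    Converting attitude telemetry into a renderable orientation is typically a quaternion-to-rotation-matrix step. A generic sketch (the paper's preprocessor format is not specified in this record; the (w, x, y, z) quaternion convention is an assumption):

```python
import numpy as np

def quat_to_matrix(q):
    """Convert an attitude quaternion (w, x, y, z) from telemetry
    into a 3x3 rotation matrix for rendering the spacecraft body
    axes; the quaternion is normalized first."""
    w, x, y, z = np.asarray(q, float) / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
```

A 90° rotation about the z-axis, q = (√2/2, 0, 0, √2/2), maps the body x-axis onto the world y-axis, which is the kind of transform a 3D display applies to the spacecraft model each telemetry frame.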

  10. Network Dynamics with BrainX3: A Large-Scale Simulation of the Human Brain Network with Real-Time Interaction

    OpenAIRE

    Xerxes D. Arsiwalla; Riccardo Zucca; Alberto Betella; Enrique Martinez; David Dalmazzo; Pedro Omedas; Gustavo Deco; Paul F.M.J. Verschure

    2015-01-01

    BrainX3 is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX3 in real-time by perturbing brain regions with transient stimula...

  11. Network dynamics with BrainX3: a large-scale simulation of the human brain network with real-time interaction

    OpenAIRE

    Arsiwalla, Xerxes D.; Zucca, Riccardo; Betella, Alberto; Martínez, Enrique, 1961-; Dalmazzo, David; Omedas, Pedro; Deco, Gustavo; Verschure, Paul F. M. J.

    2015-01-01

    BrainX3 is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX3 in real-time by perturbing brain regions with transient stimula...

  12. Combining 3D structure of real video and synthetic objects

    Science.gov (United States)

    Kim, Man-Bae; Song, Mun-Sup; Kim, Do-Kyoon

    1998-04-01

    This paper presents a new approach to combining real video and synthetic objects. The purpose of this work is to apply the proposed technology in fields such as advanced animation, virtual reality, and games. Computer graphics has long been used in these fields. Recently, some applications have added real video to graphic scenes for the purpose of augmenting the realism that computer graphics lacks. This approach, called augmented or mixed reality, can produce a more realistic environment than the use of computer graphics alone. Our approach differs from virtual reality and augmented reality in that computer-generated graphic objects are combined with 3D structure extracted from monocular image sequences. The extraction of the 3D structure requires the estimation of 3D depth followed by the construction of a height map; graphic objects are then combined with the height map. The realization of our proposed approach is carried out in the following steps: (1) We derive 3D structure from test image sequences, which requires the estimation of depth and the construction of a height map. Due to the contents of the test sequence, the height map represents the 3D structure. (2) The height map is modeled by Delaunay triangulation or Bezier surfaces, and each planar surface is texture-mapped. (3) Finally, graphic objects are combined with the height map. Because the 3D structure of the height map is already known, step (3) is easily manipulated. Following this procedure, we produced an animation video demonstrating the combination of the 3D structure and graphic models. Users can navigate the realistic 3D world whose associated image is rendered on the display monitor.
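
    Anchoring graphic objects on the recovered height map, as in step (3) above, needs only a height lookup at continuous coordinates. A minimal bilinear-interpolation sketch (function names are illustrative, not from the paper):

```python
import numpy as np

def height_at(height_map, x, y):
    """Bilinearly interpolate the recovered height map at continuous
    (x, y) so a synthetic object can sit exactly on the 3D structure."""
    h = np.asarray(height_map, float)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    x1, y1 = min(x0 + 1, h.shape[1] - 1), min(y0 + 1, h.shape[0] - 1)
    top = (1 - fx) * h[y0, x0] + fx * h[y0, x1]
    bot = (1 - fx) * h[y1, x0] + fx * h[y1, x1]
    return (1 - fy) * top + fy * bot

def place_object(height_map, x, y, object_height):
    """World-space anchor point of a graphic object standing on the
    height map at (x, y), raised by its own height offset."""
    return np.array([x, y, height_at(height_map, x, y) + object_height])
```

Because the terrain height is known everywhere, compositing reduces to placing each synthetic model at `place_object(...)` and rendering it with the same camera used for the texture-mapped terrain.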

  13. Development and application of visual support module for remote operator in 3D virtual environment

    International Nuclear Information System (INIS)

    Choi, Kyung Hyun; Cho, Soo Jeong; Yang, Kyung Boo; Bae, Chang Hyun

    2006-02-01

    In this research, a 3D graphic environment including a visual support module was developed for remote operation. The real operation environment was built around an experimental robot, and an identical virtual model was developed. Well-designed virtual models can be used to retrieve the conditions necessary for developing the devices and processes. The integration of the 3D virtual models, the experimental operation environment, and the visual support module was used to evaluate operation efficiency and accuracy, comparing different methods such as monitor images alone versus the visual support module.

  14. Development and application of visual support module for remote operator in 3D virtual environment

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Kyung Hyun; Cho, Soo Jeong; Yang, Kyung Boo [Cheju Nat. Univ., Jeju (Korea, Republic of); Bae, Chang Hyun [Pusan Nat. Univ., Busan (Korea, Republic of)

    2006-02-15

    In this research, a 3D graphic environment including a visual support module was developed for remote operation. The real operation environment was built around an experimental robot, and an identical virtual model was developed. Well-designed virtual models can be used to retrieve the conditions necessary for developing the devices and processes. The integration of the 3D virtual models, the experimental operation environment, and the visual support module was used to evaluate operation efficiency and accuracy, comparing different methods such as monitor images alone versus the visual support module.

  15. Real-time 3D radiation risk assessment supporting simulation of work in nuclear environments

    International Nuclear Information System (INIS)

    Szoke, I; Louka, M N; Bryntesen, T R; Bratteli, J; Edvardsen, S T; RøEitrheim, K K; Bodor, K

    2014-01-01

    This paper describes the latest developments at the Institute for Energy Technology (IFE) in Norway, in the field of real-time 3D (three-dimensional) radiation risk assessment for the support of work simulation in nuclear environments. 3D computer simulation can greatly facilitate efficient work planning, briefing, and training of workers. It can also support communication within and between work teams, and with advisors, regulators, the media and public, at all the stages of a nuclear installation’s lifecycle. Furthermore, it is also a beneficial tool for reviewing current work practices in order to identify possible gaps in procedures, as well as to support the updating of international recommendations, dissemination of experience, and education of the current and future generation of workers. IFE has been involved in research and development into the application of 3D computer simulation and virtual reality (VR) technology to support work in radiological environments in the nuclear sector since the mid 1990s. During this process, two significant software tools have been developed, the VRdose system and the Halden Planner, and a number of publications have been produced to contribute to improving the safety culture in the nuclear industry. This paper describes the radiation risk assessment techniques applied in earlier versions of the VRdose system and the Halden Planner, for visualising radiation fields and calculating dose, and presents new developments towards implementing a flexible and up-to-date dosimetric package in these 3D software tools, based on new developments in the field of radiation protection. The latest versions of these 3D tools are capable of more accurate risk estimation, permit more flexibility via a range of user choices, and are applicable to a wider range of irradiation situations than their predecessors. (paper)

  16. Real-time 3D radiation risk assessment supporting simulation of work in nuclear environments.

    Science.gov (United States)

    Szőke, I; Louka, M N; Bryntesen, T R; Bratteli, J; Edvardsen, S T; RøEitrheim, K K; Bodor, K

    2014-06-01

    This paper describes the latest developments at the Institute for Energy Technology (IFE) in Norway, in the field of real-time 3D (three-dimensional) radiation risk assessment for the support of work simulation in nuclear environments. 3D computer simulation can greatly facilitate efficient work planning, briefing, and training of workers. It can also support communication within and between work teams, and with advisors, regulators, the media and public, at all the stages of a nuclear installation's lifecycle. Furthermore, it is also a beneficial tool for reviewing current work practices in order to identify possible gaps in procedures, as well as to support the updating of international recommendations, dissemination of experience, and education of the current and future generation of workers. IFE has been involved in research and development into the application of 3D computer simulation and virtual reality (VR) technology to support work in radiological environments in the nuclear sector since the mid 1990s. During this process, two significant software tools have been developed, the VRdose system and the Halden Planner, and a number of publications have been produced to contribute to improving the safety culture in the nuclear industry. This paper describes the radiation risk assessment techniques applied in earlier versions of the VRdose system and the Halden Planner, for visualising radiation fields and calculating dose, and presents new developments towards implementing a flexible and up-to-date dosimetric package in these 3D software tools, based on new developments in the field of radiation protection. The latest versions of these 3D tools are capable of more accurate risk estimation, permit more flexibility via a range of user choices, and are applicable to a wider range of irradiation situations than their predecessors.
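
    A toy version of the dose-assessment idea behind such planning tools: sum inverse-square contributions from point sources at a worker's position, then integrate along a planned trajectory. This is purely illustrative and far simpler than the dosimetry in the VRdose system or the Halden Planner (no attenuation, scatter, or source geometry; all names and units are assumptions):

```python
import numpy as np

def dose_rate_point_sources(sources, position):
    """Estimated dose rate at `position` from point sources, each given
    as (x, y, z, strength) where strength is the dose rate at 1 m
    (e.g. uSv/h at 1 m), using only the inverse-square law."""
    total = 0.0
    p = np.asarray(position, float)
    for sx, sy, sz, s in sources:
        r2 = ((p - np.array([sx, sy, sz])) ** 2).sum()
        total += s / max(r2, 1e-6)  # clamp to avoid the singularity at the source
    return total

def accumulated_dose(path, sources, dt_hours):
    """Integrate dose along a worker trajectory sampled every dt_hours."""
    return sum(dose_rate_point_sources(sources, p) for p in path) * dt_hours
```

Comparing `accumulated_dose` for alternative routes through the same source field is, in miniature, the kind of what-if question a 3D work-planning simulator answers.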

  17. Mobile viewer system for virtual 3D space using infrared LED point markers and camera

    Science.gov (United States)

    Sakamoto, Kunio; Taneji, Shoto

    2006-09-01

    The authors have developed a 3D workspace system using collaborative imaging devices, in which a stereoscopic display projects 3D information. In this paper, we describe the position-detecting system for a see-through 3D viewer. 3D display is a useful technology for virtual reality, mixed reality and augmented reality. We have researched spatial imaging and interaction systems, and have previously proposed 3D displays using a slit as a parallax barrier, a lenticular screen and holographic optical elements (HOEs) for displaying active images 1)2)3)4). The purpose of this paper is to propose an interactive system using these 3D imaging technologies. The observer can view virtual images in the real world while watching the screen of a see-through 3D viewer. The goal of our research is to build a display system in which, when users see the real world through the mobile viewer, they are presented with virtual 3D images floating in the air, which they can touch and interact with, much as children play with modelling clay. The key technologies of this system are the position recognition system and the spatial imaging display. The 3D images are presented by the improved parallax-barrier 3D display. Here the authors discuss the measuring method for the mobile viewer using infrared LED point markers and a camera in the 3D workspace (augmented reality world). The authors show the geometric analysis of the proposed measuring method, which is the simplest method, using a single camera rather than a stereo camera, and the results of our viewer system.

  18. Microwave ablation assisted by a real-time virtual navigation system for hepatocellular carcinoma undetectable by conventional ultrasonography

    International Nuclear Information System (INIS)

    Liu Fangyi; Yu Xiaoling; Liang Ping; Cheng Zhigang; Han Zhiyu; Dong Baowei; Zhang Xiaohong

    2012-01-01

    Objectives: To evaluate the efficiency and feasibility of microwave (MW) ablation assisted by a real-time virtual navigation system for hepatocellular carcinoma (HCC) undetectable by conventional ultrasonography. Methods: 18 patients with 18 HCC nodules (undetectable on conventional US but detectable by intravenous contrast-enhanced CT or MRI) were enrolled in this study. Before MW ablation, US images and MRI or CT images were synchronized using internal markers at the optimal point of inspiration. Thereafter, MW ablation was performed under real-time virtual navigation system guidance. Therapeutic efficacy was assessed by contrast-enhanced imaging after the treatment. Results: The target HCC nodules could be detected in fusion images in all patients. The time required for image fusion was 8–30 min (mean, 13.3 ± 5.7 min). 17 nodules were successfully ablated according to contrast-enhanced imaging 1 month after ablation. The technique effectiveness rate was 94.44% (17/18). The follow-up time was 3–12 months (median, 6 months) in our study. No severe complications occurred. No local recurrence was observed in any patient. Conclusions: MW ablation assisted by a real-time virtual navigation system is a feasible and efficient treatment for patients with HCC undetectable by conventional ultrasonography.

  19. Near-real time 3D probabilistic earthquakes locations at Mt. Etna volcano

    Science.gov (United States)

    Barberi, G.; D'Agostino, M.; Mostaccio, A.; Patane', D.; Tuve', T.

    2012-04-01

    Automatic procedures for locating earthquakes in quasi-real time must provide a good estimate of an earthquake's location within a few seconds after the event is first detected, and are strongly needed for seismic warning systems. The reliability of an automatic location algorithm is influenced by several factors, such as errors in picking seismic phases, network geometry, and velocity model uncertainties. On Mt. Etna, the seismic network is managed by INGV, and the quasi-real time earthquake locations are performed using an automatic-picking algorithm based on short-term-average to long-term-average ratios (STA/LTA) calculated from an approximate squared envelope function of the seismogram, which furnishes a list of P-wave arrival times, and the location algorithm Hypoellipse with a 1D velocity model. The main purpose of this work is to investigate the performance of a different automatic procedure to improve the quasi-real time earthquake locations. In fact, as automatic data processing may be affected by outliers (wrong picks), the use of traditional earthquake location techniques based on a least-squares misfit function (L2 norm) often yields unstable and unreliable solutions. Moreover, on Mt. Etna, the 1D model is often unable to represent the complex structure of the volcano (in particular the strong lateral heterogeneities), whereas the increasing accuracy of the 3D velocity models of Mt. Etna in recent years allows their use today in routine earthquake locations. Therefore, we selected as reference locations all the events that occurred on Mt. Etna in the last year (2011) and were automatically detected and located by means of the Hypoellipse code. Using this dataset (more than 300 events), we applied a nonlinear probabilistic earthquake location algorithm using the Equal Differential Time (EDT) likelihood function (Font et al., 2004; Lomax, 2005), which is much more robust in the presence of outliers in the data. 
Successively, by using a probabilistic
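
    The STA/LTA trigger described in the abstract can be sketched as follows. This is a minimal illustration, not INGV's operational picker: the window lengths and trigger threshold are assumed values, and the squared signal stands in for the approximate squared envelope function:

```python
import numpy as np

def sta_lta_pick(signal, fs, sta_win=0.5, lta_win=10.0, threshold=3.0):
    """Return the sample index at which the STA/LTA ratio first exceeds
    `threshold`, or None if no trigger occurs.

    The characteristic function is the squared signal, a simple stand-in
    for the squared envelope; window lengths are in seconds.
    """
    cf = np.asarray(signal, dtype=float) ** 2
    n_sta = max(1, int(sta_win * fs))
    n_lta = max(1, int(lta_win * fs))
    # Running sums via a cumulative sum make both averages O(n).
    csum = np.concatenate(([0.0], np.cumsum(cf)))
    sta = (csum[n_sta:] - csum[:-n_sta]) / n_sta  # short-term averages
    lta = (csum[n_lta:] - csum[:-n_lta]) / n_lta  # long-term averages
    # Align the two window streams so both end at the same sample.
    ratio = sta[n_lta - n_sta:] / np.maximum(lta, 1e-12)
    hits = np.flatnonzero(ratio > threshold)
    return int(hits[0]) + n_lta if hits.size else None
```

    In an automatic pipeline, each trigger time would be refined and passed to the location algorithm as a P-wave arrival pick.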

  20. Demonstration of a real-time implementation of the ICVision holographic stereogram display

    Science.gov (United States)

    Kulick, Jeffrey H.; Jones, Michael W.; Nordin, Gregory P.; Lindquist, Robert G.; Kowel, Stephen T.; Thomsen, Axel

    1995-07-01

    There is increasing interest in real-time autostereoscopic 3D displays. Such systems allow 3D objects or scenes to be viewed by one or more observers with correct motion parallax, without the need for glasses or other viewing aids. Potential applications of such systems include mechanical design, training and simulation, medical imaging, virtual reality, and architectural design. One approach to the development of real-time autostereoscopic display systems has been to develop real-time holographic display systems. The approach taken by most of these systems is to compute and display a number of holographic lines at one time, and then use a scanning system to replicate the images throughout the display region. The approach taken in the ICVision system being developed at the University of Alabama in Huntsville is very different. In the ICVision display, a set of discrete viewing regions called virtual viewing slits is created by the display. Each pixel is required to fill every viewing slit with different image data. When the images presented in two virtual viewing slits separated by an interoccular distance form a stereoscopic pair, the observer sees a 3D image. The images are computed so that a different stereo pair is presented each time the viewer moves one eye pupil diameter (approximately mm), thus providing a series of stereo views. Each pixel is subdivided into smaller regions, called partial pixels. Each partial pixel is filled with exactly the diffraction grating required to fill an individual virtual viewing slit; the sum of all the partial pixels in a pixel then fills all the virtual viewing slits. The final version of the ICVision system will form diffraction gratings in a liquid crystal layer on the surface of VLSI chips in real time. Processors embedded in the VLSI chips will compute the display in real time. In the current version of the system, a commercial AMLCD is sandwiched with a diffraction grating array. 
This paper will discuss

  1. Universidades virtuales: ¿aprendizaje real?

    OpenAIRE

    Valenzuela González, Jaime R.

    2015-01-01

    1. Introduction; 2. Educational modalities; 3. Concept and characteristics of virtual universities; 4. A virtual university model; 5. Virtual universities: real learning?; 6. Tensions of virtual universities; 7. References

  2. "Eyes On The Solar System": A Real-time, 3D-Interactive Tool to Teach the Wonder of Planetary Science

    Science.gov (United States)

    Hussey, K. J.

    2011-10-01

    NASA's Jet Propulsion Laboratory is using videogame technology to immerse students, the general public and mission personnel in our solar system and beyond. "Eyes on the Solar System," a cross-platform, real-time, 3D-interactive application that runs inside a Web browser, was released worldwide late last year (solarsystem.nasa.gov/eyes). It gives users an extraordinary view of our solar system by virtually transporting them across space and time to make first-person observations of spacecraft and NASA/ESA missions in action. Key scientific results illustrated with video presentations and supporting imagery are embedded contextually into the solar system. The presentation will include a detailed demonstration of the software along with a description/discussion of how this technology can be adapted for education and public outreach, as well as a preview of coming attractions. This work is being conducted by the Visualization Technology Applications and Development Group at NASA's Jet Propulsion Laboratory, the same team responsible for "Eyes on the Earth 3D," which can be viewed at climate.nasa.gov/Eyes.html.

  3. Feasibility of the integration of CRONOS, a 3-D neutronics code, into real-time simulators

    International Nuclear Information System (INIS)

    Ragusa, J.C.

    2001-01-01

    In its effort to contribute to nuclear power plant safety, CEA proposes the integration of an engineering-grade 3-D neutronics code into a real-time plant analyser. This paper describes the capabilities of the neutronics code CRONOS to achieve a fast running performance. First, we will present current core models in simulators and explain their drawbacks. Secondly, the main features of CRONOS's spatial-kinetics methods will be reviewed. We will then present an optimum core representation with respect to mesh size, choice of finite elements (FE) basis and execution time, for accurate results as well as the multi 1-D thermal-hydraulics (T/H) model developed to take into account 3-D effects in updating the cross-sections. A Main Steam Line Break (MSLB) End-of-Life (EOL) Hot-Zero-Power (HZP) accident will be used as an example, before we conclude with the perspectives of integrating CRONOS's 3-D core model into real-time simulators. (author)

  4. Feasibility of the integration of CRONOS, a 3-D neutronics code, into real-time simulators

    Energy Technology Data Exchange (ETDEWEB)

    Ragusa, J.C. [CEA Saclay, Dept. de Mecanique et de Technologie, 91 - Gif-sur-Yvette (France)

    2001-07-01

    In its effort to contribute to nuclear power plant safety, CEA proposes the integration of an engineering-grade 3-D neutronics code into a real-time plant analyser. This paper describes the capabilities of the neutronics code CRONOS to achieve a fast running performance. First, we will present current core models in simulators and explain their drawbacks. Secondly, the main features of CRONOS's spatial-kinetics methods will be reviewed. We will then present an optimum core representation with respect to mesh size, choice of finite elements (FE) basis and execution time, for accurate results as well as the multi 1-D thermal-hydraulics (T/H) model developed to take into account 3-D effects in updating the cross-sections. A Main Steam Line Break (MSLB) End-of-Life (EOL) Hot-Zero-Power (HZP) accident will be used as an example, before we conclude with the perspectives of integrating CRONOS's 3-D core model into real-time simulators. (author)

  5. Virtual VMASC: A 3D Game Environment

    Science.gov (United States)

    Manepalli, Suchitra; Shen, Yuzhong; Garcia, Hector M.; Lawsure, Kaleen

    2010-01-01

    The advantages of creating interactive 3D simulations that allow viewing, exploring, and interacting with land improvements, such as buildings, in digital form are manifold and range from allowing individuals from anywhere in the world to explore those virtual land improvements online, to training military personnel in dealing with war-time environments, and to making those land improvements available in virtual worlds such as Second Life. While we haven't fully explored the true potential of such simulations, we have identified a requirement within our organization to use simulations like those to replace our front-desk personnel and allow visitors to query, navigate, and communicate virtually with various entities within the building. We implemented the Virtual VMASC 3D simulation of the Virginia Modeling Analysis and Simulation Center (VMASC) office building to not only meet our front-desk requirement but also to evaluate the effort required in designing such a simulation and, thereby, leverage the experience we gained in future projects of this kind. This paper describes the goals we set for our implementation, the software approach taken, the modeling contribution made, and the technologies used such as XNA Game Studio, .NET framework, Autodesk software packages, and, finally, the applicability of our implementation on a variety of architectures including Xbox 360 and PC. This paper also summarizes the result of our evaluation and the lessons learned from our effort.

  6. QuickPALM: 3D real-time photoactivation nanoscopy image processing in ImageJ

    CSIR Research Space (South Africa)

    Henriques, R

    2010-05-01

    Full Text Available QuickPALM in conjunction with the acquisition of control features provides a complete solution for the acquisition, reconstruction and visualization of 3D PALM or STORM images, achieving resolutions of ~40 nm in real time. This software package...

  7. A real-time 3D end-to-end augmented reality system (and its representation transformations)

    Science.gov (United States)

    Tytgat, Donny; Aerts, Maarten; De Busser, Jeroen; Lievens, Sammy; Rondao Alface, Patrice; Macq, Jean-Francois

    2016-09-01

    The new generation of HMDs coming to the market is expected to enable many new applications that allow free viewpoint experiences with captured video objects. Current applications usually rely on 3D content that is manually created or captured in an offline manner. In contrast, this paper focuses on augmented reality applications that use live captured 3D objects while maintaining free viewpoint interaction. We present a system that allows live dynamic 3D objects (e.g. a person who is talking) to be captured in real-time. Real-time performance is achieved by traversing a number of representation formats and exploiting their specific benefits. For instance, depth images are maintained for fast neighborhood retrieval and occlusion determination, while implicit surfaces are used to facilitate multi-source aggregation for both geometry and texture. The result is a 3D reconstruction system that outputs multi-textured triangle meshes at real-time rates. An end-to-end system is presented that captures and reconstructs live 3D data and allows for this data to be used on a networked (AR) device. For allocating the different functional blocks onto the available physical devices, a number of alternatives are proposed considering the available computational power and bandwidth for each of the components. As we will show, the representation format can play an important role in this functional allocation and allows for a flexible system that can support a highly heterogeneous infrastructure.

  8. The Photogrammetric Survey Methodologies Applied to Low Cost 3d Virtual Exploration in Multidisciplinary Field

    Science.gov (United States)

    Palestini, C.; Basso, A.

    2017-11-01

    In recent years, increased international investment in hardware and software technology to support programs that adopt algorithms for photomodeling or for managing data from laser scanners has significantly reduced the cost of operations in support of Augmented Reality and Virtual Reality, designed to generate real-time explorable digital environments integrated with virtual stereoscopic headsets. The research analyzes transversal methodologies related to the acquisition of these technologies in order to examine directly how current VR tools are adopted within a specific workflow, in light of any issues related to the intensive use of such devices, outlining a quick overview of the possible "virtual migration" phenomenon and assuming a possible integration with new high-speed internet systems, capable of triggering a massive cyberspace colonization process that paradoxically would also affect everyday life and, more generally, human spatial perception. The contribution aims at analyzing the application systems used for low-cost 3D photogrammetry by means of a precise pipeline, clarifying how a 3D model is generated, automatically retopologized, textured by color painting or photo-cloning techniques, and optimized for parametric insertion into virtual exploration platforms. The workflow analysis follows some case studies related to photomodeling, digital retopology and "virtual 3D transfer" of some small archaeological artifacts and an architectural compartment corresponding to the pronaos of Aurum, a building designed in the 1940s by Michelucci. All operations are conducted on cheap or free-licensed software that today offers almost the same performance as its paid counterparts, progressively improving in data processing speed and management.

  9. 2D array transducers for real-time 3D ultrasound guidance of interventional devices

    Science.gov (United States)

    Light, Edward D.; Smith, Stephen W.

    2009-02-01

    We describe catheter ring arrays for real-time 3D ultrasound guidance of devices such as vascular grafts, heart valves and vena cava filters. We have constructed several prototypes operating at 5 MHz and consisting of 54 elements using the W.L. Gore & Associates, Inc. micro-miniature ribbon cables. We have recently constructed a new transducer using a braided wiring technology from Precision Interconnect. This transducer consists of 54 elements at 4.8 MHz with pitch of 0.20 mm and typical -6 dB bandwidth of 22%. In all cases, the transducer and wiring assembly were integrated with an 11 French catheter of a Cook Medical deployment device for vena cava filters. Preliminary in vivo and in vitro testing is ongoing including simultaneous 3D ultrasound and x-ray fluoroscopy.
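
    The "-6 dB bandwidth of 22%" quoted above is a fractional bandwidth: the width of the frequency band over which the transducer's response stays within 6 dB of its peak, divided by the center frequency. A minimal sketch; the band-edge values below are hypothetical, chosen only to be consistent with the reported 4.8 MHz center frequency:

```python
def fractional_bandwidth(f_low, f_high):
    """Center frequency and -6 dB fractional bandwidth (in %) from the
    band edges, taking the center as the arithmetic mean of the edges."""
    f_center = 0.5 * (f_low + f_high)
    return f_center, 100.0 * (f_high - f_low) / f_center

# Hypothetical -6 dB band edges in MHz (illustrative, not measured values):
fc, bw = fractional_bandwidth(4.27, 5.33)  # fc = 4.8 MHz, bw close to 22%
```

    A roughly 1 MHz wide passband around 4.8 MHz thus accounts for the quoted figure.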

  10. A meshless EFG-based algorithm for 3D deformable modeling of soft tissue in real-time.

    Science.gov (United States)

    Abdi, Elahe; Farahmand, Farzam; Durali, Mohammad

    2012-01-01

    The meshless element-free Galerkin method was generalized and an algorithm was developed for 3D dynamic modeling of deformable bodies in real time. The efficacy of the algorithm was investigated in a 3D linear viscoelastic model of the human spleen subjected to a time-varying compressive force exerted by a surgical grasper. The model remained stable in spite of the considerably large deformations that occurred. There was good agreement between the results and those of an equivalent finite element model. The computational cost, however, was much lower, enabling the proposed algorithm to be used effectively in real-time applications.

  11. Real-time quasi-3D tomographic reconstruction

    Science.gov (United States)

    Buurlage, Jan-Willem; Kohr, Holger; Palenstijn, Willem Jan; Joost Batenburg, K.

    2018-06-01

    Developments in acquisition technology and a growing need for time-resolved experiments pose great computational challenges in tomography. In addition, access to reconstructions in real time is a highly demanded feature but has so far been out of reach. We show that by exploiting the mathematical properties of filtered backprojection-type methods, access to real-time reconstructions of arbitrarily oriented slices becomes feasible. Furthermore, we present software for the visualization and on-demand reconstruction of slices. A user can interactively shift and rotate slices in a GUI, while the software updates the slice in real time. For certain use cases, the possibility of studying arbitrarily oriented slices in real time directly from the measured data provides sufficient visual and quantitative insight. Two such applications are discussed in this article.

  12. Preoperative Planning Using 3D Reconstructions and Virtual Endoscopy for Location of the Frontal Sinus

    Directory of Open Access Journals (Sweden)

    Abreu, João Paulo Saraiva

    2011-01-01

    Full Text Available Introduction: Computed tomography (CT) generated tridimensional (3D) reconstructions allow the observation of cavities and anatomic structures of our body in detail. In our specialty there have been attempts to carry out virtual endoscopies and laryngoscopies. However, such application has been practically abandoned due to its complexity and the need for computers with high graphic processing power. Objective: To demonstrate the production of 3D reconstructions from CTs of patients on personal computers, with a free specific program, and to compare them to the actual endoscopic images from surgery. Method: Prospective study in which the CT files of 10 patients were reconstructed with the program Intage Realia, version 2009.0.0.702 (KGT Inc., Japan). The reconstructions were carried out before the surgeries and a virtual endoscopy was made to assess the recess and frontal sinus region. After this study, the surgery was performed and digitally recorded. The actual endoscopic images of the recess and frontal sinus region were compared to the virtual images. Results: The 3D reconstruction and virtual endoscopy were made in 10 patients submitted to surgery. The virtual images closely resembled the actual surgical images. Conclusion: With relatively simple tools and a personal computer, we demonstrated the possibility of generating 3D reconstructions and virtual endoscopies. Preoperative knowledge of the location of the frontal sinus's natural drainage path may be beneficial during surgery. However, more studies must be developed to evaluate the real role of such 3D reconstructions and virtual endoscopies.

  13. 3D display system using monocular multiview displays

    Science.gov (United States)

    Sakamoto, Kunio; Saruta, Kazuki; Takeda, Kazutoki

    2002-05-01

    A 3D head-mounted display (HMD) system is useful for constructing a virtual space. The authors have researched virtual-reality systems connected with computer networks for real-time remote control and have developed a low-priced real-time 3D display for building these systems. We developed a 3D HMD system using monocular multi-view displays. The 3D display technique of this monocular multi-view display is based on the concept of the super multi-view proposed by Kajiki at TAO (Telecommunications Advancement Organization of Japan) in 1996. Our 3D HMD has two monocular multi-view displays (used as a visual display unit) in order to display a picture to the left eye and the right eye. The left and right images form a pair of stereoscopic images for the left and right eyes, so that stereoscopic 3D images are observed.

  14. 3D Virtual Dig: a 3D Application for Teaching Fieldwork in Archaeology

    Directory of Open Access Journals (Sweden)

    Paola Di Giuseppantonio Di Franco

    2012-12-01

    Full Text Available Archaeology is a material, embodied discipline; communicating this experience is critical to student success. In the context of lower-division archaeology courses, the present study examines the efficacy of 3D virtual and 2D archaeological representations of digs. This presentation aims to show a 3D application created to teach the archaeological excavation process to freshman students. An archaeological environment was virtually re-created in 3D and inserted into a virtual reality software application that allows users to work with the reconstructed excavation area. The software was tested in class for teaching the basics of archaeological fieldwork. The application interface is user-friendly and especially easy for 21st-century students. The study employed a pre-survey, post-test, and post-survey design, used to understand the students' previous familiarity with archaeology and to test their awareness after the use of the application. Their level of knowledge was then compared with that of students who had accessed written material only. This case study demonstrates how a digital approach to laboratory work can positively affect student learning. Increased ability to complete ill-defined problems (characteristic of the higher-order thinking in the field) can, in fact, be demonstrated. 3D virtual reconstruction serves, then, as an important bridge from traditional coursework to fieldwork.

  15. 3D real-time monitoring system for LHD plasma heating experiment

    International Nuclear Information System (INIS)

    Emoto, M.; Narlo, J.; Kaneko, O.; Komori, A.; Iima, M.; Yamaguchi, S.; Sudo, S.

    2001-01-01

    The JAVA-based real-time monitoring system has been in use at the National Institute for Fusion Science, Japan, since the end of March 1998 to maintain stable operations. This system utilizes JAVA technology to achieve platform independence. The main programs are written as JAVA applets and provide human-friendly interfaces. In order to make the system easier to comprehend at a glance, a 3D feature was added. Since most of the system is written mainly in the JAVA language, we adopted JAVA3D technology, which was easy to incorporate into the currently running systems. With this 3D feature, the operator can more easily find the malfunctioning parts of complex instruments, such as the LHD vacuum vessel. This feature is also helpful for recognizing physical phenomena. In this paper, we present an example in which the temperature increase of a vacuum vessel after NBI (neutral beam injection) is visualized

  16. Inter-algorithm lesion volumetry comparison of real and 3D simulated lung lesions in CT

    Science.gov (United States)

    Robins, Marthony; Solomon, Justin; Hoye, Jocelyn; Smith, Taylor; Ebner, Lukas; Samei, Ehsan

    2017-03-01

    The purpose of this study was to establish volumetric exchangeability between real and computational lung lesions in CT. We compared the overall relative volume estimation performance of segmentation tools when used to measure real lesions in actual patient CT images and computational lesions virtually inserted into the same patient images (i.e., hybrid datasets). Pathologically confirmed malignancies from 30 thoracic patient cases from the Reference Image Database to Evaluate Therapy Response (RIDER) were modeled and used as the basis for the comparison. Lesions included isolated nodules as well as those attached to the pleura or other lung structures. Patient images were acquired using a 16-detector-row or 64-detector-row CT scanner (LightSpeed 16 or VCT; GE Healthcare). Scans were acquired using standard chest protocols during a single breath-hold. Virtual 3D lesion models based on the real lesions were developed in the Duke Lesion Tool (Duke University) and inserted using a validated image-domain insertion program. Nodule volumes were estimated using multiple commercial segmentation tools (iNtuition, TeraRecon, Inc.; syngo.via, Siemens Healthcare; IntelliSpace, Philips Healthcare). Consensus-based volume comparison showed consistent trends in volume measurement between real and virtual lesions across all software. The average percent bias (+/- standard error) was -9.2+/-3.2% for real lesions versus -6.7+/-1.2% for virtual lesions with tool A, 3.9+/-2.5% and 5.0+/-0.9% for tool B, and 5.3+/-2.3% and 1.8+/-0.8% for tool C, respectively. Virtual lesion volumes were statistically similar to those of real lesions (p > .05 in most cases). Results suggest that the hybrid datasets had inter-algorithm variability similar to that of the real datasets.
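
    The "average percent bias (+/- standard error)" figures above summarize per-lesion volume errors. A minimal sketch of how such a summary is computed from per-lesion estimated and reference volumes (illustrative helper functions, not the study's code):

```python
import math

def percent_bias(estimated, reference):
    """Per-lesion percent bias: 100 * (estimated - reference) / reference."""
    return [100.0 * (e - r) / r for e, r in zip(estimated, reference)]

def mean_and_se(values):
    """Mean and standard error of the mean (sample std / sqrt(n))."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return mean, math.sqrt(var / n)
```

    Applying this per segmentation tool, once to real-lesion measurements and once to virtual-lesion measurements, yields pairs of bias estimates like those reported for tools A, B, and C.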

  17. Design and implementation of a 3D ocean virtual reality and visualization engine

    Science.gov (United States)

    Chen, Ge; Li, Bo; Tian, Fenglin; Ji, Pengbo; Li, Wenqing

    2012-12-01

    In this study, a 3D virtual reality and visualization engine for rendering the ocean, named VV-Ocean, is designed for marine applications. The design goals of VV-Ocean are high-fidelity simulation of the ocean environment, visualization of massive and multidimensional marine data, and imitation of marine life. VV-Ocean is composed of five modules: a memory management module, a resources management module, a scene management module, a rendering process management module and an interaction management module. There are three core functions in VV-Ocean: reconstructing vivid virtual ocean scenes, visualizing real data dynamically in real time, and imitating and simulating marine life intuitively. Based on VV-Ocean, we establish a sea-land integration platform which can reproduce the drifting and diffusion processes of oil spilling from the sea bottom to the surface. Environmental factors such as ocean currents and the wind field have been considered in this simulation. On this platform the oil spilling process can be abstracted as the movement of abundant oil particles. The results show that the oil particles blend well with the water and that the platform meets the requirements for real-time and interactive rendering. VV-Ocean can be widely used in ocean applications such as demonstrating marine operations, facilitating maritime communications, developing ocean games, reducing marine hazards, forecasting the weather over oceans, serving marine tourism, and so on. Finally, further technological improvements of VV-Ocean are discussed.

  18. 3D super-virtual refraction interferometry

    KAUST Repository

    Lu, Kai; AlTheyab, Abdullah; Schuster, Gerard T.

    2014-01-01

    Super-virtual refraction interferometry enhances the signal-to-noise ratio of far-offset refractions. However, when applied to 3D cases, traditional 2D SVI suffers because the stationary positions of the source-receiver pairs might be any place

  19. Real-Time 3D Tracking and Reconstruction on Mobile Phones.

    Science.gov (United States)

    Prisacariu, Victor Adrian; Kähler, Olaf; Murray, David W; Reid, Ian D

    2015-05-01

    We present a novel framework for jointly tracking a camera in 3D and reconstructing the 3D model of an observed object. Due to the region-based approach, our formulation can handle untextured objects, partial occlusions, motion blur, dynamic backgrounds and imperfect lighting. Our formulation also allows for a very efficient implementation which achieves real-time performance on a mobile phone, by running the pose estimation and the shape optimisation in parallel. We use a level-set-based pose estimation but completely avoid the typically required explicit computation of a global distance. This leads to tracking rates of more than 100 Hz on a desktop PC and 30 Hz on a mobile phone. Further, we incorporate additional orientation information from the phone's inertial sensor, which helps us resolve the tracking ambiguities inherent to region-based formulations. The reconstruction step first probabilistically integrates 2D image statistics from selected keyframes into a 3D volume, and then imposes coherency and compactness using a total variation regularisation term. The global optimum of the overall energy function is found using a continuous max-flow algorithm and we show that, similar to tracking, the integration of per-voxel posteriors instead of likelihoods improves the precision and accuracy of the reconstruction.

  20. Simulation of mirror surfaces for virtual estimation of visibility lines for 3D motor vehicle collision reconstruction.

    Science.gov (United States)

    Leipner, Anja; Dobler, Erika; Braun, Marcel; Sieberth, Till; Ebert, Lars

    2017-10-01

    3D reconstructions of motor vehicle collisions are used to identify the causes of these events and potential violations of traffic regulations. Thus far, the reconstruction of mirrors has been a problem, since it is often based on approximations or inaccurate data. Our aim with this paper was to confirm that structured light scans of a mirror improve the accuracy of simulating its field of view. We analyzed the performance of virtual mirror surfaces based on structured light scans, using real mirror surfaces and their reflections as references. We used an ATOS GOM III scanner to scan the mirrors and processed the 3D data using Geomagic Wrap. For scene reconstruction and to generate virtual images, we used 3ds Max. We compared the simulated virtual images and photographs of real scenes using Adobe Photoshop. Our results showed that we achieved clear and even mirror results and that the mirrors behaved as expected. The greatest measured deviation between an original photo and the corresponding virtual image was 20 pixels in the transverse direction for an image width of 4256 pixels. We discuss the influences of data processing and alignment of the 3D models on the results. The study was limited to a distance of 1.6 m, and the method was not able to simulate an interior mirror. In conclusion, structured light scans of mirror surfaces can be used to simulate virtual mirror surfaces for 3D motor vehicle collision reconstruction. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Using a 3D virtual supermarket to measure food purchase behavior: a validation study.

    Science.gov (United States)

    Waterlander, Wilma Elzeline; Jiang, Yannan; Steenhuis, Ingrid Hendrika Margaretha; Ni Mhurchu, Cliona

    2015-04-28

    There is increasing recognition that supermarkets are an important environment for health-promoting interventions such as fiscal food policies or front-of-pack nutrition labeling. However, due to the complexities of undertaking such research in the real world, well-designed randomized controlled trials on these kinds of interventions are lacking. The Virtual Supermarket is a 3-dimensional computerized research environment designed to enable experimental studies in a supermarket setting without the complexity or costs normally associated with undertaking such research. The primary objective was to validate the Virtual Supermarket by comparing virtual and real-life food purchasing behavior. A secondary objective was to obtain participant feedback on perceived sense of "presence" (the subjective experience of being in one place or environment even if physically located in another) in the Virtual Supermarket. Eligible main household shoppers (New Zealand adults aged ≥18 years) were asked to conduct 3 shopping occasions in the Virtual Supermarket over 3 consecutive weeks, complete the validated Presence Questionnaire Items Stems, and collect their real supermarket grocery till receipts for that same period. Proportional expenditure (NZ$) and the proportion of products purchased over 18 major food groups were compared between the virtual and real supermarkets. Data were analyzed using repeated measures mixed models. A total of 123 participants consented to take part in the study. In total, 69.9% (86/123) completed 1 shop in the Virtual Supermarket, 64.2% (79/123) completed 2 shops, 60.2% (74/123) completed 3 shops, and 48.8% (60/123) returned their real supermarket till receipts. The 4 food groups with the highest relative expenditures were the same for the virtual and real supermarkets: fresh fruit and vegetables (virtual estimate: 14.3%; real: 17.4%), bread and bakery (virtual: 10.0%; real: 8.2%), dairy (virtual: 19.1%; real: 12.6%), and meat and fish (virtual: 16

  2. Position tracking of moving liver lesion based on real-time registration between 2D ultrasound and 3D preoperative images

    International Nuclear Information System (INIS)

    Weon, Chijun; Hyun Nam, Woo; Lee, Duhgoon; Ra, Jong Beom; Lee, Jae Young

    2015-01-01

    Purpose: Registration between 2D ultrasound (US) and 3D preoperative magnetic resonance (MR) (or computed tomography, CT) images has been studied recently for US-guided intervention. However, the existing techniques have some limits, either in the registration speed or the performance. The purpose of this work is to develop a real-time and fully automatic registration system between two intermodal images of the liver, and subsequently an indirect lesion positioning/tracking algorithm based on the registration result, for image-guided interventions. Methods: The proposed position tracking system consists of three stages. In the preoperative stage, the authors acquire several 3D preoperative MR (or CT) images at different respiratory phases. Based on the transformations obtained from nonrigid registration of the acquired 3D images, they then generate a 4D preoperative image along the respiratory phase. In the intraoperative preparatory stage, they properly attach a 3D US transducer to the patient’s body and fix its pose using a holding mechanism. They then acquire a couple of respiratory-controlled 3D US images. Via the rigid registration of these US images to the 3D preoperative images in the 4D image, the pose information of the fixed-pose 3D US transducer is determined with respect to the preoperative image coordinates. As feature(s) to use for the rigid registration, they may choose either internal liver vessels or the inferior vena cava. Since the latter is especially useful in patients with a diffuse liver disease, the authors newly propose using it. In the intraoperative real-time stage, they acquire 2D US images in real-time from the fixed-pose transducer. For each US image, they select candidates for its corresponding 2D preoperative slice from the 4D preoperative MR (or CT) image, based on the predetermined pose information of the transducer. The correct corresponding image is then found among those candidates via real-time 2D registration based on a

  3. Semi- and virtual 3D dosimetry in clinical practice

    DEFF Research Database (Denmark)

    Korreman, S. S.

    2013-01-01

    In this review, 3D dosimetry is divided into three categories: "true" 3D, semi-3D and virtual 3D. Virtual 3D involves the use of measurement arrays either before or after beam entry in the patient/phantom, whereas semi-3D involves the use of measurement arrays in phantoms mimicking the patient. True 3D involves the measurement of dose in a volume mimicking the patient. There are different advantages and limitations of all three categories and of systems within these categories. The choice of measurement method in a given case depends on the aim of the measurement, and examples are given of verification measurements with various aims.

  4. Creating Machinima (3D) and Real Life Videos in an ESP Classroom

    Science.gov (United States)

    Ochoa Alpala, Carol Anne; Ortíz García, William Ricardo

    2018-01-01

    This research paper reports on the development of oral presentation skills in a 3D virtual world called "Moviestorm" machinima, in contrast with real-life videos. In this way, the implementation of both types of videos sought to promote the improvement of oral communication skills, specifically oral presentations in a foreign language,…

  5. 3D Virtual Reality Check: Learner Engagement and Constructivist Theory

    Science.gov (United States)

    Bair, Richard A.

    2013-01-01

    The inclusion of three-dimensional (3D) virtual tools has created a need to communicate the engagement of 3D tools and specify learning gains that educators and the institutions, which are funding 3D tools, can expect. A review of literature demonstrates that specific models and theories for 3D Virtual Reality (VR) learning do not exist "per…

  6. A comparative analysis of 2D and 3D tasks for virtual reality therapies based on robotic-assisted neurorehabilitation for post-stroke patients

    Directory of Open Access Journals (Sweden)

    Luis Daniel Lledó

    2016-08-01

    Full Text Available Post-stroke neurorehabilitation based on virtual therapies is performed by completing repetitive exercises shown on visual electronic devices, whose content represents imaginary or daily life tasks. Currently, there are two ways of visualizing these tasks. 3D virtual environments are used to obtain a three-dimensional space that represents the real world with a high level of detail, whose realism is determined by the resolution and fidelity of the objects of the task. In contrast, 2D virtual environments are used to represent the tasks with a low degree of realism using techniques of bidimensional graphics. However, the type of visualization can influence the quality of perception of the task, affecting the patient's sensorimotor performance. The purpose of this paper was to evaluate whether there were differences in patterns of kinematic movements when post-stroke patients performed a reaching task viewing a virtual therapeutic game with two different types of visualization of the virtual environment: 2D and 3D. Nine post-stroke patients participated in the study, receiving a virtual therapy assisted by the PUPArm rehabilitation robot. Horizontal movements of the upper limb were performed to complete the aim of the tasks, which consisted of reaching peripheral or perspective targets depending on the virtual environment shown. Various parameters, such as the maximum speed, reaction time, path length or initial movement, were analyzed from the data acquired objectively by the robotic device to evaluate the influence of the task visualization. At the end of the study, a usability survey was provided to each patient to analyze his/her satisfaction level. For all patients, the movement trajectories were enhanced when they completed the therapy. This fact suggests that the patients' motor recovery increased. Despite the similarity in the majority of the kinematic parameters, differences in reaction time and path length were higher using the 3D task. Regarding

  7. The New Realm of 3-D Vision

    Science.gov (United States)

    2002-01-01

    Dimension Technologies Inc. developed a line of 2-D/3-D Liquid Crystal Display (LCD) screens, including a 15-inch model priced at consumer levels. DTI's family of flat panel LCD displays, called the Virtual Window(TM), provides real-time 3-D images without the use of glasses, head trackers, helmets, or other viewing aids. Most of the company's initial 3-D display research was funded through NASA's Small Business Innovation Research (SBIR) program. The images on DTI's displays appear to leap off the screen and hang in space. The display accepts input from computers or stereo video sources, and can be switched from 3-D to full-resolution 2-D viewing with the push of a button. The Virtual Window displays have applications in data visualization, medicine, architecture, business, real estate, entertainment, and other research, design, military, and consumer applications. Displays are currently used for computer games, protein analysis, and surgical imaging. The technology greatly benefits the medical field, as surgical simulators are helping to increase the skills of surgical residents. Virtual Window(TM) is a trademark of Dimension Technologies Inc.

  8. 2-D tiles declustering method based on virtual devices

    Science.gov (United States)

    Li, Zhongmin; Gao, Lu

    2009-10-01

    Generally, 2-D spatial data are divided into a series of tiles according to a plane grid. To satisfy the demands of visualization, the tiles in the query window containing the view point should be displayed quickly on the screen. To address the performance differences among real storage devices, we propose a 2-D tile declustering method based on virtual devices. Firstly, we construct a group of virtual devices which have the same storage performance and unlimited capacity, then distribute the tiles onto M virtual devices according to the query window of the 2-D tiles. Secondly, we evenly map the tiles on the M virtual devices into M equidistant intervals in [0, 1) using a pseudo-random number generator. Finally, we divide [0, 1) into M intervals according to the tile distribution percentage of every real storage device, and place the tiles in each interval on the corresponding real storage device. We have designed and realized a prototype, GlobeSIGht, and give some related test results. The results show that the average response time for each tile in the query window containing the view point is shorter using the 2-D tile declustering method based on virtual devices than using other methods.
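    The two-step mapping described above can be sketched as follows; the function name, hashing scheme, and performance-weight representation are illustrative assumptions, not the paper's exact method:

```python
import random
from bisect import bisect_right
from itertools import accumulate

def assign_tiles(tiles, device_weights, seed=42):
    """Assign each 2-D tile (an (x, y) grid coordinate) to a real device.

    Step 1: map each tile deterministically to a pseudo-random value in [0, 1).
    Step 2: cut [0, 1) into intervals whose widths are proportional to each
    device's performance weight, and place the tile in the matching interval.
    """
    total = float(sum(device_weights))
    bounds = list(accumulate(w / total for w in device_weights))
    assignment = {}
    for tile in tiles:
        u = random.Random(hash(tile) ^ seed).random()  # deterministic per tile
        assignment[tile] = bisect_right(bounds, u)     # interval -> device index
    return assignment
```

Because the per-tile value is deterministic, the same tile always lands on the same device, while neighbouring tiles scatter across devices roughly in proportion to the weights.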

  9. Three-dimensional (3D) real-time conformal brachytherapy - a novel solution for prostate cancer treatment Part I. Rationale and method

    International Nuclear Information System (INIS)

    Fijalkowski, M.; Bialas, B.; Maciejewski, B.; Bystrzycka, J.; Slosarek, K.

    2005-01-01

    Recently, a system for conformal real-time high-dose-rate brachytherapy has been developed, dedicated in general to the treatment of prostate cancer. The aim of this paper is to present the 3D-conformal real-time brachytherapy technique introduced into clinical practice at the Institute of Oncology in Gliwice. The equipment and technique of 3D-conformal real-time brachytherapy (3D-CBRT) are presented in detail and compared with conventional high-dose-rate brachytherapy. Step-by-step procedures of treatment planning are described, including our own modifications. The 3D-CBRT offers the following advantages: (1) on-line continuous visualization of the prostate and acquisition of a series of US images during the entire procedure of planning and treatment; (2) high precision in defining and contouring the target volume and the healthy organs at risk (urethra, rectum, bladder) based on 3D transrectal continuous ultrasound images; (3) interactive on-line dose optimization with real-time corrections of the dose-volume histograms (DVHs) until an optimal dose distribution is achieved; (4) the possibility to overcome internal prostate motion and set-up inaccuracies by stable positioning of the prostate with needles fixed to the template; (5) significant shortening of the overall treatment time; (6) cost reduction - the treatment can be provided as an outpatient procedure. The 3D real-time CBRT can be advertised as an ideal conformal boost dose technique integrated or interdigitated with pelvic conformal external beam radiotherapy, or as a monotherapy for prostate cancer. (author)

  10. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    OpenAIRE

    Richard Chiou; Yongjin (james) Kwon; Tzu-Liang (bill) Tseng; Robin Kizirian; Yueh-Ting Yang

    2010-01-01

    This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3- Dimensional scheme of viewing the robotic laboratory has been introduced in addition to the remote c...

  11. Real-Time Motion Tracking for Mobile Augmented/Virtual Reality Using Adaptive Visual-Inertial Fusion.

    Science.gov (United States)

    Fang, Wei; Zheng, Lianyu; Deng, Huanjun; Zhang, Hongbo

    2017-05-05

    In mobile augmented/virtual reality (AR/VR), real-time 6-Degree of Freedom (DoF) motion tracking is essential for the registration between virtual scenes and the real world. However, due to the limited computational capacity of mobile terminals today, the latency between consecutive arriving poses would damage the user experience in mobile AR/VR. Thus, a visual-inertial based real-time motion tracking for mobile AR/VR is proposed in this paper. By means of high frequency and passive outputs from the inertial sensor, the real-time performance of arriving poses for mobile AR/VR is achieved. In addition, to alleviate the jitter phenomenon during the visual-inertial fusion, an adaptive filter framework is established to cope with different motion situations automatically, enabling the real-time 6-DoF motion tracking by balancing the jitter and latency. Besides, the robustness of the traditional visual-only based motion tracking is enhanced, giving rise to a better mobile AR/VR performance when motion blur is encountered. Finally, experiments are carried out to demonstrate the proposed method, and the results show that this work is capable of providing a smooth and robust 6-DoF motion tracking for mobile AR/VR in real-time.
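    The jitter/latency balance described above can be illustrated with a one-axis complementary-filter sketch; the adaptation rule and gain value are illustrative assumptions, not the paper's actual filter:

```python
def fuse_pose(visual, inertial, prev, base_gain=0.8):
    """One axis of an adaptive visual-inertial complementary filter (sketch).

    When the inertially-predicted motion is large, trust the fresh inertial
    prediction more (low latency); when motion is small, trust the smoothed
    visual estimate more (low jitter).
    """
    motion = abs(inertial - prev)
    alpha = base_gain / (1.0 + motion)  # more smoothing when nearly still
    return alpha * visual + (1.0 - alpha) * inertial
```

The single scalar `alpha` stands in for the filter framework's automatic switching between motion situations: it decays toward zero as motion grows, handing control to the low-latency inertial estimate.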

  12. Real-time global illumination on mobile device

    Science.gov (United States)

    Ahn, Minsu; Ha, Inwoo; Lee, Hyong-Euk; Kim, James D. K.

    2014-02-01

    We propose a novel method for real-time global illumination on mobile devices. Our approach is based on instant radiosity, which uses a sequence of virtual point lights in order to represent the effect of indirect illumination. Our rendering process consists of three stages. With the primary light, the first stage generates a local illumination with the shadow map on the GPU. The second stage of the global illumination uses the reflective shadow map on the GPU and generates the sequence of virtual point lights on the CPU. Finally, we use the splatting method of Dachsbacher et al. [1] and add the indirect illumination to the local illumination on the GPU. With the limited computing resources in mobile devices, only a small number of virtual point lights are allowed for real-time rendering. Our approach uses a multi-resolution sampling method with 3D geometry and attributes simultaneously to reduce the total number of virtual point lights. We also use a hybrid strategy, which collaboratively combines the CPUs and GPUs available in a mobile SoC due to the limited computing resources in mobile devices. Experimental results demonstrate the global illumination performance of the proposed method.
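    The core instant-radiosity idea, accumulating indirect light from a set of virtual point lights, can be sketched minimally as follows (inverse-square falloff only; no visibility tests or cosine terms, and not the paper's GPU splatting implementation):

```python
def indirect_illumination(point, vpls):
    """Sum virtual-point-light contributions at a shading point.

    `vpls` is a list of (position, intensity) pairs; each contributes
    intensity / distance^2, the inverse-square falloff of a point light.
    The clamp avoids the singularity when a VPL sits on the shading point.
    """
    total = 0.0
    for pos, intensity in vpls:
        d2 = sum((a - b) ** 2 for a, b in zip(point, pos))
        total += intensity / max(d2, 1e-4)
    return total
```

Reducing the number of VPLs, as the multi-resolution sampling does, directly shortens this inner loop, which is why it is the key knob for real-time performance on mobile hardware.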

  13. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    Directory of Open Access Journals (Sweden)

    Richard Chiou

    2010-06-01

    Full Text Available This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet field, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3- Dimensional scheme of viewing the robotic laboratory has been introduced in addition to the remote controlling of the robots. The uniqueness of the project lies in making this process Internet-based, and remote robot operated and visualized in 3D. This 3D system approach provides the students with a more realistic feel of the 3D robotic laboratory even though they are working remotely. As a result, the 3D visualization technology has been tested as part of a laboratory in the MET 205 Robotics and Mechatronics class and has received positive feedback by most of the students. This type of research has introduced a new level of realism and visual communications to online laboratory learning in a remote classroom.

  14. A Spatial Reference Grid for Real-Time Autonomous Underwater Modeling using 3-D Sonar

    Energy Technology Data Exchange (ETDEWEB)

    Auran, P.G.

    1996-12-31

    The offshore industry has recognized the need for intelligent underwater robotic vehicles. This doctoral thesis deals with autonomous underwater vehicles (AUVs) and concentrates on a data representation for real-time image formation and analysis. Its main objective is to develop a 3-D image representation suitable for autonomous perception objectives underwater, assuming active sonar as the main sensor for perception. The main contributions are: (1) A dynamical image representation for 3-D range data, (2) A basic electronic circuit and software system for 3-D sonar sampling and amplitude thresholding, (3) A model for target reliability, (4) An efficient connected components algorithm for 3-D segmentation, (5) A method for extracting general 3-D geometrical representations from segmented echo clusters, (6) Experimental results of planar and curved target modeling. 142 refs., 120 figs., 10 tabs.
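    Contribution (4), 3-D segmentation by connected components, can be sketched with a simple breadth-first search over 6-connected voxels (a generic illustration of the technique, not the thesis' efficient algorithm):

```python
from collections import deque

def connected_components_3d(occupied):
    """Label 6-connected components in a set of occupied (x, y, z) voxels.

    Returns a dict mapping each voxel to its component label, plus the
    number of components (e.g. distinct echo clusters in a sonar volume).
    """
    labels, next_label = {}, 0
    for seed in occupied:
        if seed in labels:
            continue
        labels[seed] = next_label
        queue = deque([seed])
        while queue:
            x, y, z = queue.popleft()
            for dx, dy, dz in ((1, 0, 0), (-1, 0, 0), (0, 1, 0),
                               (0, -1, 0), (0, 0, 1), (0, 0, -1)):
                nb = (x + dx, y + dy, z + dz)
                if nb in occupied and nb not in labels:
                    labels[nb] = next_label
                    queue.append(nb)
        next_label += 1
    return labels, next_label
```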

  15. Intelligent Open Data 3D Maps in a Collaborative Virtual World

    Directory of Open Access Journals (Sweden)

    Juho-Pekka Virtanen

    2015-05-01

    Full Text Available Three-dimensional (3D) maps have many potential applications, such as navigation and urban planning. In this article, we present the use of the 3D virtual world platform Meshmoon to create intelligent open data 3D maps. A processing method is developed to enable the generation of 3D virtual environments from the open data of the National Land Survey of Finland. The article combines the elements needed in contemporary smart city concepts, such as the connection between attribute information and 3D objects, and the creation of collaborative virtual worlds from open data. By using our 3D virtual world platform, it is possible to create up-to-date, collaborative 3D virtual models, which are automatically updated for all viewers. In the scenes, all users are able to interact with the model, and with each other. With the developed processing methods, the creation of virtual world scenes was partially automated for collaboration activities.

  16. Exploring the educational potential of 3D virtual environments

    Directory of Open Access Journals (Sweden)

    Francesc Marc ESTEVE MON

    2013-12-01

    Full Text Available 3D virtual environments are advanced technology systems with potential in the teaching and learning process. In recent years, different institutions have promoted the acquisition of 21st-century skills: competences such as initiative, teamwork, creativity, flexibility and digital literacy. Multi-user virtual environments, sometimes called virtual worlds or 3D simulators, are immersive, interactive, customizable, accessible and programmable systems. These kinds of environments allow the design of complex educational activities to develop these key competences. For this purpose it is necessary to set an appropriate teaching strategy to put this knowledge and these skills into action, and to design suitable mechanisms for registration and systematization. This paper analyzes the potential of these environments and presents two experiences in 3D virtual environments: (1) to develop teamwork and self-management skills, and (2) to assess digital literacy in preservice teachers.

  17. 3D virtual world remote laboratory to assist in designing advanced user defined DAQ systems based on FlexRIO and EPICS

    Energy Technology Data Exchange (ETDEWEB)

    Carpeño, A., E-mail: antonio.cruiz@upm.es [Universidad Politécnica de Madrid UPM, Madrid (Spain); Contreras, D.; López, S.; Ruiz, M.; Sanz, D.; Arcas, G. de; Esquembri, S. [Universidad Politécnica de Madrid UPM, Madrid (Spain); Vega, J.; Castro, R. [Laboratorio Nacional de Fusión CIEMAT, Madrid (Spain)

    2016-11-15

    Highlights: • Assist in the design of FPGA-based data acquisition systems using EPICS and FlexRIO. • Virtual Reality technologies are highly effective at creating rich training scenarios. • Virtual actions simulate the behavior of a real system to enhance the training process. • Virtual actions can make real changes remotely in the physical ITER’s Fast Controller. - Abstract: iRIO-3DLab is a platform devised to assist developers in the design and implementation of intelligent and reconfigurable FPGA-based data acquisition systems using EPICS and FlexRIO technologies. Although these architectures are very powerful in defining the behavior of DAQ systems, this advantage comes at the price of greater difficulty in understanding how the system works, and how it should be configured and built according to the hardware available and the processing demanded by the requirements of the diagnostics. In this regard, Virtual Reality technologies are highly effective at creating rich training scenarios due to their ability to provide immersive training experiences and collaborative environments. The designed remote laboratory is based on a 3D virtual world developed in Opensim, which is accessible through a standard free 3D viewer. Using a client-server architecture, the virtual world connects with a service running in a Linux-based computer executing EPICS. Through their avatars, users interact with virtual replicas of this equipment as they would in real-life situations. Some actions can be used to simulate the behavior of a real system to enhance the training process, while others can be used to make real changes remotely in the physical system.

  18. 3D virtual world remote laboratory to assist in designing advanced user defined DAQ systems based on FlexRIO and EPICS

    International Nuclear Information System (INIS)

    Carpeño, A.; Contreras, D.; López, S.; Ruiz, M.; Sanz, D.; Arcas, G. de; Esquembri, S.; Vega, J.; Castro, R.

    2016-01-01

    Highlights: • Assist in the design of FPGA-based data acquisition systems using EPICS and FlexRIO. • Virtual Reality technologies are highly effective at creating rich training scenarios. • Virtual actions simulate the behavior of a real system to enhance the training process. • Virtual actions can make real changes remotely in the physical ITER’s Fast Controller. - Abstract: iRIO-3DLab is a platform devised to assist developers in the design and implementation of intelligent and reconfigurable FPGA-based data acquisition systems using EPICS and FlexRIO technologies. Although these architectures are very powerful in defining the behavior of DAQ systems, this advantage comes at the price of greater difficulty in understanding how the system works, and how it should be configured and built according to the hardware available and the processing demanded by the requirements of the diagnostics. In this regard, Virtual Reality technologies are highly effective at creating rich training scenarios due to their ability to provide immersive training experiences and collaborative environments. The designed remote laboratory is based on a 3D virtual world developed in Opensim, which is accessible through a standard free 3D viewer. Using a client-server architecture, the virtual world connects with a service running in a Linux-based computer executing EPICS. Through their avatars, users interact with virtual replicas of this equipment as they would in real-life situations. Some actions can be used to simulate the behavior of a real system to enhance the training process, while others can be used to make real changes remotely in the physical system.

  19. Demo: Distributed Real-Time Generative 3D Hand Tracking using Edge GPGPU Acceleration

    DEFF Research Database (Denmark)

    Qammaz, Ammar; Kosta, Sokol; Kyriazis, Nikolaos

    2018-01-01

    This work demonstrates a real-time 3D hand tracking application that runs via computation offloading. The proposed framework enables the application to run on low-end mobile devices such as laptops and tablets, despite the fact that they lack the sufficient hardware to perform the required computations locally. The network connection takes the place of a GPGPU accelerator and sharing resources with a larger workstation becomes the acceleration mechanism. The unique properties of a generative optimizer are examined and constitute a challenging use-case, since the requirement for real…

  20. ArtifactVis2: Managing real-time archaeological data in immersive 3D environments

    KAUST Repository

    Smith, Neil; Knabb, Kyle; Defanti, Connor; Weber, Philip P.; Schulze, Jü rgen P.; Prudhomme, Andrew; Kuester, Falko; Levy, Thomas E.; Defanti, Thomas A.

    2013-01-01

    In this paper, we present a stereoscopic research and training environment for archaeologists called ArtifactVis2. This application enables the management and visualization of diverse types of cultural datasets within a collaborative virtual 3D

  1. Virtual Cerebral Aneurysm Clipping with Real-Time Haptic Force Feedback in Neurosurgical Education.

    Science.gov (United States)

    Gmeiner, Matthias; Dirnberger, Johannes; Fenz, Wolfgang; Gollwitzer, Maria; Wurm, Gabriele; Trenkler, Johannes; Gruber, Andreas

    2018-04-01

    Realistic, safe, and efficient modalities for simulation-based training are highly warranted to enhance the quality of surgical education, and they should be incorporated in resident training. The aim of this study was to develop a patient-specific virtual cerebral aneurysm-clipping simulator with haptic force feedback and real-time deformation of the aneurysm and vessels. A prototype simulator was developed from 2012 to 2016. Evaluation of virtual clipping by blood flow simulation was integrated in this software, and the prototype was evaluated by 18 neurosurgeons. In 4 patients with different middle cerebral artery aneurysms, virtual clipping was performed after real-life surgery, and surgical results were compared regarding clip application, surgical trajectory, and blood flow. After head positioning and craniotomy, bimanual virtual aneurysm clipping with an original forceps was performed. Blood flow simulation demonstrated residual aneurysm filling or branch stenosis. The simulator improved anatomic understanding for 89% of neurosurgeons. Simulation of head positioning and craniotomy was considered realistic by 89% and 94% of users, respectively. Most participants agreed that this simulator should be integrated into neurosurgical education (94%). Our illustrative cases demonstrated that virtual aneurysm surgery was possible using the same trajectory as in real-life cases. Both virtual clipping and blood flow simulation were realistic in broad-based but not calcified aneurysms. Virtual clipping of a calcified aneurysm could be performed using the same surgical trajectory, but not the same clip type. We have successfully developed a virtual aneurysm-clipping simulator. Next, we will prospectively evaluate this device for surgical procedure planning and education. Copyright © 2018 Elsevier Inc. All rights reserved.

  2. The Virtual Dressing Room

    DEFF Research Database (Denmark)

    Holte, Michael Boelstoft

    2013-01-01

    This paper presents a review of recent developments and future perspectives, addressing the problem of creating a virtual dressing room. First, we review the current state-of-the-art of existing solutions and discuss their applicability and limitations. We categorize the existing solutions into three kinds: (1) virtual real-time 2D image/video techniques, where the consumer gets to superimpose the clothes on their real-time video to visualize themselves wearing the clothes. (2) 2D and 3D mannequins, where a web-application uses the body measurements provided by the customer, to superimpose… and their demands to a virtual dressing room.

  3. Time-Predictable Virtual Memory

    DEFF Research Database (Denmark)

    Puffitsch, Wolfgang; Schoeberl, Martin

    2016-01-01

    Virtual memory is an important feature of modern computer architectures. For hard real-time systems, memory protection is a particularly interesting feature of virtual memory. However, current memory management units are not designed for time-predictability and therefore cannot be used in such systems. This paper investigates the requirements on virtual memory from the perspective of hard real-time systems and presents the design of a time-predictable memory management unit. Our evaluation shows that the proposed design can be implemented efficiently. The design allows address translation and address range checking in constant time of two clock cycles on a cache miss. This constant time is in strong contrast to the possible cost of a miss in a translation look-aside buffer in traditional virtual memory organizations. Compared to a platform without a memory management unit, these two additional…
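    The constant-time translation and range check can be illustrated with a fixed-size segment table: the segment index comes straight from the top address bits, so lookup is a single indexed read with no TLB and no miss path. The table layout, sizes, and names below are illustrative assumptions, not the paper's hardware design:

```python
SEG_BITS = 20    # offset bits within a segment (1 MiB segments)
SEG_COUNT = 16   # fixed number of segment slots

def translate(seg_table, vaddr):
    """Constant-time virtual-to-physical translation with range checking.

    Each entry of `seg_table` is (base, limit, valid). Every path through
    this function performs the same bounded amount of work.
    """
    index = vaddr >> SEG_BITS
    offset = vaddr & ((1 << SEG_BITS) - 1)
    if index >= SEG_COUNT:
        raise MemoryError("segment index out of range")
    base, limit, valid = seg_table[index]
    if not valid or offset >= limit:
        raise MemoryError("address range check failed")
    return base + offset
```

Because there is no associative lookup that can miss, the worst-case translation cost is fixed, which is the property a hard real-time system needs for WCET analysis.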

  4. An experiment of a 3D real-time robust visual odometry for intelligent vehicles

    OpenAIRE

    Rodriguez Florez , Sergio Alberto; Fremont , Vincent; Bonnifait , Philippe

    2009-01-01

    International audience; Vision systems are nowadays very promising for many on-board vehicles perception functionalities, like obstacles detection/recognition and ego-localization. In this paper, we present a 3D visual odometric method that uses a stereo-vision system to estimate the 3D ego-motion of a vehicle in outdoor road conditions. In order to run in real-time, the studied technique is sparse meaning that it makes use of feature points that are tracked during several frames. A robust sc...

  5. Real Time Animation of Virtual Humans: A Trade-off Between Naturalness and Control

    NARCIS (Netherlands)

    van Welbergen, H.; van Basten, B.J.H.; Egges, A.; Ruttkay, Z.M.; Overmars, M.H.

    2010-01-01

    Virtual humans are employed in many interactive applications using 3D virtual environments, including (serious) games. The motion of such virtual humans should look realistic (or ‘natural’) and allow interaction with the surroundings and other (virtual) humans. Current animation techniques differ in

  6. Collaborative Virtual 3D Environment for Internet-Accessible Physics Experiments

    Directory of Open Access Journals (Sweden)

    Bettina Scheucher

    2009-08-01

Full Text Available Abstract—Immersive 3D worlds have increasingly raised the interest of researchers and practitioners for various learning and training settings over the last decade. These virtual worlds can provide multiple communication channels between users and improve presence and awareness in the learning process. Consequently, virtual 3D environments facilitate collaborative learning and training scenarios. In this paper we focus on the integration of internet-accessible physics experiments (iLabs) combined with the TEALsim 3D simulation toolkit in Project Wonderland, Sun's toolkit for creating collaborative 3D virtual worlds. Within such a collaborative environment these tools provide the opportunity for teachers and students to work together as avatars as they control actual equipment, visualize physical phenomena generated by the experiment, and discuss the results. In particular, we will outline the steps of integration, future goals, as well as the value of a collaboration space in Wonderland's virtual world.

  7. Augmented reality during robot-assisted laparoscopic partial nephrectomy: toward real-time 3D-CT to stereoscopic video registration.

    Science.gov (United States)

    Su, Li-Ming; Vagvolgyi, Balazs P; Agarwal, Rahul; Reiley, Carol E; Taylor, Russell H; Hager, Gregory D

    2009-04-01

    To investigate a markerless tracking system for real-time stereo-endoscopic visualization of preoperative computed tomographic imaging as an augmented display during robot-assisted laparoscopic partial nephrectomy. Stereoscopic video segments of a patient undergoing robot-assisted laparoscopic partial nephrectomy for tumor and another for a partial staghorn renal calculus were processed to evaluate the performance of a three-dimensional (3D)-to-3D registration algorithm. After both cases, we registered a segment of the video recording to the corresponding preoperative 3D-computed tomography image. After calibrating the camera and overlay, 3D-to-3D registration was created between the model and the surgical recording using a modified iterative closest point technique. Image-based tracking technology tracked selected fixed points on the kidney surface to augment the image-to-model registration. Our investigation has demonstrated that we can identify and track the kidney surface in real time when applied to intraoperative video recordings and overlay the 3D models of the kidney, tumor (or stone), and collecting system semitransparently. Using a basic computer research platform, we achieved an update rate of 10 Hz and an overlay latency of 4 frames. The accuracy of the 3D registration was 1 mm. Augmented reality overlay of reconstructed 3D-computed tomography images onto real-time stereo video footage is possible using iterative closest point and image-based surface tracking technology that does not use external navigation tracking systems or preplaced surface markers. Additional studies are needed to assess the precision and to achieve fully automated registration and display for intraoperative use.
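A bare-bones version of the iterative closest point idea underlying the 3D-to-3D registration can be sketched as follows (plain point-to-point ICP on synthetic data; the paper's modified variant and image-based surface tracking are not reproduced):

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=20):
    """Bare-bones point-to-point ICP aligning src onto dst."""
    R_total, t_total = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)                         # nearest-neighbor lookups
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                # closest-point matches
        matched = dst[idx]
        cs, cm = cur.mean(axis=0), matched.mean(axis=0)
        H = (cur - cs).T @ (matched - cm)       # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = cm - R @ cs
        cur = cur @ R.T + t                     # apply incremental update
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total

# Slightly misaligned synthetic "model" and "surface" point sets.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
a = 0.05
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
dst = src @ Rz.T + np.array([0.02, -0.03, 0.01])
R_est, t_est = icp(src, dst)
aligned = src @ R_est.T + t_est                 # converges onto dst
```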

  8. On the Feasibility of Real-Time 3D Hand Tracking using Edge GPGPU Acceleration

    DEFF Research Database (Denmark)

    Qammaz, A.; Kosta, S.; Kyriazis, N.

    2018-01-01

    This paper presents the case study of a non-intrusive porting of a monolithic C++ library for real-time 3D hand tracking, to the domain of edge-based computation. Towards a proof of concept, the case study considers a pair of workstations, a computationally powerful and a computationally weak one...

  9. Generating classes of 3D virtual mandibles for AR-based medical simulation.

    Science.gov (United States)

    Hippalgaonkar, Neha R; Sider, Alexa D; Hamza-Lup, Felix G; Santhanam, Anand P; Jaganathan, Bala; Imielinska, Celina; Rolland, Jannick P

    2008-01-01

Simulation and modeling represent promising tools for several application domains from engineering to forensic science and medicine. Advances in 3D imaging technology convey paradigms such as augmented reality (AR) and mixed reality inside promising simulation tools for the training industry. Motivated by the requirement for superimposing anatomically correct 3D models on a human patient simulator (HPS) and visualizing them in an AR environment, the purpose of this research effort was to develop and validate a method for scaling a source human mandible to a target human mandible within a 2 mm root mean square (RMS) error. Results show that, given the distance between two corresponding landmarks on two different mandibles, a relative scaling factor may be computed. Using this scaling factor, results show that a 3D virtual mandible model can be made morphometrically equivalent to a real target-specific mandible within a 1.30 mm RMS error. The virtual mandible may be further used as a reference target for registering other anatomic models, such as the lungs, on the HPS. Such registration will be made possible by physical constraints between the mandible and the spinal column in the horizontal normal rest position.
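The relative scaling factor described in this entry reduces to the ratio of a landmark distance on the target to the same landmark distance on the source. A toy sketch with invented coordinates:

```python
import numpy as np

def scale_to_target(src_verts, src_lm, tgt_lm):
    """Uniformly scale a source mesh so one landmark distance matches the target."""
    d_src = np.linalg.norm(src_lm[0] - src_lm[1])
    d_tgt = np.linalg.norm(tgt_lm[0] - tgt_lm[1])
    s = d_tgt / d_src                      # relative scaling factor
    return s * src_verts, s

def rms_error(a, b):
    """Root mean square distance between corresponding vertices."""
    return float(np.sqrt(np.mean(np.sum((a - b) ** 2, axis=1))))

# A toy "target" mandible and a source that is the same shape at half size.
tgt = np.array([[0.0, 0, 0], [40, 0, 0], [20, 30, 0], [20, 15, 10]])
src = 0.5 * tgt
scaled, s = scale_to_target(src, src[:2], tgt[:2])   # s == 2.0 here
```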

  10. Organizational Learning Goes Virtual?: A Study of Employees' Learning Achievement in Stereoscopic 3D Virtual Reality

    Science.gov (United States)

    Lau, Kung Wong

    2015-01-01

    Purpose: This study aims to deepen understanding of the use of stereoscopic 3D technology (stereo3D) in facilitating organizational learning. The emergence of advanced virtual technologies, in particular to the stereo3D virtual reality, has fundamentally changed the ways in which organizations train their employees. However, in academic or…

  11. Virtual hand: a 3D tactile interface to virtual environments

    Science.gov (United States)

    Rogowitz, Bernice E.; Borrel, Paul

    2008-02-01

    We introduce a novel system that allows users to experience the sensation of touch in a computer graphics environment. In this system, the user places his/her hand on an array of pins, which is moved about space on a 6 degree-of-freedom robot arm. The surface of the pins defines a surface in the virtual world. This "virtual hand" can move about the virtual world. When the virtual hand encounters an object in the virtual world, the heights of the pins are adjusted so that they represent the object's shape, surface, and texture. A control system integrates pin and robot arm motions to transmit information about objects in the computer graphics world to the user. It also allows the user to edit, change and move the virtual objects, shapes and textures. This system provides a general framework for touching, manipulating, and modifying objects in a 3-D computer graphics environment, which may be useful in a wide range of applications, including computer games, computer aided design systems, and immersive virtual worlds.

  12. Novel interactive virtual showcase based on 3D multitouch technology

    Science.gov (United States)

    Yang, Tao; Liu, Yue; Lu, You; Wang, Yongtian

    2009-11-01

A new interactive virtual showcase is proposed in this paper. With the help of virtual reality technology, the user of the proposed system can watch the virtual objects floating in the air from all four sides and interact with the virtual objects by touching the four surfaces of the virtual showcase. Unlike traditional multitouch systems, this system can not only realize multi-touch on a plane to implement 2D translation, 2D scaling, and 2D rotation of the objects; it can also realize 3D interaction with the virtual objects by recognizing and analyzing the multi-touch input that can be simultaneously captured from the four planes. Experimental results show the potential of the proposed system to be applied in the exhibition of historical relics and other precious goods.

  13. IPS – A SYSTEM FOR REAL-TIME NAVIGATION AND 3D MODELING

    Directory of Open Access Journals (Sweden)

    D. Grießbach

    2012-07-01

Full Text Available Reliable navigation and 3D modeling is a necessary requirement for any autonomous system in real world scenarios. The German Aerospace Center (DLR) developed a system providing precise information about the local position and orientation of a mobile platform as well as three-dimensional information about its environment in real time. This system, called the Integral Positioning System (IPS), can be applied in indoor and outdoor environments. To achieve high precision, reliability, integrity and availability, a multi-sensor approach was chosen. The important role of sensor data synchronization, system calibration and spatial referencing is emphasized because the data from several sensors has to be fused using a Kalman filter. A hardware operating system (HW-OS) is presented that facilitates the low-level integration of different interfaces. The benefit of this approach is an increased precision of synchronization at the expense of additional engineering costs. It will be shown that the additional effort is leveraged by the new design concept, since the HW-OS methodology allows a proven, flexible and fast design process, high re-usability of common components and consequently a higher reliability within the low-level sensor fusion. Another main focus of the paper is on the IPS software. The DLR developed, implemented and tested a flexible and extensible software concept for data grabbing, efficient data handling and data preprocessing (e.g., image rectification), the latter being essential for thematic data processing. The standard outputs of IPS are a trajectory of the moving platform and a high-density 3D point cloud of the current environment. This information is provided in real time. Based on these results, information processing on more abstract levels can be executed.
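The Kalman-filter fusion mentioned above can be illustrated with a minimal linear filter for a 1D constant-velocity platform observed through noisy position fixes. The state model and noise levels here are invented for illustration; IPS itself fuses several real sensors:

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])    # state transition for [pos, vel]
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 1e-4 * np.eye(2)                     # process noise covariance
R = np.array([[0.04]])                   # measurement noise covariance

def kf_step(x, P, z):
    x = F @ x                            # predict state
    P = F @ P @ F.T + Q                  # predict covariance
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    return x + K @ y, (np.eye(2) - K @ H) @ P

rng = np.random.default_rng(0)
x, P = np.zeros(2), np.eye(2)
true_vel = 1.0
for k in range(200):
    true_pos = true_vel * (k + 1) * dt
    z = np.array([true_pos + 0.2 * rng.normal()])   # noisy position fix
    x, P = kf_step(x, P, z)
# x[0] tracks the position and x[1] converges toward the true velocity
```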

  14. METHODOLOGY TO CREATE DIGITAL AND VIRTUAL 3D ARTEFACTS IN ARCHAEOLOGY

    Directory of Open Access Journals (Sweden)

    Calin Neamtu

    2016-12-01

Full Text Available The paper presents a methodology to create 3D digital and virtual artefacts in the field of archaeology using CAD software solutions. The methodology includes the following steps: the digitalization process, digital restoration and dissemination within a virtual environment. The resulting 3D digital artefacts have to be created in file formats that are compatible with a large variety of operating systems and hardware configurations, such as computers, graphic tablets and smartphones. The compatibility and portability of these 3D file formats have led to a series of quality-related compromises to the 3D models in order to integrate them into a wide variety of applications running on different hardware configurations. The paper illustrates multiple virtual reality and augmented reality applications that make use of the virtual 3D artefacts generated using this methodology.

  15. Real-time 3D vectorcardiography: an application for didactic use

    International Nuclear Information System (INIS)

    Daniel, G; Lissa, G; Redondo, D Medina; Vasquez, L; Zapata, D

    2007-01-01

    The traditional approach to teach the physiological basis of electrocardiography, based only on textbooks, turns out to be insufficient or confusing for students of biomedical sciences. The addition of laboratory practice to the curriculum enables students to approach theoretical aspects from a hands-on experience, resulting in a more efficient and deeper knowledge of the phenomena of interest. Here, we present the development of a PC-based application meant to facilitate the understanding of cardiac bioelectrical phenomena by visualizing in real time the instantaneous 3D cardiac vector. The system uses 8 standard leads from a 12-channel electrocardiograph. The application interface has pedagogic objectives, and facilitates the observation of cardiac depolarization and repolarization and its temporal relationship with the ECG, making it simpler to interpret

  16. 3D super-virtual refraction interferometry

    KAUST Repository

    Lu, Kai

    2014-08-05

    Super-virtual refraction interferometry enhances the signal-to-noise ratio of far-offset refractions. However, when applied to 3D cases, traditional 2D SVI suffers because the stationary positions of the source-receiver pairs might be any place along the recording plane, not just along a receiver line. Moreover, the effect of enhancing the SNR can be limited because of the limitations in the number of survey lines, irregular line geometries, and azimuthal range of arrivals. We have developed a 3D SVI method to overcome these problems. By integrating along the source or receiver lines, the cross-correlation or the convolution result of a trace pair with the source or receiver at the stationary position can be calculated without the requirement of knowing the stationary locations. In addition, the amplitudes of the cross-correlation and convolution results are largely strengthened by integration, which is helpful to further enhance the SNR. In this paper, both synthetic and field data examples are presented, demonstrating that the super-virtual refractions generated by our method have accurate traveltimes and much improved SNR.
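The stacking idea, integrating cross-correlations of a trace pair over many sources so that the common (stationary) lag is reinforced while noise averages down, can be illustrated with synthetic spike traces (all numbers invented):

```python
import numpy as np

n_t, n_src, true_lag = 256, 40, 17        # samples, sources, differential moveout
rng = np.random.default_rng(1)

stack = np.zeros(2 * n_t - 1)
for _ in range(n_src):
    t0 = rng.integers(30, 150)            # refraction arrival at receiver A
    a = np.zeros(n_t); a[t0] = 1.0
    b = np.zeros(n_t); b[t0 + true_lag] = 1.0   # same event, later at B
    a += 0.2 * rng.normal(size=n_t)       # additive noise on each trace
    b += 0.2 * rng.normal(size=n_t)
    stack += np.correlate(b, a, mode="full")    # integrate over sources

lag = int(np.argmax(stack)) - (n_t - 1)   # recovered differential traveltime
```

A single correlation is noisy, but the stacked peak at the common lag grows linearly with the number of sources while incoherent noise grows only with its square root.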

  17. Interactive Space(s) -- the CTSG: bridging the real and virtual

    NARCIS (Netherlands)

    Eliëns, A.P.W.; Mao, W.; Vermeersch, L

    2010-01-01

    In this paper, ideas will be presented how to realize games or playful activities in interactive space(s), having a real (spatial) component as well as a representation in virtual 2D or 3D space, by means of web pages and/or online games. Apart from general design criteria, the paper discusses a

  18. Interactive 3D visualization for theoretical virtual observatories

    Science.gov (United States)

    Dykes, T.; Hassan, A.; Gheller, C.; Croton, D.; Krokos, M.

    2018-06-01

    Virtual observatories (VOs) are online hubs of scientific knowledge. They encompass a collection of platforms dedicated to the storage and dissemination of astronomical data, from simple data archives to e-research platforms offering advanced tools for data exploration and analysis. Whilst the more mature platforms within VOs primarily serve the observational community, there are also services fulfilling a similar role for theoretical data. Scientific visualization can be an effective tool for analysis and exploration of data sets made accessible through web platforms for theoretical data, which often contain spatial dimensions and properties inherently suitable for visualization via e.g. mock imaging in 2D or volume rendering in 3D. We analyse the current state of 3D visualization for big theoretical astronomical data sets through scientific web portals and virtual observatory services. We discuss some of the challenges for interactive 3D visualization and how it can augment the workflow of users in a virtual observatory context. Finally we showcase a lightweight client-server visualization tool for particle-based data sets, allowing quantitative visualization via data filtering, highlighting two example use cases within the Theoretical Astrophysical Observatory.

  20. Real-time physics-based 3D biped character animation using an inverted pendulum model.

    Science.gov (United States)

    Tsai, Yao-Yang; Lin, Wen-Chieh; Cheng, Kuangyou B; Lee, Jehee; Lee, Tong-Yee

    2010-01-01

    We present a physics-based approach to generate 3D biped character animation that can react to dynamical environments in real time. Our approach utilizes an inverted pendulum model to online adjust the desired motion trajectory from the input motion capture data. This online adjustment produces a physically plausible motion trajectory adapted to dynamic environments, which is then used as the desired motion for the motion controllers to track in dynamics simulation. Rather than using Proportional-Derivative controllers whose parameters usually cannot be easily set, our motion tracking adopts a velocity-driven method which computes joint torques based on the desired joint angular velocities. Physically correct full-body motion of the 3D character is computed in dynamics simulation using the computed torques and dynamical model of the character. Our experiments demonstrate that tracking motion capture data with real-time response animation can be achieved easily. In addition, physically plausible motion style editing, automatic motion transition, and motion adaptation to different limb sizes can also be generated without difficulty.
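The velocity-driven tracking method can be sketched for a single joint: torque is proportional to the error between desired and current angular velocity, with no PD position term. The gain, inertia, and time step below are illustrative values, not the paper's:

```python
def velocity_driven_torque(omega_desired, omega, k_v=50.0):
    """Joint torque from the desired angular velocity (no position gains)."""
    return k_v * (omega_desired - omega)

inertia, dt = 2.0, 0.01
omega, omega_des = 0.0, 1.5               # current and desired velocity, rad/s
for _ in range(200):
    tau = velocity_driven_torque(omega_des, omega)
    omega += (tau / inertia) * dt         # integrate angular acceleration
# omega has converged to the desired angular velocity
```

The appeal noted in the abstract is that a single velocity gain is easier to set than the stiffness/damping pair of a PD position controller.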

  1. Esophagogastric Junction pressure morphology: comparison between a station pull-through and real-time 3D-HRM representation.

    Science.gov (United States)

    Nicodème, F; Lin, Z; Pandolfino, J E; Kahrilas, P J

    2013-09-01

    Esophagogastric junction (EGJ) competence is the fundamental defense against reflux making it of great clinical significance. However, characterizing EGJ competence with conventional manometric methodologies has been confounded by its anatomic and physiological complexity. Recent technological advances in miniaturization and electronics have led to the development of a novel device that may overcome these challenges. Nine volunteer subjects were studied with a novel 3D-HRM device providing 7.5 mm axial and 45° radial pressure resolution within the EGJ. Real-time measurements were made at rest and compared to simulations of a conventional pull-through made with the same device. Moreover, 3D-HRM recordings were analyzed to differentiate contributing pressure signals within the EGJ attributable to lower esophageal sphincter (LES), diaphragm, and vasculature. 3D-HRM recordings suggested that sphincter length assessed by a pull-through method greatly exaggerated the estimate of LES length by failing to discriminate among circumferential contractile pressure and asymmetric extrinsic pressure signals attributable to diaphragmatic and vascular structures. Real-time 3D EGJ recordings found that the dominant constituents of EGJ pressure at rest were attributable to the diaphragm. 3D-HRM permits real-time recording of EGJ pressure morphology facilitating analysis of the EGJ constituents responsible for its function as a reflux barrier making it a promising tool in the study of GERD pathophysiology. The enhanced axial and radial recording resolution of the device should facilitate further studies to explore perturbations in the physiological constituents of EGJ pressure in health and disease. © 2013 John Wiley & Sons Ltd.

2. 3D Model Visualization Enhancements in Real-Time Game Engines

    Science.gov (United States)

    Merlo, A.; Sánchez Belenguer, C.; Vendrell Vidal, E.; Fantini, F.; Aliperta, A.

    2013-02-01

    This paper describes two procedures used to disseminate tangible cultural heritage through real-time 3D simulations providing accurate-scientific representations. The main idea is to create simple geometries (with low-poly count) and apply two different texture maps to them: a normal map and a displacement map. There are two ways to achieve models that fit with normal or displacement maps: with the former (normal maps), the number of polygons in the reality-based model may be dramatically reduced by decimation algorithms and then normals may be calculated by rendering them to texture solutions (baking). With the latter, a LOD model is needed; its topology has to be quad-dominant for it to be converted to a good quality subdivision surface (with consistent tangency and curvature all over). The subdivision surface is constructed using methodologies for the construction of assets borrowed from character animation: these techniques have been recently implemented in many entertainment applications known as "retopology". The normal map is used as usual, in order to shade the surface of the model in a realistic way. The displacement map is used to finish, in real-time, the flat faces of the object, by adding the geometric detail missing in the low-poly models. The accuracy of the resulting geometry is progressively refined based on the distance from the viewing point, so the result is like a continuous level of detail, the only difference being that there is no need to create different 3D models for one and the same object. All geometric detail is calculated in real-time according to the displacement map. This approach can be used in Unity, a real-time 3D engine originally designed for developing computer games. It provides a powerful rendering engine, fully integrated with a complete set of intuitive tools and rapid workflows that allow users to easily create interactive 3D contents. 
With the release of Unity 4.0, new rendering features have been added, including Direct
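The displacement-map finishing described in this entry amounts to offsetting each vertex along its normal by a height sampled from the map. A schematic CPU-side version (engines such as Unity evaluate this on the GPU; all names and data here are invented):

```python
import numpy as np

def displace(vertices, normals, uvs, height_map, scale=0.1):
    """Offset each vertex along its normal by the sampled map height."""
    h, w = height_map.shape
    # nearest-neighbor sampling of the displacement map at each UV
    px = np.clip((uvs[:, 0] * (w - 1)).round().astype(int), 0, w - 1)
    py = np.clip((uvs[:, 1] * (h - 1)).round().astype(int), 0, h - 1)
    return vertices + scale * height_map[py, px][:, None] * normals

# A flat quad facing +z with a constant-height map rises uniformly.
quad = np.array([[0.0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
normals = np.tile([0.0, 0.0, 1.0], (4, 1))
uvs = quad[:, :2]
out = displace(quad, normals, uvs, np.ones((8, 8)), scale=0.25)
```

In the distance-dependent scheme the paper describes, the tessellation density (and hence how finely the map is sampled) would vary with the viewing distance.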

  3. COGNITIVE ASPECTS OF COLLABORATION IN 3D VIRTUAL ENVIRONMENTS

    Directory of Open Access Journals (Sweden)

    V. Juřík

    2016-06-01

Full Text Available Human-computer interaction has entered the 3D era. The most important models representing spatial information — maps — are being transferred into 3D versions with regard to the specific content to be displayed. Virtual worlds (VWs) have become a promising area of interest because of the possibility to dynamically modify content and to support multi-user cooperation in solving tasks regardless of physical presence. They can be used for sharing and elaborating information via virtual images or avatars. The attractiveness of VWs is also emphasized by the possibility to measure operators' actions and complex strategies. Collaboration in 3D environments is a crucial issue in many areas where visualizations are important for group cooperation. Within a specific 3D user interface, the operators' ability to manipulate the displayed content is explored with regard to such phenomena as situation awareness, cognitive workload and human error. For this purpose, VWs offer a great number of tools for measuring operators' responses, such as recording virtual movement or spots of interest in the visual field. The study focuses on the methodological issues of measuring the usability of 3D VWs and comparing them with the existing principles of 2D maps. We explore operators' strategies to reach and interpret information with regard to the specific type of visualization and different levels of immersion.

  5. The Engelbourg's ruins: from 3D TLS point cloud acquisition to 3D virtual and historic models

    Science.gov (United States)

    Koehl, Mathieu; Berger, Solveig; Nobile, Sylvain

    2014-05-01

The 3D model integrated into a GIS is now a valuable means of communication for promoting the site. Accessible to all, including remote visitors, it allows the castle and its history to be discovered in an educational and relevant way. From an archaeological point of view, the 3D model offers an overall view of, and perspective on, the constitution of the site that a 2D document cannot easily provide. 3D navigation and the integration of 2D data into the model allow vestiges to be analyzed in new ways, contributing to the faster formulation of new hypotheses. Complementing other methods already used in archaeology, analysis through 3D visualization saves scientists significant time, which they can dedicate to the more thorough study of previously set-aside hypotheses. In parallel, we created several panoramas and set up a virtual, interactive visit of the site. To perpetuate this project, and to give future users the means to continue and update this study, we tested and established the processing methodologies. We were thus able to produce clear, orderly procedures applicable to the case of Engelbourg as well as to other similar studies. Finally, some hypotheses allow first versions of the original state of the castle to be reconstructed virtually.

  6. Application of computer virtual simulation technology in 3D animation production

    Science.gov (United States)

    Mo, Can

    2017-11-01

With the continuous development of computer technology, virtual simulation systems have been further optimized and improved, and the technology is now widely used in various fields of social development, such as city construction, interior design, industrial simulation and tourism education. This paper mainly introduces the use of virtual simulation technology in 3D animation. Based on an analysis of the characteristics of virtual simulation technology, the ways and means of applying it in 3D animation are investigated, with the aim of providing a reference for future improvement of 3D effects.

  7. 3D Virtual Learning Environments in Education: A Meta-Review

    Science.gov (United States)

    Reisoglu, I.; Topu, B.; Yilmaz, R.; Karakus Yilmaz, T.; Göktas, Y.

    2017-01-01

    The aim of this study is to investigate recent empirical research studies about 3D virtual learning environments. A total of 167 empirical studies that involve the use of 3D virtual worlds in education were examined by meta-review. Our findings show that the "Second Life" platform has been frequently used in studies. Among the reviewed…

  8. 3D Flow visualization in virtual reality

    Science.gov (United States)

    Pietraszewski, Noah; Dhillon, Ranbir; Green, Melissa

    2017-11-01

By viewing fluid dynamic isosurfaces in virtual reality (VR), many of the issues associated with the rendering of three-dimensional objects on a two-dimensional screen can be addressed. In addition, viewing a variety of unsteady 3D data sets in VR opens up novel opportunities for education and community outreach. In this work, the vortex wake of a bio-inspired pitching panel was visualized using a three-dimensional structural model of Q-criterion isosurfaces rendered in virtual reality using the HTC Vive. Utilizing the Unity cross-platform gaming engine, a program was developed to allow the user to control and change this model's position and orientation in three-dimensional space. In addition to controlling the model's position and orientation, the user can "scroll" forward and backward in time to analyze the formation and shedding of vortices in the wake. Finally, the user can toggle between different quantities, while keeping the time step constant, to analyze flow parameter relationships at specific times during flow development. The information, data, or work presented herein was funded in part by an award from NYS Department of Economic Development (DED) through the Syracuse Center of Excellence.

  9. Analysis of scalability of high-performance 3D image processing platform for virtual colonoscopy.

    Science.gov (United States)

    Yoshida, Hiroyuki; Wu, Yin; Cai, Wenli

    2014-03-19

One of the key challenges in three-dimensional (3D) medical imaging is to enable the fast turn-around time that is often required for interactive or real-time response. This inevitably requires not only high computational power but also high memory bandwidth due to the massive amount of data that need to be processed. For this purpose, we previously developed a software platform for high-performance 3D medical image processing, called the HPC 3D-MIP platform, which employs increasingly available and affordable commodity computing systems such as multicore, cluster, and cloud computing systems. To achieve scalable high-performance computing, the platform employed size-adaptive, distributable block volumes as a core data structure for efficient parallelization of a wide range of 3D-MIP algorithms, supported task scheduling for efficient load distribution and balancing, and consisted of layered parallel software libraries that allow image processing applications to share common functionalities. We evaluated the performance of the HPC 3D-MIP platform by applying it to computationally intensive processes in virtual colonoscopy. Experimental results showed a 12-fold performance improvement on a workstation with 12-core CPUs over the original sequential implementation of the processes, indicating the efficiency of the platform. Analysis of performance scalability based on Amdahl's law for symmetric multicore chips showed the potential for high performance scalability of the HPC 3D-MIP platform when a larger number of cores is available.
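The Amdahl's-law analysis mentioned above gives a quick sanity check on the reported 12-fold speedup on 12 cores, which is only attainable when the serial fraction of the work is close to zero:

```python
# Amdahl's law: speedup on n cores when a fraction p of the work parallelizes.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# With even 5% serial work, 12 cores give well under 12x; a near-12x result
# implies the parallel fraction is close to 1.
ideal = amdahl_speedup(1.0, 12)      # 12.0
capped = amdahl_speedup(0.95, 12)    # about 7.74
```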

  10. Real-time registration of 3D to 2D ultrasound images for image-guided prostate biopsy.

    Science.gov (United States)

    Gillies, Derek J; Gardi, Lori; De Silva, Tharindu; Zhao, Shuang-Ren; Fenster, Aaron

    2017-09-01

    During image-guided prostate biopsy, needles are targeted at tissues that are suspicious of cancer to obtain specimen for histological examination. Unfortunately, patient motion causes targeting errors when using an MR-transrectal ultrasound (TRUS) fusion approach to augment the conventional biopsy procedure. This study aims to develop an automatic motion correction algorithm approaching the frame rate of an ultrasound system to be used in fusion-based prostate biopsy systems. Two modes of operation have been investigated for the clinical implementation of the algorithm: motion compensation using a single user initiated correction performed prior to biopsy, and real-time continuous motion compensation performed automatically as a background process. Retrospective 2D and 3D TRUS patient images acquired prior to biopsy gun firing were registered using an intensity-based algorithm utilizing normalized cross-correlation and Powell's method for optimization. 2D and 3D images were downsampled and cropped to estimate the optimal amount of image information that would perform registrations quickly and accurately. The optimal search order during optimization was also analyzed to avoid local optima in the search space. Error in the algorithm was computed using target registration errors (TREs) from manually identified homologous fiducials in a clinical patient dataset. The algorithm was evaluated for real-time performance using the two different modes of clinical implementations by way of user initiated and continuous motion compensation methods on a tissue mimicking prostate phantom. After implementation in a TRUS-guided system with an image downsampling factor of 4, the proposed approach resulted in a mean ± std TRE and computation time of 1.6 ± 0.6 mm and 57 ± 20 ms respectively. The user initiated mode performed registrations with in-plane, out-of-plane, and roll motions computation times of 108 ± 38 ms, 60 ± 23 ms, and 89 ± 27 ms, respectively, and corresponding
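The registration approach, maximizing normalized cross-correlation over downsampled images with Powell's method, can be sketched in 2D. This is a schematic reconstruction with invented test data (a translated Gaussian blob), not the authors' implementation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import shift as nd_shift

def ncc(a, b):
    """Normalized cross-correlation of two equally sized images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def register_translation(fixed, moving, downsample=4):
    """Recover a 2D shift by maximizing NCC with Powell's method."""
    f = fixed[::downsample, ::downsample]      # downsampling speeds up
    m = moving[::downsample, ::downsample]     # each cost evaluation
    cost = lambda p: -ncc(f, nd_shift(m, p, order=1, mode="nearest"))
    res = minimize(cost, x0=[0.0, 0.0], method="Powell")
    return res.x * downsample                  # shift in full-resolution pixels

# Synthetic example: a smooth image translated by a known offset (4, 8).
y, x = np.mgrid[0:128, 0:128]
fixed = np.exp(-(((y - 64) / 20.0) ** 2 + ((x - 56) / 20.0) ** 2))
moving = np.exp(-(((y - 60) / 20.0) ** 2 + ((x - 48) / 20.0) ** 2))
est = register_translation(fixed, moving)
```

The downsampling factor trades registration accuracy against the per-frame computation time, which is the balance the study quantifies.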

  11. DEVELOPMENT OF A VIRTUAL MUSEUM INCLUDING A 4D PRESENTATION OF BUILDING HISTORY IN VIRTUAL REALITY

    OpenAIRE

    T. P. Kersten; F. Tschirschwitz; S. Deggim

    2017-01-01

    In the last two decades the definition of the term “virtual museum” changed due to rapid technological developments. Using today’s available 3D technologies a virtual museum is no longer just a presentation of collections on the Internet or a virtual tour of an exhibition using panoramic photography. On one hand, a virtual museum should enhance a museum visitor's experience by providing access to additional materials for review and knowledge deepening either before or after the real ...

  12. Virtual 3D planning of tracheostomy placement and clinical applicability of 3D cannula design: a three-step study.

    Science.gov (United States)

    de Kleijn, Bertram J; Kraeima, Joep; Wachters, Jasper E; van der Laan, Bernard F A M; Wedman, Jan; Witjes, M J H; Halmos, Gyorgy B

    2018-02-01

    We aimed to investigate the potential of 3D virtual planning of tracheostomy tube placement and 3D cannula design to prevent tracheostomy complications due to inadequate cannula position. 3D models of commercially available cannulas were positioned in 3D models of the airway. In study (1), a cohort that underwent tracheostomy between 2013 and 2015 was selected (n = 26). The cannula was virtually placed in the airway in the pre-operative CT scan, and its position was compared to the cannula position on post-operative CT scans. In study (2), a cohort with neuromuscular disease (n = 14) was analyzed. Virtual cannula placement was performed in CT scans to test whether problems could be anticipated. Finally (3), for a patient with Duchenne muscular dystrophy and complications from a conventional tracheostomy cannula, a patient-specific cannula was 3D designed, fabricated, and placed. (1) The 3D planned and post-operative tracheostomy positions differed significantly. (2) Three groups of patients were identified: (A) normal anatomy; (B) abnormal anatomy in which a commercially available cannula fits; and (C) abnormal anatomy in which a custom-made cannula may be necessary. (3) The position of the custom-designed cannula was optimal and the trachea healed. Virtual planning of the tracheostomy did not correlate with actual cannula position. Identifying patients with abnormal airway anatomy in whom commercially available cannulas cannot be optimally positioned is advantageous. Patient-specific cannula design based on 3D virtualization of the airway was beneficial in a patient with abnormal airway anatomy.

  13. 3D-SURFER 2.0: web platform for real-time search and characterization of protein surfaces.

    Science.gov (United States)

    Xiong, Yi; Esquivel-Rodriguez, Juan; Sael, Lee; Kihara, Daisuke

    2014-01-01

    The increasing number of uncharacterized protein structures necessitates the development of computational approaches for function annotation using protein tertiary structures. Protein structure database search is the basis of any structure-based functional elucidation of proteins. 3D-SURFER is a web platform for real-time protein surface comparison of a given protein structure against the entire PDB using 3D Zernike descriptors. It can smoothly navigate the protein structure space in real time from one query structure to another. A major new feature of Release 2.0 is the ability to compare the protein surface of a single chain, a single domain, or a single complex against databases of protein chains, domains, complexes, or a combination of all three in the latest PDB. Additionally, two types of protein surfaces can now be compared: all-atom surfaces and backbone-atom surfaces. The server can also accept a batch job for a large number of database searches. Pockets in protein surfaces can be identified by VisGrid and LIGSITEcsc. The server is available at http://kiharalab.org/3d-surfer/.
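    Because 3D Zernike descriptors are fixed-length, rotation-invariant vectors, the real-time retrieval described above reduces to nearest-neighbor ranking by vector distance. A minimal sketch of that comparison step (the PDB IDs and 3-component descriptor values below are made up for illustration; real descriptors are 121-dimensional):

```python
import numpy as np

# Hypothetical precomputed 3D Zernike descriptor vectors, keyed by PDB chain ID.
database = {
    "1abcA": np.array([0.90, 0.10, 0.30]),
    "2xyzB": np.array([0.20, 0.80, 0.50]),
    "3defC": np.array([0.88, 0.12, 0.28]),
}

def rank_by_surface_similarity(query, db):
    """Rank database entries by Euclidean distance between descriptor
    vectors -- the comparison underlying real-time surface retrieval."""
    return sorted(db, key=lambda pdb_id: float(np.linalg.norm(db[pdb_id] - query)))

hits = rank_by_surface_similarity(np.array([0.90, 0.10, 0.30]), database)
# "1abcA" (identical descriptor) ranks first, then the near-match "3defC"
```

Since each comparison is a single vector norm, scanning even the full PDB stays interactive, which is what makes the "smooth navigation" of structure space feasible.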

  14. GE3D: A Virtual Campus for Technology-Enhanced Distance Learning

    Directory of Open Access Journals (Sweden)

    Jean Grieu

    2010-09-01

    Full Text Available A lot of learning system platforms are used all over the world, but these conventional e-learning platforms are aimed at students who are used to working on their own. Our students are young (19–22 years old) and in their first year at the university. Following extensive interviews with our students, we designed GE3D, an e-learning platform, according to their expectations and our criteria. In this paper, we describe the students' demands resulting from the interviews. Then we describe our virtual campus. Even if our platform uses some elements coming from the world of 3D games, it remains a pedagogical tool. Using this technology, we developed a 3D representation of the real world. GE3D is a multi-user tool with synchronous technology, an intuitive interface for end users, and an embedded Intelligent Tutoring System to support learners. We also describe the process of a lecture on Programmable Logic Controllers (PLCs) in this new universe.

  15. Realistic terrain visualization based on 3D virtual world technology

    Science.gov (United States)

    Huang, Fengru; Lin, Hui; Chen, Bin; Xiao, Cai

    2010-11-01

    The rapid advances in information technologies, e.g., networking, graphics processing, and virtual worlds, have provided challenges and opportunities for new capabilities in information systems, Internet applications, and virtual geographic environments, especially geographic visualization and collaboration. In order to achieve meaningful geographic capabilities, we need to explore and understand how these technologies can be used to construct virtual geographic environments that help engage geographic research. The generation of three-dimensional (3D) terrain plays an important part in geographic visualization, computer simulation, and virtual geographic environment applications. This paper introduces concepts and technologies of virtual worlds and virtual geographic environments, and explores the integration of realistic terrain with other geographic objects and phenomena of the natural geographic environment based on SL/OpenSim virtual world technologies. Realistic 3D terrain visualization is a foundation for constructing a mirror world or a sandbox model of the Earth's landscape and geographic environment. The capabilities of interaction and collaboration on geographic information are discussed as well. Further virtual geographic applications can be developed on the foundation of realistic terrain visualization in virtual environments.

  16. High-accuracy and real-time 3D positioning, tracking system for medical imaging applications based on 3D digital image correlation

    Science.gov (United States)

    Xue, Yuan; Cheng, Teng; Xu, Xiaohai; Gao, Zeren; Li, Qianqian; Liu, Xiaojing; Wang, Xing; Song, Rui; Ju, Xiangyang; Zhang, Qingchuan

    2017-01-01

    This paper presents a system for positioning markers and tracking the pose of a rigid object with 6 degrees of freedom in real time using 3D digital image correlation (DIC), with two examples from medical imaging applications. The traditional DIC method was improved to meet real-time requirements by simplifying the computations of the integer-pixel search. Experiments were carried out, and the results indicated that the new method improved computational efficiency by about 4-10 times in comparison with the traditional DIC method. The system was aimed at orthognathic surgery navigation in order to track the maxilla segment after LeFort I osteotomy. Experiments showed that the noise for a static point was at the level of 10^-3 mm and the measurement accuracy was 0.009 mm. The system was also demonstrated on skin surface shape evaluation of a hand during finger stretching exercises, which indicated great potential for tracking muscle and skin movements.
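    The integer-pixel search that this abstract describes as the bottleneck is, at its core, a brute-force NCC template match over a small window around the marker's last known position. The sketch below shows that coarse step on synthetic data; it is a simplified illustration, not the paper's specific speed-up, and in a real DIC pipeline it would be followed by subpixel refinement.

```python
import numpy as np

def integer_pixel_search(image, template, center, radius):
    """Brute-force integer-pixel NCC search in a (2*radius+1)^2 window
    around `center`; returns the top-left corner of the best match."""
    th, tw = template.shape
    tz = (template - template.mean()) / (template.std() + 1e-12)
    best, best_pos = -2.0, center
    cy, cx = center
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            patch = image[y:y + th, x:x + tw]
            pz = (patch - patch.mean()) / (patch.std() + 1e-12)
            score = float((tz * pz).mean())   # NCC of template vs. patch
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos

rng = np.random.default_rng(4)
img = rng.random((80, 80))
tpl = img[40:56, 30:46].copy()      # 16x16 marker template taken at (40, 30)
pos = integer_pixel_search(img, tpl, center=(43, 27), radius=6)   # → (40, 30)
```

Restricting `radius` to the expected inter-frame motion is exactly the kind of simplification that turns this step from a full-image scan into a real-time one.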

  17. Intelligent web agents for a 3D virtual community

    Science.gov (United States)

    Dave, T. M.; Zhang, Yanqing; Owen, G. S. S.; Sunderraman, Rajshekhar

    2003-08-01

    In this paper, we propose an Avatar-based intelligent agent technique for 3D Web-based virtual communities based on distributed artificial intelligence, intelligent agent techniques, and databases and knowledge bases in a digital library. One of the goals of this joint NSF (IIS-9980130) and ACM SIGGRAPH Education Committee (ASEC) project is to create a virtual community of educators and students who have a common interest in computer graphics, visualization, and interactive techniques. In this virtual community (ASEC World), Avatars will represent the educators, students, and other visitors to the world. Intelligent agents, represented as specially dressed Avatars, will be available to assist the visitors to ASEC World. The basic Web client-server architecture of the intelligent knowledge-based avatars is given. Importantly, the intelligent Web agent software system for the 3D virtual community was implemented successfully.

  18. Virtual reality 3D headset based on DMD light modulators

    Energy Technology Data Exchange (ETDEWEB)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-13

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micro-mirror devices (DMD). Our approach leverages silicon micro mirrors offering 720p resolution displays in a small form-factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high resolution and low power consumption. Applications include night driving, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. Our design is described in which light from the DMD is imaged to infinity and the user’s own eye lens forms a real image on the user’s retina.

  19. A STUDY ON USING 3D VISUALIZATION AND SIMULATION PROGRAM (OPTITEX 3D) ON LEATHER APPAREL

    Directory of Open Access Journals (Sweden)

    Ork Nilay

    2016-05-01

    Full Text Available Leather is a luxury garment material. Design, material, labor, fitting, and time costs all weigh heavily on the production cost of a consumer leather good. 3D visualization and simulation programs, which are gaining popularity in the textile industry, can be used for material, labor, and time savings in leather apparel. However, these programs have very limited use in the leather industry because leather material databases are not as comprehensive as those for textiles. In this research, material properties of leather and textile fabric were first determined using both textile and leather physical test methods, then interpreted and entered into the program. Detailed measures of an experimental human body were taken from a 3D body scanner, and an avatar was designed according to these measurements. A prototype dress was then made using a Computer-Aided Design (CAD) program for pattern design. After pattern making, the OptiTex 3D visualization and simulation program was used to visualize and simulate the dresses. Additionally, the leather and cotton fabric dresses were sewn in real life, and the virtual and real dresses were compared and discussed. 3D virtual prototyping shows promising potential for future manufacturing technologies by evaluating the fit of garments in a simple and quick way, filling the gap between 3D pattern design and manufacturing, and providing virtual demonstrations to customers.

  20. X3DOM AS CARRIER OF THE VIRTUAL HERITAGE

    Directory of Open Access Journals (Sweden)

    Y. Jung

    2012-09-01

    Full Text Available Virtual Museums (VM) are a new model of communication that aims at creating a personalized, immersive, and interactive way to enhance our understanding of the world around us. The term "VM" is a shortcut that comprehends various types of digital creations. One of the carriers for the communication of the virtual heritage on the future internet, as a de-facto standard, is browser front-ends presenting the content and assets of museums. A major driving technology for the documentation and presentation of heritage-driven media is real-time 3D content, thus imposing new strategies for web inclusion. 3D content must become a first-class web media type that can be created, modified, and shared in the same way as text, images, audio, and video are handled on the web right now. A new integration model based on DOM integration into the web browser's architecture opens up new possibilities for declarative 3D content on the web and paves the way for new application scenarios for the virtual heritage at the future internet level. With special regard to the X3DOM project as an enabling technology for declarative 3D in HTML, this paper describes application scenarios and analyses the technological requirements for an efficient presentation and manipulation of virtual heritage assets on the web.

  1. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    Science.gov (United States)

    Qi, Jin; Yang, Zhiyong

    2014-01-01

    Real-time human activity recognition is essential for human-robot interaction in assisted healthy independent living. Most previous work in this area has been performed on traditional two-dimensional (2D) videos, using both global and local methods. Since 2D videos are sensitive to changes in lighting conditions, view angle, and scale, researchers have in recent years begun to explore applications of 3D information in human activity understanding. Unfortunately, features that work well on 2D videos usually don't perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected onto the dictionaries, and a set of sparse histograms of the projection coefficients is constructed as the feature representation of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.
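    The three-step pipeline in this abstract (per-activity ICA dictionary, projection features, SVM) can be sketched end-to-end on toy data. Everything below is illustrative: the "space-time volumes" are synthetic Gaussian vectors, and for brevity the raw projection coefficients are used as features instead of the histograms the authors build.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Synthetic stand-ins for flattened space-time volumes of 3D joint
# movements: 200 samples per activity, 30 dimensions, shifted means.
act_a = rng.normal(0.0, 1.0, (200, 30))
act_b = rng.normal(2.0, 1.0, (200, 30))

# Step 1: learn a per-activity dictionary with ICA (8 atoms each),
# then stack the two dictionaries.
dict_a = FastICA(n_components=8, random_state=0).fit(act_a).components_
dict_b = FastICA(n_components=8, random_state=0).fit(act_b).components_
dictionary = np.vstack([dict_a, dict_b])          # 16 x 30

# Step 2: project each volume onto the stacked dictionary; the paper
# builds sparse histograms of these coefficients, used directly here.
def features(x):
    return x @ dictionary.T                        # projection coefficients

X = np.vstack([features(act_a), features(act_b)])
y = np.array([0] * 200 + [1] * 200)

# Step 3: linear SVM on the projection features.
clf = SVC(kernel="linear").fit(X, y)
acc = clf.score(X, y)                              # training accuracy
```

On this easily separable toy data the SVM fits the training set almost perfectly; the point is only to make the data flow between the three steps concrete.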

  2. Learning dictionaries of sparse codes of 3D movements of body joints for real-time human activity understanding.

    Directory of Open Access Journals (Sweden)

    Jin Qi

    Full Text Available Real-time human activity recognition is essential for human-robot interaction in assisted healthy independent living. Most previous work in this area has been performed on traditional two-dimensional (2D) videos, using both global and local methods. Since 2D videos are sensitive to changes in lighting conditions, view angle, and scale, researchers have in recent years begun to explore applications of 3D information in human activity understanding. Unfortunately, features that work well on 2D videos usually don't perform well on 3D videos, and there is no consensus on what 3D features should be used. Here we propose a model of human activity recognition based on 3D movements of body joints. Our method has three steps: learning dictionaries of sparse codes of 3D movements of joints, sparse coding, and classification. In the first step, space-time volumes of 3D movements of body joints are obtained via dense sampling, and independent component analysis is then performed to construct a dictionary of sparse codes for each activity. In the second step, the space-time volumes are projected onto the dictionaries, and a set of sparse histograms of the projection coefficients is constructed as the feature representation of the activities. Finally, the sparse histograms are used as inputs to a support vector machine to recognize human activities. We tested this model on three databases of human activities and found that it outperforms state-of-the-art algorithms. Thus, this model can be used for real-time human activity recognition in many applications.

  3. Embryonic staging using a 3D virtual reality system

    NARCIS (Netherlands)

    C.M. Verwoerd-Dikkeboom (Christine); A.H.J. Koning (Anton); P.J. van der Spek (Peter); N. Exalto (Niek); R.P.M. Steegers-Theunissen (Régine)

    2008-01-01

    textabstractBACKGROUND: The aim of this study was to demonstrate that Carnegie Stages could be assigned to embryos visualized with a 3D virtual reality system. METHODS: We analysed 48 3D ultrasound scans of 19 IVF/ICSI pregnancies at 7-10 weeks' gestation. These datasets were visualized as 3D

  4. 3D Virtual Reality for Teaching Astronomy

    Science.gov (United States)

    Speck, Angela; Ruzhitskaya, L.; Laffey, J.; Ding, N.

    2012-01-01

    We are developing 3D virtual learning environments (VLEs) as learning materials for an undergraduate astronomy course, which will utilize advances both in available technologies and in our understanding of the social nature of learning. These learning materials will be used to test whether such VLEs can indeed augment science learning so that it is more engaging, active, visual, and effective. Our project focuses on the challenges and requirements of introductory college astronomy classes. Here we present our virtual world of the Jupiter system and how we plan to implement it to allow students to learn course material - physical laws and concepts in astronomy - while engaging them in exploration of the Jupiter system, encouraging their imagination, curiosity, and motivation. The VLE can allow students to work individually or collaboratively. The 3D world also provides an opportunity for research in astronomy education to investigate the impact of social interaction, gaming features, and the use of manipulatives offered by a learning tool on students' motivation and learning outcomes. Use of this VLE is also a valuable source for exploring how learners' spatial awareness can be enhanced by working in a 3D environment. We will present the Jupiter-system environment along with a preliminary study of the efficacy and usability of our Jupiter 3D VLE.

  5. A Bayesian approach to real-time 3D tumor localization via monoscopic x-ray imaging during treatment delivery

    International Nuclear Information System (INIS)

    Li, Ruijiang; Fahimian, Benjamin P.; Xing, Lei

    2011-01-01

    Purpose: Monoscopic x-ray imaging with on-board kV devices is an attractive approach for real-time image guidance in modern radiation therapy such as VMAT or IMRT, but it falls short in providing reliable information along the direction of the imaging x-ray. By effectively taking into consideration projection data at prior times and/or angles through a Bayesian formalism, the authors develop an algorithm for real-time, full 3D tumor localization with a single x-ray imager during treatment delivery. Methods: First, a prior probability density function is constructed using the 2D tumor locations on the projection images acquired during patient setup. Whenever an x-ray image is acquired during treatment delivery, the corresponding 2D tumor location on the imager is used to update the likelihood function. The unresolved third dimension is obtained by maximizing the posterior probability distribution. The algorithm can also be used in a retrospective fashion when all the projection images acquired during treatment delivery are used for 3D localization purposes. The algorithm does not involve complex optimization of any model parameter and can therefore be used in a "plug-and-play" fashion. The authors validated the algorithm using (1) simulated 3D linear and elliptic motion and (2) 3D tumor motion trajectories of a lung and a pancreas patient reproduced by a physical phantom. Continuous kV images were acquired over a full gantry rotation with the Varian TrueBeam on-board imaging system. Three scenarios were considered: fluoroscopic setup, cone beam CT setup, and retrospective analysis. Results: For the simulation study, the RMS 3D localization error is 1.2 and 2.4 mm for the linear and elliptic motions, respectively. For the phantom experiments, the 3D localization error is < 1 mm on average and < 1.5 mm at the 95th percentile in the lung and pancreas cases for all three scenarios. The difference in 3D localization error for different scenarios is small and is not
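    The core idea, resolving the coordinate along the imaging axis from a prior learned at setup, has a closed form when the prior is modeled as a Gaussian: the MAP estimate of the unresolved coordinate given an (effectively exact) 2D measurement is the conditional mean. The sketch below illustrates that with synthetic respiratory-like setup data; the geometry (imager measuring x and z, y unresolved) and all numbers are assumptions for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# "Setup phase": 3D tumor positions (mm) with correlated, respiration-like
# motion; amplitudes 2, 5, 1 mm along x, y, z plus measurement noise.
t = np.linspace(0, 4 * np.pi, 200)
setup = np.column_stack([2 * np.sin(t), 5 * np.sin(t), 1 * np.sin(t)])
setup += rng.normal(0, 0.1, setup.shape)

mu = setup.mean(axis=0)                 # Gaussian prior: mean...
cov = np.cov(setup.T)                   # ...and covariance from setup data

def map_depth(x_meas, z_meas):
    """MAP estimate of the unresolved y coordinate given one imager
    measurement of (x, z): the Gaussian conditional mean E[y | x, z]."""
    idx_m, idx_u = [0, 2], [1]          # measured (x, z) vs unresolved (y)
    S_mm = cov[np.ix_(idx_m, idx_m)]
    S_um = cov[np.ix_(idx_u, idx_m)]
    d = np.array([x_meas, z_meas]) - mu[idx_m]
    return float(mu[idx_u] + S_um @ np.linalg.solve(S_mm, d))

# Measurement consistent with the 2:5:1 amplitude ratio: x = 1, z = 0.5 mm
# implies y ≈ 2.5 mm along the unresolved axis.
y_hat = map_depth(1.0, 0.5)
```

In the continuous mode described above, the prior would additionally be updated as new projections arrive; the closed-form conditioning step itself is cheap enough to run at the imager's frame rate.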

  6. Web-based three-dimensional Virtual Body Structures: W3D-VBS.

    Science.gov (United States)

    Temkin, Bharti; Acosta, Eric; Hatfield, Paul; Onal, Erhan; Tong, Alex

    2002-01-01

    Major efforts are being made to improve the teaching of human anatomy to foster cognition of visuospatial relationships. The Visible Human Project of the National Library of Medicine makes it possible to create virtual reality-based applications for teaching anatomy. Integration of traditional cadaver and illustration-based methods with Internet-based simulations brings us closer to this goal. Web-based three-dimensional Virtual Body Structures (W3D-VBS) is a next-generation immersive anatomical training system for teaching human anatomy over the Internet. It uses Visible Human data to dynamically explore, select, extract, visualize, manipulate, and stereoscopically palpate realistic virtual body structures with a haptic device. Tracking user's progress through evaluation tools helps customize lesson plans. A self-guided "virtual tour" of the whole body allows investigation of labeled virtual dissections repetitively, at any time and place a user requires it.

  7. Optimal transcostal high-intensity focused ultrasound with combined real-time 3D movement tracking and correction

    International Nuclear Information System (INIS)

    Marquet, F; Aubry, J F; Pernot, M; Fink, M; Tanter, M

    2011-01-01

    Recent studies have demonstrated the feasibility of transcostal high-intensity focused ultrasound (HIFU) treatment in the liver. However, two factors limit thermal necrosis of the liver through the ribs: the energy deposition at the focus is decreased by the respiratory movement of the liver, and the energy deposition on the skin is increased by the presence of highly absorbing bone structures. Ex vivo ablations were conducted to validate the feasibility of a transcostal real-time 3D movement tracking and correction mode. Experiments were conducted through a chest phantom made of three human ribs immersed in water, placed in front of a 300-element array working at 1 MHz. A binarized apodization law, introduced recently in order to spare the rib cage during treatment, has been extended here with real-time electronic steering of the beam. Thermal simulations were conducted to determine the steering limits. In vivo 3D movement detection was performed on pigs using an ultrasonic sequence. The maximum error of the transcostal motion detection was measured to be 0.09 ± 0.097 mm on the anterior-posterior axis. Finally, a complete sequence was developed combining real-time 3D transcostal movement correction and a spiral trajectory of the HIFU beam, allowing the system to treat larger areas with optimized efficiency. Lesions as large as 1 cm in diameter were produced at the focus in excised liver, whereas no necroses could be obtained at the same emitted power without correcting for the movement of the tissue sample.

  8. Objective and subjective quality assessment of geometry compression of reconstructed 3D humans in a 3D virtual room

    Science.gov (United States)

    Mekuria, Rufael; Cesar, Pablo; Doumanis, Ioannis; Frisiello, Antonella

    2015-09-01

    Compression of 3D object-based video is relevant for 3D immersive applications. Nevertheless, the perceptual aspects of the degradation introduced by codecs for meshes and point clouds are not well understood. In this paper we evaluate the subjective and objective degradations introduced by such codecs in a state-of-the-art 3D immersive virtual room. In the 3D immersive virtual room, users are captured with multiple cameras, and their surfaces are reconstructed as photorealistic colored/textured 3D meshes or point clouds. To test the perceptual effect of compression and transmission, we render degraded versions with different frame rates in different contexts (near/far) in the scene. A quantitative subjective study with 16 users shows that negligible distortion of decoded surfaces compared to the original reconstructions can be achieved in the 3D virtual room. In addition, a qualitative task-based analysis in a full prototype field trial shows increased presence, emotion, and user and state recognition for the reconstructed 3D human representation compared to animated computer avatars.

  9. Assessing 3D Virtual World Disaster Training Through Adult Learning Theory

    Directory of Open Access Journals (Sweden)

    Lee Taylor-Nelms

    2014-10-01

    Full Text Available As role-play, virtual reality, and simulated environments gain popularity through virtual worlds such as Second Life, identifying best practices for education and emergency management training becomes necessary. Using a formal needs assessment approach, we examined the extent to which 3D virtual tornado simulation trainings follow the principles of adult learning theory employed by the Federal Emergency Management Agency's (FEMA) National Training and Education Division. Through a three-fold methodology of observation, interviews, and reflection on action, 3D virtual world tornado trainings were analyzed for congruence with adult learning theory.

  10. Design of a 3D virtual geographic interface for access to geoinformation in real time

    DEFF Research Database (Denmark)

    Bodum, Lars

    2004-01-01

    as VR Media Lab. The Centre for 3D GeoInformation was opened in 2001, and the main purpose of this facility is to extrude the region from 2D to 3D. Through the means of traditional geoinformation such as building footprints, geocoding, the building and dwelling register, and a DTM, the region will be built...

  11. Interactive Mapping on Virtual Terrain Models Using RIMS (Real-time, Interactive Mapping System)

    Science.gov (United States)

    Bernardin, T.; Cowgill, E.; Gold, R. D.; Hamann, B.; Kreylos, O.; Schmitt, A.

    2006-12-01

    Recent and ongoing space missions are yielding new multispectral data for the surfaces of Earth and other planets at unprecedented rates and spatial resolution. With their high spatial resolution and widespread coverage, these data have opened new frontiers in observational Earth and planetary science. But they have also precipitated an acute need for new analytical techniques. To address this problem, we have developed RIMS, a Real-time, Interactive Mapping System that allows scientists to visualize, interact with, and map directly on, three-dimensional (3D) displays of georeferenced texture data, such as multispectral satellite imagery, that is draped over a surface representation derived from digital elevation data. The system uses a quadtree-based multiresolution method to render in real time high-resolution (3 to 10 m/pixel) data over large (800 km by 800 km) spatial areas. It allows users to map inside this interactive environment by generating georeferenced and attributed vector-based elements that are draped over the topography. We explain the technique using 15 m ASTER stereo-data from Iraq, P.R. China, and other remote locations because our particular motivation is to develop a technique that permits the detailed (10 m to 1000 m) neotectonic mapping over large (100 km to 1000 km long) active fault systems that is needed to better understand active continental deformation on Earth. RIMS also includes a virtual geologic compass that allows users to fit a plane to geologic surfaces and thereby measure their orientations. It also includes tools that allow 3D surface reconstruction of deformed and partially eroded surfaces such as folded bedding planes. These georeferenced map and measurement data can be exported to, or imported from, a standard GIS (geographic information systems) file format. Our interactive, 3D visualization and analysis system is designed for those who study planetary surfaces, including neotectonic geologists, geomorphologists, marine
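    The "virtual geologic compass" mentioned above fits a plane to picked surface points and reports its orientation. A common way to do this is a total least-squares fit via SVD, where the plane normal is the singular vector of the centered point cloud with the smallest singular value; the sketch below (an illustration, not the RIMS implementation) recovers the dip angle of a synthetic bedding plane.

```python
import numpy as np

def plane_dip_deg(points):
    """Fit a plane to 3D surface points (total least squares via SVD)
    and return its dip angle in degrees (angle from horizontal)."""
    centered = points - points.mean(axis=0)
    # Right singular vectors are sorted by decreasing singular value;
    # the last one is the direction of least variance, i.e. the normal.
    _, _, vt = np.linalg.svd(centered)
    n = vt[-1]
    n = n if n[2] >= 0 else -n          # orient the normal upward (z up)
    return float(np.degrees(np.arccos(n[2] / np.linalg.norm(n))))

# Points on a plane dipping 30 degrees (z falls off along x), plus noise.
rng = np.random.default_rng(3)
xy = rng.uniform(-10, 10, (100, 2))
z = -np.tan(np.radians(30.0)) * xy[:, 0] + rng.normal(0, 0.05, 100)
dip = plane_dip_deg(np.column_stack([xy, z]))    # ≈ 30 degrees
```

The strike direction follows the same way, from the horizontal component of the fitted normal; attaching both to a georeferenced pick set is what turns the fit into a compass measurement.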

  12. Immersive Learning Environment Using 3D Virtual Worlds and Integrated Remote Experimentation

    Directory of Open Access Journals (Sweden)

    Roderval Marcelino

    2013-01-01

    Full Text Available This project seeks to demonstrate the use of remote experimentation and 3D virtual environments applied to teaching and learning in the exact sciences (physics). By combining remote experimentation and 3D virtual worlds in the teaching-learning process, we intend to achieve greater geographic coverage, contributing to the construction of new methodologies for teaching support, speed of access, and, foremost, motivation for students to continue scientific study in technology areas. The proposed architecture is based on a fully featured model implemented with open-source software and open hardware. The virtual world was built with the OpenSim software, and a remote physics experiment called the "electrical panel" was integrated into it. Accessing the virtual world, the user has total control of the experiment within the 3D virtual world.

  13. Real-time recording and classification of eye movements in an immersive virtual environment.

    Science.gov (United States)

    Diaz, Gabriel; Cooper, Joseph; Kit, Dmitry; Hayhoe, Mary

    2013-10-10

    Despite the growing popularity of virtual reality environments, few laboratories are equipped to investigate eye movements within these environments. This primer is intended to reduce the time and effort required to incorporate eye-tracking equipment into a virtual reality environment. We discuss issues related to the initial startup and provide algorithms necessary for basic analysis. Algorithms are provided for the calculation of gaze angle within a virtual world using a monocular eye-tracker in a three-dimensional environment. In addition, we provide algorithms for the calculation of the angular distance between the gaze and a relevant virtual object and for the identification of fixations, saccades, and pursuit eye movements. Finally, we provide tools that temporally synchronize gaze data and the visual stimulus and enable real-time assembly of a video-based record of the experiment using the QuickTime MOV format, available at http://sourceforge.net/p/utdvrlibraries/. This record contains the visual stimulus, the gaze cursor, and associated numerical data and can be used for data exportation, visual inspection, and validation of calculated gaze movements.
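    Two of the building blocks this primer describes, angular distance between gaze directions and fixation/saccade identification, can be sketched compactly. The sketch below uses a simple velocity-threshold (I-VT) classifier; the 65 deg/s threshold is a common choice in the eye-tracking literature, not necessarily the value the authors use, and pursuit detection is omitted.

```python
import numpy as np

def angular_distance_deg(v1, v2):
    """Angle in degrees between two 3D gaze/object direction vectors."""
    c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

def classify_samples(directions, dt, saccade_thresh_deg_s=65.0):
    """Label each gaze sample 'fixation' or 'saccade' by thresholding
    the angular velocity between consecutive samples (I-VT)."""
    labels = ["fixation"]                       # first sample has no velocity
    for a, b in zip(directions, directions[1:]):
        vel = angular_distance_deg(a, b) / dt   # deg per second
        labels.append("saccade" if vel > saccade_thresh_deg_s else "fixation")
    return labels

def gaze(theta_deg):
    """Unit gaze direction rotated theta_deg in the horizontal plane."""
    t = np.radians(theta_deg)
    return np.array([np.sin(t), 0.0, np.cos(t)])

# 0.1 deg/frame drift (fixation), then 10 deg/frame jumps (saccade), at 60 Hz.
dirs = [gaze(th) for th in [0.0, 0.1, 0.2, 10.2, 20.2, 20.3]]
labels = classify_samples(dirs, dt=1 / 60)
# → fixation, fixation, fixation, saccade, saccade, fixation
```

The same `angular_distance_deg` applied to the gaze vector and the vector toward a virtual object gives the gaze-to-object distance the primer uses for target-tracking analyses.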

  14. A second life for eHealth: prospects for the use of 3-D virtual worlds in clinical psychology.

    Science.gov (United States)

    Gorini, Alessandra; Gaggioli, Andrea; Vigna, Cinzia; Riva, Giuseppe

    2008-08-05

    The aim of the present paper is to describe the role played by three-dimensional (3-D) virtual worlds in eHealth applications, addressing some potential advantages and issues related to the use of this emerging medium in clinical practice. Due to the enormous diffusion of the World Wide Web (WWW), telepsychology, and telehealth in general, have become accepted and validated methods for the treatment of many different health care concerns. The introduction of the Web 2.0 has facilitated the development of new forms of collaborative interaction between multiple users based on 3-D virtual worlds. This paper describes the development and implementation of a form of tailored immersive e-therapy called p-health whose key factor is interreality, that is, the creation of a hybrid augmented experience merging physical and virtual worlds. We suggest that compared with conventional telehealth applications such as emails, chat, and videoconferences, the interaction between real and 3-D virtual worlds may convey greater feelings of presence, facilitate the clinical communication process, positively influence group processes and cohesiveness in group-based therapies, and foster higher levels of interpersonal trust between therapists and patients. However, challenges related to the potentially addictive nature of such virtual worlds and questions related to privacy and personal safety will also be discussed.

  15. 3D natural emulation design approach to virtual communities

    OpenAIRE

    DiPaola, Steve

    2010-01-01

    The design goal for OnLive’s Internet-based Virtual Community system was to develop avatars and virtual communities where the participants sense a tele-presence – that they are really there in the virtual space with other people. This collective sense of "being-there" does not happen over the phone or with teleconferencing; it is a new and emerging phenomenon, unique to 3D virtual communities. While this group presence paradigm is a simple idea, the design and technical issues needed to begin...

  16. The Value of 3D Printing Models of Left Atrial Appendage Using Real-Time 3D Transesophageal Echocardiographic Data in Left Atrial Appendage Occlusion: Applications toward an Era of Truly Personalized Medicine.

    Science.gov (United States)

    Liu, Peng; Liu, Rijing; Zhang, Yan; Liu, Yingfeng; Tang, Xiaoming; Cheng, Yanzhen

    The objective of this study was to assess the clinical feasibility of generating 3D printing models of the left atrial appendage (LAA) using real-time 3D transesophageal echocardiogram (TEE) data for preoperative reference in LAA occlusion. Percutaneous LAA occlusion can effectively prevent stroke in patients with atrial fibrillation. However, the anatomical structure of the LAA is so complicated that adequate information about its structure is essential for successful LAA occlusion. Emerging 3D printing technology has demonstrated the potential to render structure more accurately than conventional imaging modalities by creating tangible patient-specific models. Typically, 3D printing data sets are acquired from CT and MRI, which may involve intravenous contrast, sedation, and ionizing radiation. It has been reported that 3D models of the LAA were successfully created from data acquired by CT. However, 3D printing of the LAA using real-time 3D TEE data has not yet been explored. Acquisition of 3D transesophageal echocardiographic data from 8 patients with atrial fibrillation was performed using the Philips EPIQ7 ultrasound system. Raw echocardiographic image data were opened in Philips QLAB, converted to 'Cartesian DICOM' format, and imported into Mimics® software to create 3D models of the LAA, which were printed using a rubber-like material. The printed 3D models were then used for preoperative reference and procedural simulation in LAA occlusion. We successfully printed the LAAs of all 8 patients. Each LAA cost approximately CNY 800-1,000, and the total process took 16-17 h. Seven of the 8 Watchman devices predicted by preprocedural 2D TEE images were of the same sizes as those placed in the real operation. Interestingly, the 3D printed models were highly reflective of the shape and size of the LAAs, and all device sizes predicted by the 3D printed model were fully consistent with those placed in the real operation. Also, the 3D printed model could predict operating difficulty and the

  17. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Directory of Open Access Journals (Sweden)

    Vasanthan Maruthapillai

    Full Text Available In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker and the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.

  18. Optimal Geometrical Set for Automated Marker Placement to Virtualized Real-Time Facial Emotions.

    Science.gov (United States)

    Maruthapillai, Vasanthan; Murugappan, Murugappan

    2016-01-01

    In recent years, real-time face recognition has been a major topic of interest in developing intelligent human-machine interaction systems. Over the past several decades, researchers have proposed different algorithms for facial expression recognition, but there has been little focus on detection in real-time scenarios. The present work proposes a new algorithmic method of automated marker placement used to classify six facial expressions: happiness, sadness, anger, fear, disgust, and surprise. Emotional facial expressions were captured using a webcam, while the proposed algorithm placed a set of eight virtual markers on each subject's face. Facial feature extraction methods, including marker distance (distance between each marker to the center of the face) and change in marker distance (change in distance between the original and new marker positions), were used to extract three statistical features (mean, variance, and root mean square) from the real-time video sequence. The initial position of each marker was subjected to the optical flow algorithm for marker tracking with each emotional facial expression. Finally, the extracted statistical features were mapped into corresponding emotional facial expressions using two simple non-linear classifiers, K-nearest neighbor and probabilistic neural network. The results indicate that the proposed automated marker placement algorithm effectively placed eight virtual markers on each subject's face and gave a maximum mean emotion classification rate of 96.94% using the probabilistic neural network.
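    The feature-extraction step described above (marker-to-center distances summarized by mean, variance, and root mean square over the video sequence) can be sketched as follows. The function names and data layout are illustrative assumptions, not taken from the paper:

```python
import math

def marker_distances(markers, center):
    """Euclidean distance from each virtual marker to the face center,
    for one video frame. Markers and center are (x, y) tuples."""
    return [math.dist(m, center) for m in markers]

def statistical_features(series):
    """Mean, variance, and root mean square of one feature series,
    as used to summarize a marker's motion over the sequence."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    rms = math.sqrt(sum(x * x for x in series) / n)
    return mean, var, rms
```

    In the full pipeline, one such (mean, variance, RMS) triple per marker would form the feature vector fed to the KNN or probabilistic neural network classifier.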

  19. Review of Real-Time 3-Dimensional Image Guided Radiation Therapy on Standard-Equipped Cancer Radiation Therapy Systems: Are We at the Tipping Point for the Era of Real-Time Radiation Therapy?

    Science.gov (United States)

    Keall, Paul J; Nguyen, Doan Trang; O'Brien, Ricky; Zhang, Pengpeng; Happersett, Laura; Bertholet, Jenny; Poulsen, Per R

    2018-04-14

    To review real-time 3-dimensional (3D) image guided radiation therapy (IGRT) on standard-equipped cancer radiation therapy systems, focusing on clinically implemented solutions. Three groups on 3 continents have clinically implemented novel real-time 3D IGRT solutions on standard-equipped linear accelerators. These technologies encompass kilovoltage, combined megavoltage-kilovoltage, and combined kilovoltage-optical imaging. The cancer sites treated span pelvic and abdominal tumors for which respiratory motion is present. For each method, the 3D-measured motion during treatment is reported. After treatment, dose reconstruction was used to assess the treatment quality in the presence of motion with and without real-time 3D IGRT. The geometric accuracy was quantified through phantom experiments. A literature search was conducted to identify additional real-time 3D IGRT methods that could be clinically implemented in the near future. The real-time 3D IGRT methods were successfully clinically implemented and have been used to treat more than 200 patients. Systematic target position shifts were observed using all 3 methods. Dose reconstruction demonstrated that the delivered dose is closer to the planned dose with real-time 3D IGRT than without it. In addition, compromised target dose coverage and variable normal tissue doses were found without real-time 3D IGRT. Phantom experiments quantified the geometric accuracy of the real-time 3D IGRT methods, and the literature search identified additional methods using standard-equipped radiation therapy systems that could also be clinically implemented. Multiple clinical implementations of real-time 3D IGRT on standard-equipped cancer radiation therapy systems have been demonstrated, and many more approaches that could be implemented were identified. These solutions provide a pathway for the broader adoption of methods to make radiation therapy more accurate, impacting tumor and normal tissue dose, margins, and ultimately patient outcomes.

  20. Real time 3D photometry

    Science.gov (United States)

    Fernandez-Balbuena, A. A.; Vazquez-Molini, D.; García-Botella, A.; Romo, J.; Serrano, Ana

    2017-09-01

    Photometry and radiometry measurement is a well-developed field. Measuring the performance of optical systems involves techniques such as gonio-photometry. Gonio-photometers, which obtain the intensity polar curves and the total flux of an optical system, are precise measurement tools used in lighting applications such as office luminaires, car headlamps, concentrator/collimator measurement, and, in general, any designed and fabricated optical system that works with light. Industrial gonio-photometers are precise and reliable, but they are very expensive and the measurement time is long. While cost may be of minor importance in industry, a measurement time of around 30 minutes is of major importance due to the cost of trained staff. We have designed a system to measure photometry in real time; it consists of a curved screen, which provides a large measurement angle, and a CCD. The system to be measured projects light onto the screen, and the CCD records a video of the screen, obtaining an image of the projected profile. A complex calibration permits mapping screen data (x, y, z) to the intensity polar curve (I, α, γ). The intensity is obtained in candelas (cd), with an image acquisition and processing time below one second.
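    The geometric part of such a calibration, mapping a screen point (x, y, z) in the source's coordinate frame to the angles of a polar intensity curve, might look as follows. The angle conventions, calibration factor, and inverse-square correction are assumptions for illustration, not the paper's actual calibration:

```python
import math

def screen_point_to_polar(x, y, z):
    """Map a calibrated screen point (x, y, z), expressed in the light
    source's coordinate frame, to emission angles of a polar curve:
    alpha = azimuth, gamma = elevation (both in degrees)."""
    r = math.sqrt(x * x + y * y + z * z)
    alpha = math.degrees(math.atan2(y, x))
    gamma = math.degrees(math.asin(z / r))
    return alpha, gamma

def luminance_to_intensity(pixel_value, cal_factor, distance_m):
    """Convert a CCD pixel value to luminous intensity in candelas using
    an inverse-square distance correction; cal_factor would come from a
    laboratory calibration against a reference source."""
    return pixel_value * cal_factor * distance_m ** 2
```

    Evaluating these two functions over every screen pixel yields the (I, α, γ) polar data set in a single video frame, which is what makes sub-second measurement plausible.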

  1. Synchronized 2D/3D optical mapping for interactive exploration and real-time visualization of multi-function neurological images.

    Science.gov (United States)

    Zhang, Qi; Alexander, Murray; Ryner, Lawrence

    2013-01-01

    Efficient software with the ability to display multiple neurological image datasets simultaneously with full real-time interactivity is critical for brain disease diagnosis and image-guided planning. In this paper, we describe the creation and function of a new comprehensive software platform that integrates novel algorithms and functions for multiple medical image visualization, processing, and manipulation. We implement an opacity-adjustment algorithm to build 2D lookup tables for multiple slice image display and fusion, which achieves a better visual result than VTK-based methods. We also develop a new real-time 2D and 3D data synchronization scheme for simultaneous optical mapping and rendering of multi-function MR volume and slice images using the same adjustment operation. All these methodologies are integrated into our software framework to provide users with an efficient tool for flexibly, intuitively, and rapidly exploring and analyzing functional and anatomical MR neurological data. Finally, we validate our new techniques and software platform with visual analysis and task-specific user studies. Copyright © 2013 Elsevier Ltd. All rights reserved.
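    A 2D opacity lookup table for fusing two co-registered slice images might be built along the following lines. The weighting function and its exponent are illustrative assumptions, not the paper's actual opacity-adjustment algorithm:

```python
import numpy as np

def build_opacity_lut(size=256, exponent=2.0):
    """2D lookup table: for each pair of 8-bit intensities (a, b) from two
    modalities, precompute the opacity weight given to modality A.
    Brighter pixels in A dominate; the exponent shapes the transition."""
    a = np.linspace(0.0, 1.0, size)[:, None]
    b = np.linspace(0.0, 1.0, size)[None, :]
    return a**exponent / (a**exponent + b**exponent + 1e-9)

def fuse_slices(img_a, img_b, lut):
    """Fuse two co-registered uint8 slices; the per-pixel weight is read
    directly from the precomputed table, so fusion is a cheap gather."""
    w = lut[img_a, img_b]
    return (w * img_a + (1.0 - w) * img_b).astype(np.uint8)
```

    Precomputing the table is what makes interactive re-fusion feasible: changing an adjustment parameter only requires rebuilding the 256x256 table, not reprocessing every voxel.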

  2. PAST AND FUTURE APPLICATIONS OF 3-D (VIRTUAL REALITY) TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Nigel Foreman

    2014-11-01

    Full Text Available Virtual Reality (virtual environment technology, VET) has been widely available for twenty years. In that time, the benefits of using virtual environments (VEs) have become clear in many areas of application, including assessment and training, education, rehabilitation, and psychological research in spatial cognition. The flexibility, reproducibility, and adaptability of VEs are especially important, particularly in the training and testing of navigational and way-finding skills. Transfer of training between real and virtual environments has been found to be reliable. However, input device usage can compromise spatial information acquisition from VEs, and distances in VEs are invariably underestimated. The present review traces the evolution of VET, anticipates future areas in which developments are likely to occur, and highlights areas in which research is needed to optimise usage.

  3. NASA's "Eyes On The Solar System:" A Real-time, 3D-Interactive Tool to Teach the Wonder of Planetary Science

    Science.gov (United States)

    Hussey, K.

    2014-12-01

    NASA's Jet Propulsion Laboratory is using video game technology to immerse students, the general public, and mission personnel in our solar system and beyond. "Eyes on the Solar System," a cross-platform, real-time, 3D-interactive application that can run on-line or as a stand-alone "video game," is of particular interest to educators looking for inviting tools that capture students' interest in a format they like and understand (eyes.nasa.gov). It gives users an extraordinary view of our solar system by virtually transporting them across space and time to make first-person observations of spacecraft, planetary bodies, and NASA/ESA missions in action. Key scientific results, illustrated with video presentations, supporting imagery, and web links, are embedded contextually into the solar system. Educators who want an interactive, game-based approach to engage students in learning planetary science will see how "Eyes" can be effectively used to teach its principles to grades 3 through 14. The presentation will include a detailed demonstration of the software along with a description/demonstration of how this technology is being adapted for education. There will also be a preview of coming attractions. This work is being conducted by the Visualization Technology Applications and Development Group at NASA's Jet Propulsion Laboratory, the same team responsible for "Eyes on the Earth 3D" and "Eyes on Exoplanets," which can be viewed at eyes.nasa.gov/earth and eyes.nasa.gov/exoplanets.

  4. Virtual cardiotomy based on 3-D MRI for preoperative planning in congenital heart disease

    International Nuclear Information System (INIS)

    Soerensen, Thomas Sangild; Beerbaum, Philipp; Razavi, Reza; Greil, Gerald Franz; Mosegaard, Jesper; Rasmusson, Allan; Schaeffter, Tobias; Austin, Conal

    2008-01-01

    Patient-specific preoperative planning in complex congenital heart disease may be greatly facilitated by virtual cardiotomy. Surgeons can perform an unlimited number of surgical incisions on a virtual 3-D reconstruction to evaluate the feasibility of different surgical strategies. To quantitatively evaluate the quality of the underlying imaging data and the accuracy of the corresponding segmentation, and to qualitatively evaluate the feasibility of virtual cardiotomy. A whole-heart MRI sequence was applied in 42 children with congenital heart disease (age 3±3 years, weight 13±9 kg, heart rate 96±21 bpm). Image quality was graded 1-4 (≥2 = diagnostic image quality) by two independent blinded observers. In patients with diagnostic image quality the segmentation quality was also graded 1-4 (4 = no discrepancies, 1 = misleading error). The average image quality score was 2.7 - sufficient for virtual reconstruction in 35 of 38 patients (92%) older than 1 month. Segmentation time was 59±10 min (average quality score 3.5). Virtual cardiotomy was performed in 19 patients. Accurate virtual reconstructions of patient-specific cardiac anatomy can be produced in less than 1 h from 3-D MRI. The presented work thus introduces a new, clinically feasible noninvasive technique for improved preoperative planning in complex cases of congenital heart disease. (orig.)

  5. Seamless 3D interaction for virtual tables, projection planes, and CAVEs

    Science.gov (United States)

    Encarnacao, L. M.; Bimber, Oliver; Schmalstieg, Dieter; Barton, Robert J., III

    2000-08-01

    The Virtual Table presents stereoscopic graphics to a user in a workbench-like setting. This device shares with other large-screen display technologies (such as data walls and surround-screen projection systems) the lack of human-centered, unencumbered user interfaces and 3D interaction technologies. Such shortcomings severely limit the application of virtual reality (VR) technology to time-critical applications as well as to employment scenarios that involve heterogeneous groups of end-users without high levels of computer familiarity and expertise. Traditionally, such employment scenarios are common in planning-related application areas such as mission rehearsal and command and control. For these applications, a high degree of flexibility with respect to the system requirements (display and I/O devices), as well as the ability to seamlessly and intuitively switch between different interaction modalities and techniques, is sought. Conventional VR techniques may be insufficient to meet this challenge. This paper presents novel approaches for human-centered interfaces to virtual environments, focusing on the Virtual Table visual input device. It introduces new paradigms for 3D interaction in virtual environments (VE) for a variety of application areas based on pen-and-clipboard, mirror-in-hand, and magic-lens metaphors, and introduces new concepts for combining VR and augmented reality (AR) techniques. It finally describes approaches toward hybrid and distributed multi-user interaction environments and concludes by hypothesizing on possible use cases for defense applications.

  6. GPU acceleration towards real-time image reconstruction in 3D tomographic diffractive microscopy

    Science.gov (United States)

    Bailleul, J.; Simon, B.; Debailleul, M.; Liu, H.; Haeberlé, O.

    2012-06-01

    Phase microscopy techniques have regained interest by allowing the observation of unprepared specimens with excellent temporal resolution. Tomographic diffractive microscopy is an extension of holographic microscopy that permits 3D observations at a finer resolution than incoherent light microscopes. Specimens are imaged through a series of 2D holograms whose accumulation progressively fills the specimen's range of frequencies in Fourier space; a 3D inverse FFT eventually provides a spatial image of the specimen. Acquisition followed by reconstruction is therefore mandatory to produce an image, which precludes real-time control of the observed specimen. The MIPS Laboratory has built a tomographic diffractive microscope with an unsurpassed 130 nm resolution but a low imaging speed: no less than one minute per acquisition, after which a high-end PC reconstructs the 3D image in 20 seconds. We now target an interactive system providing preview images during the acquisition for monitoring purposes. We first present a prototype implementing this solution on CPU: acquisition and reconstruction are tied in a producer-consumer scheme, sharing common data in CPU memory. We then present a prototype dispatching some reconstruction tasks to the GPU in order to take advantage of SIMD parallelization for the FFT and of higher bandwidth for the filtering operations. The CPU scheme takes 6 seconds for a 3D image update, while the GPU scheme can go down to 2 seconds or below, depending on the GPU class. This opens opportunities for 4D imaging of living organisms or crystallization processes. We also consider the relevance of the GPU for 3D image interaction in our specific conditions.
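    The CPU producer-consumer scheme could be sketched as below: an acquisition thread feeds 2D hologram spectra into a shared queue while the consumer accumulates them in a Fourier volume and runs the 3D inverse FFT to produce a preview. The frequency-filling step here is deliberately simplistic (one plane per hologram) and is not the microscope's actual Fourier-space mapping:

```python
import queue
import threading
import numpy as np

def acquisition(q, holograms):
    """Producer: push each 2D hologram spectrum as it is 'acquired',
    then a None sentinel to signal the end of acquisition."""
    for h in holograms:
        q.put(h)
    q.put(None)

def reconstruction(q, shape):
    """Consumer: accumulate incoming spectra into the Fourier volume and
    finish with a 3D inverse FFT to obtain a spatial preview image."""
    fourier_volume = np.zeros(shape, dtype=complex)
    k = 0
    while True:
        item = q.get()
        if item is None:
            break
        fourier_volume[k % shape[0]] += item  # placeholder frequency fill
        k += 1
    return np.fft.ifftn(fourier_volume).real

q = queue.Queue()
holos = [np.ones((8, 8)) for _ in range(4)]  # stand-in hologram spectra
producer = threading.Thread(target=acquisition, args=(q, holos))
producer.start()
image = reconstruction(q, (8, 8, 8))
producer.join()
```

    In an interactive version, the consumer would run the inverse FFT periodically on the partially filled volume instead of only once at the end, trading preview fidelity for latency.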

  7. Synthetic biology's tall order: Reconstruction of 3D, super resolution images of single molecules in real-time

    CSIR Research Space (South Africa)

    Henriques, R

    2010-08-31

    Full Text Available Easy-to-use reconstruction software coupled with image acquisition is needed. Here, we present QuickPALM, an ImageJ plugin, enabling real-time reconstruction of 3D super-resolution images during acquisition and drift correction. We illustrate its application by reconstructing Cy5...

  8. Game-Like Language Learning in 3-D Virtual Environments

    Science.gov (United States)

    Berns, Anke; Gonzalez-Pardo, Antonio; Camacho, David

    2013-01-01

    This paper presents our recent experiences with the design of game-like applications in 3-D virtual environments as well as its impact on student motivation and learning. Therefore our paper starts with a brief analysis of the motivational aspects of videogames and virtual worlds (VWs). We then go on to explore the possible benefits of both in the…

  9. Quantitative 3-D imaging topogrammetry for telemedicine applications

    Science.gov (United States)

    Altschuler, Bruce R.

    1994-01-01

    The technology to reliably transmit high-resolution visual imagery over short to medium distances in real time has led to serious consideration of the use of telemedicine, telepresence, and telerobotics in the delivery of health care. These concepts may involve, and evolve toward: consultation from remote expert teaching centers; diagnosis; triage; real-time remote advice to the surgeon; and real-time remote surgical instrument manipulation (telerobotics with virtual reality). Further extrapolation leads to teledesign and telereplication of spare surgical parts through quantitative teleimaging of 3-D surfaces tied to CAD/CAM devices and an artificially intelligent archival database of 'normal' shapes. The ability to generate 'topograms' or 3-D surface numerical tables of coordinate values capable of creating computer-generated virtual holographic-like displays, machine part replication, and statistical diagnostic shape assessment is critical to the progression of telemedicine. Any virtual reality simulation will remain in the 'video-game' realm until realistic dimensional and spatial relational inputs from real measurements in vivo during surgeries are added to an ever-growing statistical data archive. The challenges of managing and interpreting this 3-D database, which would include radiographic and surface quantitative data, are considerable. As technology drives toward dynamic and continuous 3-D surface measurements, presenting millions of X, Y, Z data points per second of flexing, stretching, moving human organs, the knowledge base and interpretive capabilities of 'brilliant robots' working as a surgeon's tireless assistants become imaginable. The brilliant robot would 'see' what the surgeon sees--and more, for the robot could quantify its 3-D sensing and would 'see' in a wider spectral range than humans, and could zoom its 'eyes' from the macro world to long-distance microscopy. Unerring robot hands could rapidly perform machine-aided suturing with

  10. 3D Printed "Earable" Smart Devices for Real-Time Detection of Core Body Temperature.

    Science.gov (United States)

    Ota, Hiroki; Chao, Minghan; Gao, Yuji; Wu, Eric; Tai, Li-Chia; Chen, Kevin; Matsuoka, Yasutomo; Iwai, Kosuke; Fahad, Hossain M; Gao, Wei; Nyein, Hnin Yin Yin; Lin, Liwei; Javey, Ali

    2017-07-28

    Real-time detection of basic physiological parameters such as blood pressure and heart rate is an important target in wearable smart devices for healthcare. Among these, the core body temperature is one of the most important basic medical indicators of fever, insomnia, fatigue, metabolic functionality, and depression. However, traditional wearable temperature sensors are based upon the measurement of skin temperature, which can vary dramatically from the true core body temperature. Here, we demonstrate a three-dimensional (3D) printed wearable "earable" smart device that is designed to be worn on the ear to track core body temperature from the tympanic membrane (i.e., ear drum) based on an infrared sensor. The device is fully integrated with data processing circuits and a wireless module for standalone functionality. Using this smart earable device, we demonstrate that the core body temperature can be accurately monitored regardless of the environment and activity of the user. In addition, a microphone and actuator are also integrated so that the device can also function as a bone conduction hearing aid. Using 3D printing as the fabrication method enables the device to be customized for the wearer for more personalized healthcare. This smart device provides an important advance in realizing personalized health care by enabling real-time monitoring of one of the most important medical parameters, core body temperature, employed in preliminary medical screening tests.

  11. Development of a Virtual Museum Including a 4d Presentation of Building History in Virtual Reality

    Science.gov (United States)

    Kersten, T. P.; Tschirschwitz, F.; Deggim, S.

    2017-02-01

    In the last two decades the definition of the term "virtual museum" has changed due to rapid technological developments. Using today's available 3D technologies, a virtual museum is no longer just a presentation of collections on the Internet or a virtual tour of an exhibition using panoramic photography. On the one hand, a virtual museum should enhance a museum visitor's experience by providing access to additional materials for review and knowledge deepening either before or after the real visit. On the other hand, a virtual museum should also be usable as teaching material in the context of museum education. The Laboratory for Photogrammetry & Laser Scanning of the HafenCity University Hamburg has developed a virtual museum (VM) of the museum "Alt-Segeberger Bürgerhaus", a historic town house. The VM offers two options for visitors wishing to explore the museum without travelling to the city of Bad Segeberg, Schleswig-Holstein, Germany: option (a), an interactive computer-based tour in which visitors explore the exhibition and collect information of interest, or option (b), immersion in 3D virtual reality with the HTC Vive Virtual Reality System.

  12. Virtualized endoscope system. An application of virtual reality technology to diagnostic aid

    International Nuclear Information System (INIS)

    Mori, Kensaku; Urano, Akihiro; Toriwaki, Jun-ichiro; Hasegawa, Jun-ichi; Anno, Hirofumi; Katada, Kazuhiro.

    1996-01-01

    In this paper we propose a new medical image processing system called the 'Virtualized Endoscope System (VES)', which can examine the inside of a virtualized human body. The virtualized human body is a 3-D digital image acquired by a scanner such as an X-ray CT or MRI scanner. VES consists of three modules: (1) imaging, (2) segmentation and reconstruction, and (3) interactive operation. The interactive operation module has three major functions: (a) display of, (b) measurement from, and (c) manipulation of the virtualized human body. The user of the system can freely observe both the inside and the outside of a target organ from any point and any direction, and can interactively perform necessary angle and length measurements at any time during observation. VES enables repeated, painless observation of areas a real endoscope cannot enter, from directions a real endoscope cannot achieve. We applied this system to real 3-D X-ray CT images and obtained good results. (author)

  13. Beyond Virtual Replicas: 3D Modeling and Maltese Prehistoric Architecture

    Directory of Open Access Journals (Sweden)

    Filippo Stanco

    2013-01-01

    Full Text Available In the past decade, computer graphics have become strategic for the development of projects aimed at the interpretation of archaeological evidence and the dissemination of scientific results to the public. Among all the solutions available, the use of 3D models is particularly relevant for the reconstruction of poorly preserved sites and monuments destroyed by natural causes or human actions. These digital replicas are, at the same time, a virtual environment that can be used as a tool for the interpretative hypotheses of archaeologists and as an effective medium for a visual description of the cultural heritage. In this paper, the innovative methodology, aims, and outcomes of a virtual reconstruction of the Borg in-Nadur megalithic temple, carried out by the Archeomatica Project of the University of Catania, are offered as a case study for a virtual archaeology of prehistoric Malta.

  14. Network Dynamics with BrainX3: A Large-Scale Simulation of the Human Brain Network with Real-Time Interaction

    Directory of Open Access Journals (Sweden)

    Xerxes D. Arsiwalla

    2015-02-01

    Full Text Available BrainX3 is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX3 in real-time by perturbing brain regions with transient stimulations to observe reverberating network activity, simulate lesion dynamics, or implement network analysis functions from a library of graph-theoretic measures. BrainX3 can thus be used as a novel immersive platform for real-time exploration and analysis of dynamical activity patterns in brain networks, both at rest and in a task-related state, for discovery of signaling pathways associated with brain function and/or dysfunction, and as a tool for virtual neurosurgery. Our results demonstrate these functionalities and shed light on the dynamics of the resting-state attractor. Specifically, we found that a noisy network seems to favor a low-firing attractor state, and that the dynamics of a noisy network are less resilient to lesions. Our simulations of TMS perturbations show that even though TMS inhibits most of the network, it also sparsely excites a few regions. This is presumably due to anti-correlations in the dynamics and suggests that even a lesioned network can show sparsely distributed increased activity, compared to the healthy resting state, over specific brain areas.

  15. 3D virtual human rapid modeling method based on top-down modeling mechanism

    Directory of Open Access Journals (Sweden)

    LI Taotao

    2017-01-01

    Full Text Available Aiming to satisfy the vast demand for custom-made 3D virtual human characters and for rapid modeling in the field of 3D virtual reality, a new top-down rapid modeling method for virtual humans is put forward in this paper, based on a systematic analysis of the current situation and shortcomings of virtual human modeling technology. After the top-level design of the virtual human's hierarchical structure frame, modular expression of the virtual human and parameter design for each module are achieved level by level downwards. While the relationships of connectors and mapping constraints among different modules are established, the definition of the size and texture parameters is also completed. A standardized process is meanwhile produced to support and adapt the practical operation of virtual human top-down rapid modeling. Finally, a modeling application, which takes a Chinese captain character as an example, is carried out to validate the virtual human rapid modeling method based on the top-down modeling mechanism. The result demonstrates high modeling efficiency and provides a new concept for 3D virtual human geometric modeling and texture modeling.

  16. Virtual Boutique: a 3D modeling and content-based management approach to e-commerce

    Science.gov (United States)

    Paquet, Eric; El-Hakim, Sabry F.

    2000-12-01

    The Virtual Boutique is made up of three modules: the decor, the market and the search engine. The decor is the physical space occupied by the Virtual Boutique. It can reproduce any existing boutique. For this purpose, photogrammetry is used: a set of pictures of a real boutique or space is taken, and a virtual 3D representation of this space is calculated from them. Calculations are performed with software developed at NRC. This representation consists of meshes and texture maps. The camera used in the acquisition process determines the resolution of the texture maps. Decorative elements are added, such as paintings, computer-generated objects and scanned objects. The objects are scanned with a laser scanner developed at NRC, which allows simultaneous acquisition of range and color information based on white laser beam triangulation. The second module, the market, is made up of all the merchandise and the manipulators, which are used to handle and compare the objects. The third module, the search engine, can search the inventory based on an object shown by the customer in order to retrieve similar objects based on shape and color. The items of interest are displayed in the boutique by reconfiguring the market space, which means that the boutique can be continuously customized according to the customer's needs. The Virtual Boutique is entirely written in Java 3D, runs in mono and stereo modes, and has been optimized to allow high-quality rendering.
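The search engine's retrieval of "similar objects based on shape and color" can be sketched with a simple color-similarity ranking. This is an illustrative sketch only, not the Virtual Boutique's actual algorithm: the function names, the coarse RGB histogram descriptor, and the toy inventory are assumptions.

```python
def colour_histogram(pixels, bins=4):
    """Coarse RGB histogram: each channel quantised into `bins` levels."""
    hist = [0] * (bins ** 3)
    for r, g, b in pixels:
        idx = ((r * bins // 256) * bins * bins
               + (g * bins // 256) * bins
               + (b * bins // 256))
        hist[idx] += 1
    total = float(len(pixels)) or 1.0
    return [h / total for h in hist]

def similarity(h1, h2):
    """Histogram intersection: 1.0 for identical colour distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def retrieve(query_pixels, inventory, top_k=3):
    """Rank inventory items by colour similarity to the query object."""
    q = colour_histogram(query_pixels)
    ranked = sorted(inventory.items(),
                    key=lambda kv: similarity(q, colour_histogram(kv[1])),
                    reverse=True)
    return [name for name, _ in ranked[:top_k]]

# Toy data: each "object" is a list of RGB pixel samples.
red_vase = [(250, 10, 10)] * 100
inventory = {
    "red lamp":  [(240, 20, 15)] * 100,
    "blue bowl": [(10, 10, 245)] * 100,
    "red chair": [(255, 5, 5)] * 100,
}
print(retrieve(red_vase, inventory, top_k=2))  # the two red items rank first
```

A real system would combine such a color descriptor with a shape descriptor (the abstract mentions both) and search the 3D models' texture and geometry rather than raw pixel lists.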

  17. Image fusion in craniofacial virtual reality modeling based on CT and 3dMD photogrammetry.

    Science.gov (United States)

    Xin, Pengfei; Yu, Hongbo; Cheng, Huanchong; Shen, Shunyao; Shen, Steve G F

    2013-09-01

    The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface. A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereolithography (STL) file format, and the 3dMD model was exported in Virtual Reality Modeling Language (VRML) file format. Image registration and fusion were performed in Mimics software. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft tissue model and the 3dMD facial surface were also analyzed. The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face are mainly located in the bilateral cheeks and eyeballs, where the errors are more than 1.5 mm. However, the image fusion of the whole point cloud sets of CT and 3dMD is acceptable, with a minimum error of less than 1 mm. The ease of use and high reliability of CT-3dMD image fusion allows the 3D virtual head to be an accurate, realistic, and widespread tool, and is of great benefit to virtual face modeling.
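The genetic-algorithm alignment step can be illustrated with a toy example: evolving a rigid 2-D translation that minimises the registration error between two point sets. This is an illustrative sketch only, not the authors' implementation; the population size, mutation scale, and the averaging crossover are assumptions, and real craniofacial fusion optimises a full 3-D rigid transform.

```python
import random

def error(offset, src, dst):
    """Mean squared distance between translated src points and dst points."""
    dx, dy = offset
    return sum((x + dx - u) ** 2 + (y + dy - v) ** 2
               for (x, y), (u, v) in zip(src, dst)) / len(src)

def genetic_align(src, dst, pop=30, gens=60, seed=0):
    """Toy GA: evolve a candidate translation with minimum registration error."""
    rng = random.Random(seed)
    # Initial population: random candidate translations.
    population = [(rng.uniform(-5, 5), rng.uniform(-5, 5)) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda o: error(o, src, dst))
        elite = population[: pop // 3]            # selection (elitist)
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)           # crossover: average parents
            children.append(((a[0] + b[0]) / 2 + rng.gauss(0, 0.1),   # + mutation
                             (a[1] + b[1]) / 2 + rng.gauss(0, 0.1)))
        population = elite + children
    return min(population, key=lambda o: error(o, src, dst))

src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
dst = [(x + 1.5, y - 0.8) for x, y in src]        # ground-truth shift
dx, dy = genetic_align(src, dst)
print(round(dx, 1), round(dy, 1))                  # approaches 1.5, -0.8
```

Because the best candidates are carried over unchanged each generation, the alignment error decreases monotonically, mirroring the "minimum error" criterion the abstract describes.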

  18. Virtual 3D planning of tracheostomy placement and clinical applicability of 3D cannula design : A three-step study

    NARCIS (Netherlands)

    de Kleijn, Bertram J; Kraeima, Joep; Wachters, Jasper E; van der Laan, Bernard F A M; Wedman, Jan; Witjes, M J H; Halmos, Gyorgy B

    AIM: We aimed to investigate the potential of 3D virtual planning of tracheostomy tube placement and 3D cannula design to prevent tracheostomy complications due to inadequate cannula position. MATERIALS AND METHODS: 3D models of commercially available cannulas were positioned in 3D models of the

  19. Enhancing Time-Connectives with 3D Immersive Virtual Reality (IVR)

    Science.gov (United States)

    Passig, David; Eden, Sigal

    2010-01-01

    This study sought to test the most efficient representation mode with which children with hearing impairment could express a story while producing connectives indicating relations of time and of cause and effect. Using Bruner's (1973, 1986, 1990) representation stages, we tested the comparative effectiveness of Virtual Reality (VR) as a mode of…

  20. A standardized set of 3-D objects for virtual reality research and applications.

    Science.gov (United States)

    Peeters, David

    2018-06-01

    The use of immersive virtual reality as a research tool is rapidly increasing in numerous scientific disciplines. By combining ecological validity with strict experimental control, immersive virtual reality provides the potential to develop and test scientific theories in rich environments that closely resemble everyday settings. This article introduces the first standardized database of colored three-dimensional (3-D) objects that can be used in virtual reality and augmented reality research and applications. The 147 objects have been normed for name agreement, image agreement, familiarity, visual complexity, and corresponding lexical characteristics of the modal object names. The availability of standardized 3-D objects for virtual reality research is important, because reaching valid theoretical conclusions hinges critically on the use of well-controlled experimental stimuli. Sharing standardized 3-D objects across different virtual reality labs will allow science to move forward more quickly.
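A "name agreement" norm of the kind listed above is conventionally the proportion of participants who produce the modal (most frequent) name for a stimulus. The sketch below illustrates that computation; the responses are invented for illustration, and the database's actual norming procedure may differ in detail.

```python
from collections import Counter

def name_agreement(responses):
    """Return (modal name, proportion of responses matching it)."""
    counts = Counter(responses)
    modal, n = counts.most_common(1)[0]
    return modal, n / len(responses)

# Hypothetical naming responses for one 3-D object.
responses = ["apple", "apple", "apple", "fruit", "apple", "pomme"]
modal, agreement = name_agreement(responses)
print(modal, round(agreement, 2))  # apple 0.67
```

High name agreement (close to 1.0) indicates a well-controlled stimulus; objects with many competing names are the ones most likely to confound a naming or memory experiment.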

  1. Real-time interactive 3D manipulation of particles viewed in two orthogonal observation planes

    DEFF Research Database (Denmark)

    Perch-Nielsen, I.; Rodrigo, P.J.; Glückstad, J.

    2005-01-01

    The generalized phase contrast (GPC) method has been applied to transform a single TEM00 beam into a manifold of counterpropagating-beam traps capable of real-time interactive manipulation of multiple microparticles in three dimensions (3D). This paper reports on the use of low numerical aperture...... for imaging through each of the two opposing objective lenses. As a consequence of the large working distance, simultaneous monitoring of the trapped particles in a second orthogonal observation plane is demonstrated. (C) 2005 Optical Society of America....

  2. A new approach towards image based virtual 3D city modeling by using close range photogrammetry

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2014-05-01

    and high resolution satellite images are costly. In this study, the proposed method is based only on simple video recording of the area, and is thus suitable for 3D city modeling. A photo-realistic, scalable, geo-referenced virtual 3D city model is useful for many kinds of applications, such as planning for navigation, tourism, disaster management, transportation, municipal administration, urban and environmental management, and the real-estate industry. This study will therefore provide a good roadmap for the geomatics community to create photo-realistic virtual 3D city models using close range photogrammetry.

  3. 4D Near Real-Time Environmental Monitoring Using Highly Temporal LiDAR

    Science.gov (United States)

    Höfle, Bernhard; Canli, Ekrem; Schmitz, Evelyn; Crommelinck, Sophie; Hoffmeister, Dirk; Glade, Thomas

    2016-04-01

    The last decade has witnessed extensive applications of 3D environmental monitoring with LiDAR technology, also referred to as laser scanning. Although several automatic methods have been developed to extract environmental parameters from LiDAR point clouds, little research has focused on highly multitemporal near real-time LiDAR (4D-LiDAR) for environmental monitoring. 4D-LiDAR has large potential for landscape objects with high and varying rates of change (e.g. plant growth) and also for phenomena with sudden, unpredictable changes (e.g. geomorphological processes). In this presentation we will report on the most recent findings of the research projects 4DEMON (http://uni-heidelberg.de/4demon) and NoeSLIDE (https://geomorph.univie.ac.at/forschung/projekte/aktuell/noeslide/). The method development in both projects is based on two real-world use cases: i) surface parameter derivation for agricultural crops (e.g. crop height) and ii) change detection of landslides. Both projects exploit the "full history" contained in the LiDAR point cloud time series. One crucial initial step of 4D-LiDAR analysis is the co-registration over time, 3D-georeferencing and time-dependent quality assessment of the LiDAR point cloud time series. Due to the large number of datasets (e.g. one full LiDAR scan per day), the procedure needs to be performed fully automatically. Furthermore, the online near real-time 4D monitoring system requires triggers that can detect removal or movement of the tie reflectors (used for co-registration) or of the scanner itself. This guarantees long-term data acquisition with high quality. We will present results from a georeferencing experiment for 4D-LiDAR monitoring, which benchmarks co-registration, 3D-georeferencing and fully automatic detection of events (e.g. removal/movement of reflectors or scanner). Secondly, we will show our empirical findings from an ongoing permanent LiDAR observation of a landslide (Gresten
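The per-epoch change detection underlying this kind of multitemporal monitoring can be sketched as a nearest-neighbour comparison between two point clouds. This is an illustrative sketch, not the 4DEMON/NoeSLIDE pipeline: real 4D-LiDAR workflows use spatial indexing, co-registration, and uncertainty-aware thresholds, and the points and threshold below are invented.

```python
import math

def nn_distance(p, cloud):
    """Distance from point p to its nearest neighbour in cloud (brute force)."""
    return min(math.dist(p, q) for q in cloud)

def changed_points(epoch_a, epoch_b, threshold=0.5):
    """Points of epoch_b farther than `threshold` from every point of epoch_a."""
    return [p for p in epoch_b if nn_distance(p, epoch_a) > threshold]

# Two toy scan epochs of the same surface; one point has moved upward.
epoch_a = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)]
epoch_b = [(0, 0, 0), (1, 0, 0), (2, 0, 1.2), (3, 0, 0)]
moved = changed_points(epoch_a, epoch_b)
print(moved)  # [(2, 0, 1.2)]
```

With one scan per day, running such a comparison against the previous epoch (after co-registration) is what turns a static point cloud into a near real-time change signal.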

  4. Interreality in practice: bridging virtual and real worlds in the treatment of posttraumatic stress disorders.

    Science.gov (United States)

    Riva, Giuseppe; Raspelli, Simona; Algeri, Davide; Pallavicini, Federica; Gorini, Alessandra; Wiederhold, Brenda K; Gaggioli, Andrea

    2010-02-01

    The use of new technologies, particularly virtual reality, is not new in the treatment of posttraumatic stress disorders (PTSD): VR is used to facilitate the activation of the traumatic event during exposure therapy. However, during the therapy, VR is a new and distinct realm, separate from the emotions and behaviors experienced by the patient in the real world: the behavior of the patient in VR has no direct effects on the real-life experience; the emotions and problems experienced by the patient in the real world are not directly addressed in the VR exposure. In this article, we suggest that the use of a new technological paradigm, Interreality, may improve the clinical outcome of PTSD. The main feature of Interreality is a twofold link between the virtual and real worlds: (a) behavior in the physical world influences the experience in the virtual one; (b) behavior in the virtual world influences the experience in the real one. This is achieved through 3D shared virtual worlds; biosensors and activity sensors (from the real to the virtual world); and personal digital assistants and/or mobile phones (from the virtual world to the real one). We describe different technologies that are involved in the Interreality vision and its clinical rationale. To illustrate the concept of Interreality in practice, a clinical scenario is also presented and discussed: Rosa, a 55-year-old nurse, involved in a major car accident.

  5. See-through 3D technology for augmented reality

    Science.gov (United States)

    Lee, Byoungho; Lee, Seungjae; Li, Gang; Jang, Changwon; Hong, Jong-Young

    2017-06-01

    Augmented reality is attracting a lot of attention recently as one of the most spotlighted next-generation technologies. To move toward the realization of ideal augmented reality, 3D virtual information must be integrated into the real world. This integration should not be noticed by users, blurring the boundary between the virtual and real worlds. Thus, the ultimate device for augmented reality can reconstruct and superimpose 3D virtual information on the real world so that the two are not distinguishable, which is referred to as see-through 3D technology. Here, we introduce our previous research on combining see-through displays and 3D technologies using emerging optical combiners: holographic optical elements and index-matched optical elements. Holographic optical elements are volume gratings that have angular and wavelength selectivity. Index-matched optical elements are partially reflective elements that use a compensation element for index matching. Using these optical combiners, we could implement see-through 3D displays based on typical methodologies including integral imaging, digital holographic displays, multi-layer displays, and retinal projection. Some of these methods are expected to be optimized and customized for head-mounted or wearable displays. We conclude with a demonstration and analysis of fundamental research on head-mounted see-through 3D displays.

  6. Using virtual ridge augmentation and 3D printing to fabricate a titanium mesh positioning device: A novel technique letter.

    Science.gov (United States)

    Al-Ardah, Aladdin; Alqahtani, Nasser; AlHelal, Abdulaziz; Goodacre, Brian; Swamidass, Rajesh; Garbacea, Antoanela; Lozada, Jaime

    2018-05-02

    This technique describes a novel approach for planning and augmenting a large bony defect using a titanium mesh (TiMe). A 3-dimensional (3D) surgical model was virtually created from a cone beam computed tomography (CBCT) scan and a wax pattern of the final prosthetic outcome. The required bone volume (horizontal and vertical) was digitally augmented and then 3D printed to create a bone model. The 3D model was then used to contour the TiMe in accordance with the digital augmentation. With the contoured, preformed TiMe on the 3D-printed model, a positioning jig was made to aid the placement of the TiMe as planned during surgery. Although this technique does not impact the final outcome of the augmentation procedure, it allows the clinician to virtually design the augmentation, preform and contour the TiMe, and create a positioning jig, reducing surgical time and error.

  7. Application of 3d Model of Cultural Relics in Virtual Restoration

    Science.gov (United States)

    Zhao, S.; Hou, M.; Hu, Y.; Zhao, Q.

    2018-04-01

    In the traditional cultural relics splicing process, in order to identify the correct spatial location of the cultural relics debris, experts need to manually splice the existing debris. The repeated contact between debris can easily cause secondary damage to the cultural relics. In this paper, the application process of 3D model of cultural relic in virtual restoration is put forward, and the relevant processes and ideas are verified with the example of Terracotta Warriors data. Through the combination of traditional cultural relics restoration methods and computer virtual reality technology, virtual restoration of high-precision 3D models of cultural relics can provide a scientific reference for virtual restoration, avoiding the secondary damage to the cultural relics caused by improper restoration. The efficiency and safety of the preservation and restoration of cultural relics have been improved.

  8. APPLICATION OF 3D MODEL OF CULTURAL RELICS IN VIRTUAL RESTORATION

    Directory of Open Access Journals (Sweden)

    S. Zhao

    2018-04-01

    Full Text Available In the traditional cultural relics splicing process, in order to identify the correct spatial location of the cultural relics debris, experts need to manually splice the existing debris. The repeated contact between debris can easily cause secondary damage to the cultural relics. In this paper, the application process of 3D model of cultural relic in virtual restoration is put forward, and the relevant processes and ideas are verified with the example of Terracotta Warriors data. Through the combination of traditional cultural relics restoration methods and computer virtual reality technology, virtual restoration of high-precision 3D models of cultural relics can provide a scientific reference for virtual restoration, avoiding the secondary damage to the cultural relics caused by improper restoration. The efficiency and safety of the preservation and restoration of cultural relics have been improved.

  9. EUROPEANA AND 3D

    Directory of Open Access Journals (Sweden)

    D. Pletinckx

    2012-09-01

    Full Text Available The current 3D hype creates a lot of interest in 3D. People go to 3D movies, but are we ready to use 3D in our homes, in our offices, in our communication? Are we ready to deliver real 3D to a general public and use interactive 3D in a meaningful way to enjoy, learn, communicate? The CARARE project is realising this at the moment in the domain of monuments and archaeology, so that real 3D of archaeological sites and European monuments will be available to the general public by 2012. There are several aspects to this endeavour. First of all is the technical aspect of flawlessly delivering 3D content over all platforms and operating systems, without installing software. We currently have a working solution in PDF, but HTML5 will probably be the future. Secondly, there is still little knowledge on how to create 3D learning objects, 3D tourist information or 3D scholarly communication. We are still in a prototype phase when it comes to integrating 3D objects in physical or virtual museums. Nevertheless, Europeana has tremendous potential as a multi-faceted virtual museum. Finally, 3D has large potential to act as a hub of information, linking to related 2D imagery, texts, video and sound. We describe how to create such rich, explorable 3D objects that can be used intuitively by the generic Europeana user, and what metadata is needed to support the semantic linking.

  10. Hybrid Design Tools in a Social Virtual Reality Using Networked Oculus Rift: A Feasibility Study in Remote Real-Time Interaction

    NARCIS (Netherlands)

    Wendrich, Robert E.; Chambers, Kris-Howard; Al-Halabi, Wadee; Seibel, Eric J.; Grevenstuk, Olaf; Ullman, David; Hoffman, Hunter G.

    2016-01-01

    Hybrid Design Tool Environments (HDTE) allow designers and engineers to use real tangible tools and physical objects and/or artifacts to make and create real-time virtual representations and presentations on-the-fly. Manipulations of the real tangible objects (e.g., real wire mesh, clay, sketches,

  11. Monitoring the effects of doxorubicin on 3D-spheroid tumor cells in real-time

    Directory of Open Access Journals (Sweden)

    Baek N

    2016-11-01

    Full Text Available NamHuk Baek,1,* Ok Won Seo,1,* MinSung Kim,1 John Hulme,2 Seong Soo A An2 1Department of R & D, NanoEntek Inc., Seoul, Republic of Korea; 2Department of BioNano Technology, Gachon University, Gyeonggi-do, Republic of Korea *These authors contributed equally to this work Abstract: Recently, increasing numbers of cell culture experiments with 3D spheroids have presented results that correlate better with in vivo behavior than traditional 2D cell culture systems. 3D spheroids offer a simple and highly reproducible model that exhibits many characteristics of natural tissue, such as the production of extracellular matrix. In this paper numerous cell lines were screened and selected depending on their ability to form and maintain a spherical shape. The effects of increasing concentrations of doxorubicin (DXR) on the integrity and viability of the selected spheroids were then measured at regular intervals and in real-time. In total 12 cell lines, adenocarcinomic alveolar basal epithelial (A549), muscle (C2C12), prostate (DU145), testis (F9), pituitary epithelial-like (GH3), cervical cancer (HeLa), HeLa contaminant (HEp2), embryo (NIH3T3), embryo (PA317), neuroblastoma (SH-SY5Y), osteosarcoma (U2OS), and embryonic kidney cells (293T), were screened. Out of the 12, 8 cell lines, NIH3T3, C2C12, 293T, SH-SY5Y, A549, HeLa, PA317, and U2OS, formed regular spheroids, and the effects of DXR on these structures were measured at regular intervals. Finally, 5 cell lines, A549, HeLa, SH-SY5Y, U2OS, and 293T, were selected for real-time monitoring and the effects of DXR treatment on their behavior were continuously recorded for 5 days. A potential correlation regarding the effects of DXR on spheroid viability and ATP production was measured on days 1, 3, and 5. Cytotoxicity of DXR seemed to occur after endocytosis, since cellular activities and ATP production were still viable after 1 day of treatment in all spheroids except SH-SY5Y. Both cellular activity and ATP production were

  12. 3D virtual environment of Taman Mini Indonesia Indah in a web

    Science.gov (United States)

    Wardijono, B. A.; Wardhani, I. P.; Chandra, Y. I.; Pamungkas, B. U. G.

    2018-05-01

    Taman Mini Indonesia Indah (TMII) is the largest culture-based recreational park in Indonesia. The park covers 250 acres and contains traditional houses from the various provinces of Indonesia. The official website of TMII describes the traditional houses, but the information available to the public is limited. To provide the public with more detailed information about TMII, this research aims to create and develop virtual traditional houses as 3D graphics models and show them via a website. Virtual Reality (VR) technology was used to display the visualization of TMII and the surrounding environment. This research used Blender software to create the 3D models and Unity3D software to make virtual reality models that can be shown on the web. This research successfully created 33 virtual traditional houses of the provinces in Indonesia. The textures of the traditional houses were taken from the originals to make the models realistic. The result of this research is the TMII website, including virtual culture houses that can be displayed through a web browser. The website consists of virtual environment scenes that internet users can walk through and navigate.

  13. Time multiplexing for increased FOV and resolution in virtual reality

    Science.gov (United States)

    Miñano, Juan C.; Benitez, Pablo; Grabovičkić, Dejan; Zamora, Pablo; Buljan, Marina; Narasimhan, Bharathwaj

    2017-06-01

    We introduce a time multiplexing strategy to increase the total pixel count of the virtual image seen in a VR headset. This translates into an improvement of the pixel density, the field of view (FOV), or both. A given virtual image is displayed by generating a succession of partial real images, each representing part of the virtual image and together representing the whole of it. Each partial real image uses the full set of physical pixels available in the display. The partial real images are successively formed and combine spatially and temporally to form a virtual image viewable from the eye position. Partial real images are imaged through different optical channels depending on their time slot. Shutters or other schemes are used to prevent a partial real image from being imaged through the wrong optical channel or at the wrong time slot. This time multiplexing strategy requires real images to be shown at high frame rates (>120 fps). Available display and shutter technologies are discussed. Several optical designs for achieving this time multiplexing scheme in a compact format are shown. This scheme allows increasing the resolution/FOV of the virtual image not only by increasing the physical pixel density but also by decreasing the pixel switching time, a feature that may be simpler to achieve in certain circumstances.
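The trade-off described above can be made concrete with back-of-the-envelope arithmetic: multiplexing N partial images multiplies the virtual pixel count by N but divides the per-channel refresh rate by N. The panel resolution, channel count, and frame rate below are illustrative assumptions, not the authors' specifications.

```python
def effective_pixels(display_pixels, channels):
    """Total virtual-image pixels when `channels` partial images are tiled."""
    return display_pixels * channels

def per_channel_rate(display_fps, channels):
    """Refresh rate at which each optical channel's partial image is updated."""
    return display_fps / channels

display_pixels = 1280 * 720   # physical panel resolution (assumed)
channels = 2                  # two time-multiplexed optical channels
display_fps = 240             # fast panel, satisfying the >120 fps requirement

print(effective_pixels(display_pixels, channels))  # 1843200 virtual pixels
print(per_channel_rate(display_fps, channels))     # 120.0 Hz per channel
```

This is why the abstract ties the scheme to high frame rates: each added channel costs refresh rate, so the panel must start well above the flicker threshold for the per-channel rate to remain acceptable.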

  14. Computer Tool for Automatically Generated 3D Illustration in Real Time from Archaeological Scanned Pieces

    OpenAIRE

    Luis López; Germán Arroyo; Domingo Martín

    2012-01-01

    The graphical documentation process of archaeological pieces requires the active involvement of a professional artist to recreate beautiful illustrations using a wide variety of expressive techniques. Frequently, the artist’s work is limited by the inconvenience of working only with the photographs of the pieces he is going to illustrate. This paper presents a software tool that allows the easy generation of illustrations in real time from 3D scanned models. The developed interface allows the...

  15. 4-D ICE: A 2-D Array Transducer With Integrated ASIC in a 10-Fr Catheter for Real-Time 3-D Intracardiac Echocardiography.

    Science.gov (United States)

    Wildes, Douglas; Lee, Warren; Haider, Bruno; Cogan, Scott; Sundaresan, Krishnakumar; Mills, David M; Yetter, Christopher; Hart, Patrick H; Haun, Christopher R; Concepcion, Mikael; Kirkhorn, Johan; Bitoun, Marc

    2016-12-01

    We developed a 2.5 × 6.6 mm² 2-D array transducer with an integrated transmit/receive application-specific integrated circuit (ASIC) for real-time 3-D intracardiac echocardiography (4-D ICE) applications. The ASIC and transducer design were optimized so that the high-voltage transmit, low-voltage time-gain control and preamp, subaperture beamformer, and digital control circuits for each transducer element all fit within the 0.019-mm² area of the element. The transducer assembly was deployed in a 10-Fr (3.3-mm diameter) catheter, integrated with a GE Vivid E9 ultrasound imaging system, and evaluated in three preclinical studies. The 2-D image quality and imaging modes were comparable to commercial 2-D ICE catheters. The 4-D field of view was at least 90° × 60° × 8 cm and could be imaged at 30 vol/s, sufficient to visualize cardiac anatomy and other diagnostic and therapy catheters. 4-D ICE should significantly reduce X-ray fluoroscopy use and dose during electrophysiology ablation procedures. 4-D ICE may be able to replace transesophageal echocardiography (TEE), and the associated risks and costs of general anesthesia, for guidance of some structural heart procedures.
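The quoted element area and aperture size imply an upper bound on the element count, which is a useful sanity check when reading such specifications. The arithmetic below is illustrative, not a figure from the paper (the actual array will have fewer elements once kerf and peripheral circuitry are accounted for).

```python
# Aperture of the 2-D array transducer and per-element footprint (from the abstract).
aperture_mm2 = 2.5 * 6.6   # 2.5 mm x 6.6 mm aperture
element_mm2 = 0.019        # area budget per element, including its circuits

print(aperture_mm2)                       # 16.5 (mm^2)
print(int(aperture_mm2 // element_mm2))   # 868 elements as an upper bound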

  16. Real-time viability and apoptosis kinetic detection method of 3D multicellular tumor spheroids using the Celigo Image Cytometer.

    Science.gov (United States)

    Kessel, Sarah; Cribbes, Scott; Bonasu, Surekha; Rice, William; Qiu, Jean; Chan, Leo Li-Ying

    2017-09-01

    The development of three-dimensional (3D) multicellular tumor spheroid models for cancer drug discovery research has increased in the recent years. The use of 3D tumor spheroid models may be more representative of the complex in vivo tumor microenvironments in comparison to two-dimensional (2D) assays. Currently, viability of 3D multicellular tumor spheroids has been commonly measured on standard plate-readers using metabolic reagents such as CellTiter-Glo® for end point analysis. Alternatively, high content image cytometers have been used to measure drug effects on spheroid size and viability. Previously, we have demonstrated a novel end point drug screening method for 3D multicellular tumor spheroids using the Celigo Image Cytometer. To better characterize the cancer drug effects, it is important to also measure the kinetic cytotoxic and apoptotic effects on 3D multicellular tumor spheroids. In this work, we demonstrate the use of PI and caspase 3/7 stains to measure viability and apoptosis for 3D multicellular tumor spheroids in real-time. The method was first validated by staining different types of tumor spheroids with PI and caspase 3/7 and monitoring the fluorescent intensities for 16 and 21 days. Next, PI-stained and nonstained control tumor spheroids were digested into single cell suspension to directly measure viability in a 2D assay to determine the potential toxicity of PI. Finally, extensive data analysis was performed on correlating the time-dependent PI and caspase 3/7 fluorescent intensities to the spheroid size and necrotic core formation to determine an optimal starting time point for cancer drug testing. The ability to measure real-time viability and apoptosis is highly important for developing a proper 3D model for screening tumor spheroids, which can allow researchers to determine time-dependent drug effects that usually are not captured by end point assays. This would improve the current tumor spheroid analysis method to potentially better

  17. Virtual reality 3D headset based on DMD light modulators

    Science.gov (United States)

    Bernacki, Bruce E.; Evans, Allan; Tang, Edward

    2014-06-01

    We present the design of an immersion-type 3D headset suitable for virtual reality applications based upon digital micromirror devices (DMDs). Current methods for presenting information for virtual reality are focused either on polarization-based modulators such as liquid crystal on silicon (LCoS) devices, or on miniature LCD or LED displays, often using lenses to place the image at infinity. LCoS modulators are an area of active research and development, and reduce the amount of viewing light by 50% due to the use of polarization. Viewable LCD or LED screens may suffer low resolution, cause eye fatigue, and exhibit a "screen door" or pixelation effect due to the low pixel fill factor. Our approach leverages a mature technology based on silicon micromirrors delivering 720p resolution displays in a small form factor with high fill factor. Supporting chip sets allow rapid integration of these devices into wearable displays with high-definition resolution and low power consumption, and many of the design methods developed for DMD projector applications can be adapted to display use. Potential applications include night driving with natural depth perception, piloting of UAVs, fusion of multiple sensors for pilots, training, vision diagnostics and consumer gaming. In our design concept, light from the DMD is imaged to infinity and the user's own eye lens forms a real image on the user's retina, resulting in a virtual retinal display.

  18. Current status of DIII-D real-time digital plasma control

    International Nuclear Information System (INIS)

    Penaflor, B.G.; Piglowski, D.A.; Ferron, J.R.; Walker, M.L.

    1999-06-01

    This paper describes the current status of real-time digital plasma control for the DIII-D tokamak. The digital plasma control system (PCS) has been in place at DIII-D since the early 1990s and continues to expand and improve in its capabilities to monitor and control plasma parameters for DIII-D fusion science experiments. The PCS monitors over 200 tokamak parameters from the DIII-D experiment using a real-time data acquisition system that acquires a new set of samples once every 60 µs. This information is then used in a number of feedback control algorithms to compute and control a variety of parameters, including those affecting plasma shape and position. A number of system-related improvements have increased the usability and flexibility of the DIII-D PCS. These include more graphical user interfaces to assist in entering and viewing the large and ever-growing number of parameters controlled by the PCS, increased interaction with and accessibility from other DIII-D applications, and upgrades to the computer hardware and vendor software. Future plans for the system include possible upgrades of the real-time computers, further links to other DIII-D diagnostic measurements such as real-time Thomson scattering analysis, and joint collaborations with other tokamak experiments including NSTX at Princeton
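The acquire-compute-actuate cycle of such a sampled feedback system can be sketched with a minimal discrete PI loop at the 60 µs sampling period. This is a generic illustration under stated assumptions, not the PCS algorithms: the gains, the first-order toy "plant", and the function names are all invented.

```python
DT = 60e-6  # sampling period: one new sample set every 60 microseconds

def pi_control(setpoint, measure, apply, kp=0.8, ki=200.0, steps=2000):
    """Drive the measured value toward `setpoint` with a discrete PI law."""
    integral = 0.0
    for _ in range(steps):
        err = setpoint - measure()        # acquire
        integral += err * DT              # accumulate integral term
        apply(kp * err + ki * integral)   # actuate

# Toy first-order "plasma parameter" responding sluggishly to the command.
state = {"y": 0.0}
def measure():
    return state["y"]
def apply(u):
    state["y"] += (u - state["y"]) * 0.01

pi_control(setpoint=1.0, measure=measure, apply=apply)
print(round(state["y"], 2))  # settles near the setpoint 1.0
```

The integral term removes the steady-state error that a pure proportional loop would leave; the real PCS runs many such algorithms in parallel, e.g. for plasma shape and position.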

  19. Comparative analysis of video processing and 3D rendering for cloud video games using different virtualization technologies

    Science.gov (United States)

    Bada, Adedayo; Alcaraz-Calero, Jose M.; Wang, Qi; Grecos, Christos

    2014-05-01

    This paper describes a comprehensive empirical performance evaluation of 3D video processing employing a physical/virtual architecture implemented in a cloud environment. Different virtualization technologies, virtual video cards and various 3D benchmark tools have been utilized in order to analyse the optimal performance in the context of 3D online gaming applications. This study highlights 3D video rendering performance under each type of hypervisor, along with other factors including network I/O, disk I/O and memory usage. Comparisons of these factors under well-known virtual display technologies such as VNC, Spice and Virtual 3D adaptors reveal the strengths and weaknesses of the various hypervisors with respect to 3D video rendering and streaming.

  20. The role of virtual reality and 3D modelling in built environment education

    OpenAIRE

    Horne, Margaret; Thompson, Emine Mine

    2007-01-01

    This study builds upon previous research on the integration of Virtual Reality (VR) within the built environment curriculum and aims to investigate the role of Virtual Reality and three-dimensional (3D) computer modelling in learning and teaching in a school of the built environment. In order to achieve this aim, a number of academic experiences were analysed to explore the applicability and viability of 3D computer modelling and Virtual Reality (VR) in built environment subject areas. Altho...

  1. Development of CT and 3D-CT Using Flat Panel Detector Based Real-Time Digital Radiography System

    International Nuclear Information System (INIS)

    Ravindran, V. R.; Sreelakshmi, C.; Vibin

    2008-01-01

    The application of Digital Radiography to the Nondestructive Evaluation (NDE) of space vehicle components is a recent development in India. A real-time DR system based on an amorphous silicon Flat Panel Detector was developed for the NDE of solid rocket motors at the Rocket Propellant Plant of VSSC a few years back. The technique has been successfully established for the nondestructive evaluation of solid rocket motors, and DR images recorded for a few solid rocket specimens are presented in the paper. The real-time DR system is capable of generating sufficient digital X-ray image data, with object rotation, for CT image reconstruction. In this paper, the indigenous development of CT imaging based on the real-time DR system for solid rocket motors is presented. Studies were also carried out to generate 3D-CT images from a set of adjacent CT images of the rocket motor. The capability of revealing the spatial location and characterisation of defects is demonstrated by the CT and 3D-CT images generated.

  2. 3D virtual reconstruction and visualisation of the archaeological site Castellet de Bernabé (Llíria, Spain)

    Directory of Open Access Journals (Sweden)

    Cristina Portalés

    2017-05-01

    Full Text Available 3D virtual reconstruction of cultural heritage is a useful tool for many goals: accurate documentation of our tangible cultural legacy, determination of mechanical alteration of the assets, or shape acquisition prior to restoration and/or reconstruction works. Among these goals, planning and managing the tourism enhancement of heritage sites demands specific instruments and tools to guarantee both site conservation and visitor satisfaction. Archaeological sites are physical witnesses of the past and an open window to research work and scientific discoveries, but usually the major structures no longer exist, and the general public needs a long time and much effort to elaborate a mental reconstruction of the volume and appearance of the site from these remains. This mental reconstruction is essential to build a storyline that communicates the archaeological and historical knowledge efficiently and makes the public aware of the need for conservation. To develop this process of awareness, heritage interpretation starts with the mental immersion of visitors in the archaeological site, which 3D reconstruction definitely helps to achieve. Different technologies exist nowadays for the 3D reconstruction of assets, but with archaeological sites data acquisition requires alternative approaches, as most of the assets no longer exist. In this work, we deal with the virtual reconstruction and visualisation of the archaeological site Castellet de Bernabé by following a mixed approach (surveying techniques and archaeological research). We further give a methodology to process and merge the real and virtual data in order to create augmented views of the site.

  3. 3D virtual character reconstruction from projections: a NURBS-based approach

    Science.gov (United States)

    Triki, Olfa; Zaharia, Titus B.; Preteux, Francoise J.

    2004-05-01

    This work has been carried out within the framework of the industrial project TOON, supported by the French government. TOON aims at developing tools for automating traditional 2D cartoon content production. This paper presents preliminary results of the TOON platform. The proposed methodology addresses the issues of 2D/3D reconstruction from a limited number of drawn projections, and of 2D/3D manipulation/deformation/refinement of virtual characters. Specifically, we show that the NURBS-based modelling approach developed here offers a well-suited framework for generating deformable 3D virtual characters from incomplete 2D information. Furthermore, crucial functionalities such as animation and non-rigid deformation can also be efficiently handled and solved. Note that user interaction takes place exclusively in 2D, through a multiview constraint specification method. This is fully consistent and compliant with the cartoon creator's traditional practice and makes it possible to avoid the use of 3D modelling software packages, which are generally complex to manipulate.
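As a minimal illustration of the machinery underlying NURBS-based modelling (not the TOON platform's code), the following evaluates a point on a NURBS curve via the Cox-de Boor recursion; the control points, weights and knot vector are made-up example data.

```python
# Sketch of NURBS curve evaluation: B-spline basis functions via the
# Cox-de Boor recursion, combined as a weighted rational average of
# control points. Example data are illustrative only.

def basis(i, p, u, knots):
    """i-th degree-p B-spline basis at u (half-open convention: u in [0,1))."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    val = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0:
        val += (u - knots[i]) / denom * basis(i, p - 1, u, knots)
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0:
        val += (knots[i + p + 1] - u) / denom * basis(i + 1, p - 1, u, knots)
    return val

def nurbs_point(u, ctrl, weights, knots, p):
    """Evaluate a NURBS curve point: rational weighted sum of control points."""
    num = [0.0] * len(ctrl[0])
    den = 0.0
    for i, (cp, w) in enumerate(zip(ctrl, weights)):
        b = basis(i, p, u, knots) * w
        den += b
        num = [n + b * c for n, c in zip(num, cp)]
    return [n / den for n in num]

# Quadratic curve with a clamped knot vector: at u=0 the curve
# interpolates the first control point.
ctrl = [[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]]
knots = [0, 0, 0, 1, 1, 1]
pt = nurbs_point(0.0, ctrl, [1.0, 1.0, 1.0], knots, 2)
```

Deforming a character then amounts to moving control points or weights while the smooth surface updates automatically, which is what makes the representation attractive for the 2D-driven workflow described above.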

  4. Autonomous and 3D real-time multi-beam manipulation in a microfluidic environment

    DEFF Research Database (Denmark)

    Perch-Nielsen, I.; Rodrigo, P.J.; Alonzo, C.A.

    2006-01-01

    The Generalized Phase Contrast (GPC) method of optical 3D manipulation has previously been used for controlled spatial manipulation of live biological specimen in real-time. These biological experiments were carried out over a time-span of several hours while an operator intermittently optimized...... the optical system. Here we present GPC-based optical micromanipulation in a microfluidic system where trapping experiments are computer-automated and thereby capable of running with only limited supervision. The system is able to dynamically detect living yeast cells using a computer-interfaced CCD camera......, and respond to this by instantly creating traps at positions of the spotted cells streaming at flow velocities that would be difficult for a human operator to handle. With the added ability to control flow rates, experiments were also carried out to confirm the theoretically predicted axially dependent...

  5. Avatar-mediation and Transformation of Practice in a 3D Virtual World

    DEFF Research Database (Denmark)

    Riis, Marianne

    The purpose of this study is to understand and conceptualize the transformation of a particular community of pedagogical practice based on the implementation of the 3D virtual world, Second Life™. The community setting is a course at the Master's programme on ICT and Learning (MIL), Aalborg...... with knowledge about 3D Virtual Worlds, the influence of the avatar phenomenon and the consequences of 3D-remediation in relation to teaching and learning in online education. Based on the findings, a conceptual design model, a set of design principles, and a design framework has been developed....

  6. Real-time 2-D Phased Array Vector Flow Imaging

    DEFF Research Database (Denmark)

    Holbek, Simon; Hansen, Kristoffer Lindskov; Fogh, Nikolaj

    2018-01-01

    Echocardiography examination of the blood flow is currently either restricted to 1-D techniques in real-time or experimental off-line 2-D methods. This paper presents an implementation of transverse oscillation for real-time 2-D vector flow imaging (VFI) on a commercial BK Ultrasound scanner....... A large field-of-view (FOV) sequence for studying flow dynamics at 11 frames per second (fps) and a sequence for studying peak systolic velocities (PSV) with a narrow FOV at 36 fps were validated. The VFI sequences were validated in a flow-rig with continuous laminar parabolic flow and in a pulsating flow...

  7. Three Dimensional (3D) Printing: A Straightforward, User-Friendly Protocol to Convert Virtual Chemical Models to Real-Life Objects

    Science.gov (United States)

    Rossi, Sergio; Benaglia, Maurizio; Brenna, Davide; Porta, Riccardo; Orlandi, Manuel

    2015-01-01

    A simple procedure to convert protein data bank files (.pdb) into a stereolithography file (.stl) using VMD software (Virtual Molecular Dynamic) is reported. This tutorial allows generating, with a very simple protocol, three-dimensional customized structures that can be printed by a low-cost 3D-printer, and used for teaching chemical education…
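The tutorial above targets the stereolithography (.stl) format via VMD's renderer. As a hedged illustration of what that output format looks like (not the tutorial's actual procedure), the sketch below writes a minimal ASCII STL file; the file name and triangle data are made up.

```python
# Minimal ASCII STL writer: each facet is a unit normal plus exactly
# three vertices, bracketed by "solid"/"endsolid". Illustrative only.

def write_ascii_stl(path, triangles, name="model"):
    """Write triangles [(normal, v1, v2, v3), ...] as an ASCII STL file."""
    with open(path, "w") as f:
        f.write(f"solid {name}\n")
        for normal, *verts in triangles:
            f.write(f"  facet normal {normal[0]} {normal[1]} {normal[2]}\n")
            f.write("    outer loop\n")
            for v in verts:
                f.write(f"      vertex {v[0]} {v[1]} {v[2]}\n")
            f.write("    endloop\n  endfacet\n")
        f.write(f"endsolid {name}\n")

# One upward-facing triangle in the z=0 plane.
tri = ((0.0, 0.0, 1.0), (0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0))
write_ascii_stl("demo.stl", [tri])
```

A real molecular model exported this way simply contains many thousands of such facets tessellating the surface, which is what a low-cost 3D printer's slicer consumes.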

  8. Measurable realistic image-based 3D mapping

    Science.gov (United States)

    Liu, W.; Wang, J.; Wang, J. J.; Ding, W.; Almagbile, A.

    2011-12-01

    Maps with 3D visual models are becoming a remarkable feature of 3D map services. High-resolution image data are obtained for the construction of 3D visualised models. The 3D map not only provides the capabilities of 3D measurement and knowledge mining, but also provides a virtual experience of places of interest, as demonstrated in Google Earth. Applications of 3D maps are expanding into the areas of architecture, property management, and urban environment monitoring. However, the reconstruction of high-quality 3D models is time consuming, and requires robust hardware and powerful software to handle the enormous amount of data. This is especially true for the automatic generation of 3D models and the representation of complicated surfaces, which still need improvements in visualisation techniques. The shortcoming of 3D model-based maps is their limited coverage of detail, since a user can only view and measure objects that are already modelled in the virtual environment. This paper proposes and demonstrates a realistic, image-based 3D map concept that enables geometric measurements and geo-location services. Additionally, image-based 3D maps provide more detailed information about the real world than 3D model-based maps. Image-based 3D maps use geo-referenced stereo images or panoramic images. The geometric relationships between objects in the images can be resolved from the geometric model of the stereo images. The panoramic function makes 3D maps more interactive for users and also creates an interesting immersive experience. Unmeasurable image-based 3D maps already exist, such as Google Street View, but they only provide virtual experiences in terms of photos; topographic and terrain attributes, such as shapes and heights, are omitted.
This paper also discusses the potential for using a low-cost land Mobile Mapping System (MMS) to implement realistic image-based 3D mapping, and evaluates the positioning accuracy that a measurable
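The geometric model of stereo images mentioned above rests on triangulation: for a rectified stereo pair, depth follows directly from image disparity. A minimal sketch of that relation, with made-up camera parameters:

```python
# Stereo triangulation for a rectified pair: Z = f * B / d, where f is
# the focal length in pixels, B the camera baseline in metres, and d the
# disparity in pixels. Values below are illustrative only.

def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Depth of a point from its disparity in a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return f_px * baseline_m / disparity_px

# Example: f = 1000 px, baseline = 0.5 m, disparity = 20 px.
z = depth_from_disparity(1000.0, 0.5, 20.0)   # 25.0 m
```

This is why a user of an image-based 3D map can measure distances between objects without those objects ever having been explicitly modelled.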

  9. Clinical value of real time 3D sonohysterography and 2D sonohysterography in comparison to hysteroscopy with subsequent histopathological examination in perimenopausal women with abnormal uterine bleeding.

    Science.gov (United States)

    Kowalczyk, Dariusz; Guzikowski, Wojciech; Więcek, Jacek; Sioma-Markowska, Urszula

    2012-01-01

    In many publications transvaginal ultrasound is regarded as the first step in diagnosing the cause of uterine bleeding in perimenopausal women. To improve the sensitivity and specificity of conventional ultrasound, physiological saline solution is administered to the uterine cavity and, after expansion of its walls, the interior of the uterine cavity is examined; this procedure is called 2D sonohysterography (SIS 2D). Ultrasound scanners that provide real-time 3D images make a spatial evaluation of the uterine cavity possible. The aim was to assess the clinical value of real-time 3D sonohysterography and 2D sonohysterography compared to hysteroscopy with histopathological examination in perimenopausal women. The study concerned a group of 97 perimenopausal women with abnormal uterine bleeding. In all of them, after standard transvaginal ultrasonography, a catheter was inserted into the uterine cavity. After expansion of the uterine walls by administering about 10 ml of 0.9% saline solution, the uterine cavity was examined by conventional sonohysterography. Then the 3D imaging mode was activated and the uterine interior was examined by real-time 3D ultrasonography. The ultrasound results were verified by hysteroscopy; the endometrial lesions were removed and underwent histopathological examination. In two cases the SIS examination was impossible because of cervical atresia. In the rest of the examined group, SIS 2D sensitivity and specificity came to 72% and 96%, respectively. In the SIS 3D group, sensitivity and specificity reached 83% and 99%, respectively. Adding SIS 3D, a minimally invasive method, to conventional sonohysterography improves the precision of diagnosis of endometrial pathology, yields a three-dimensional image of the uterine cavity, and enables examination of endometrial lesions. The diagnostic precision of this procedure is similar to the results achieved by hysteroscopy.
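The sensitivity and specificity figures quoted above follow from the standard 2×2 contingency counts against the histopathological reference. A small sketch of the computation; the counts used here are illustrative, since the abstract reports rates rather than raw counts:

```python
# Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP),
# evaluated against the histopathological gold standard.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from 2x2 contingency counts."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts chosen to reproduce the SIS 3D rates (83%, 99%).
sens, spec = sensitivity_specificity(tp=83, fn=17, tn=99, fp=1)
```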

  10. LandSIM3D: real-time 3D modelling of geographic data

    Directory of Open Access Journals (Sweden)

    Lambo Srl Lambo Srl

    2009-03-01

    Full Text Available LandSIM3D: real-time 3D modelling of geographic data. LandSIM3D allows an existing, geo-referenced landscape to be modelled in 3D in only a few hours, offering powerful tools for landscape analysis and understanding. 3D projects can then be inserted into the existing landscape with ease and precision, and the project alternatives and their impact can be visualized and studied in their immediate environment. The complex future evolution of the landscape can also be simulated, and the landscape model can be manipulated interactively and easily shared with colleagues. For these reasons, LandSIM3D differs from traditional 3D imagery solutions, which are normally reserved for computer graphics experts. For more information about LandSIM3D, go to www.landsim3d.com.

  11. Online 4D ultrasound guidance for real-time motion compensation by MLC tracking.

    Science.gov (United States)

    Ipsen, Svenja; Bruder, Ralf; O'Brien, Rick; Keall, Paul J; Schweikard, Achim; Poulsen, Per R

    2016-10-01

    With the trend in radiotherapy moving toward dose escalation and hypofractionation, the need for highly accurate targeting increases. While MLC tracking is already being used successfully for motion compensation of moving targets in the prostate, current real-time target localization methods rely on repeated x-ray imaging and implanted fiducial markers or electromagnetic transponders rather than direct target visualization. In contrast, ultrasound imaging can yield volumetric data in real time (3D + time = 4D) without ionizing radiation. The authors report the first results of combining these promising techniques, online 4D ultrasound guidance and MLC tracking, in a phantom. A software framework for real-time target localization was installed directly on a 4D ultrasound station and used to detect a 2 mm spherical lead marker inside a water tank. The lead marker was rigidly attached to a motion stage programmed to reproduce nine characteristic tumor trajectories chosen from large databases (five prostate, four lung). The 3D marker position detected by ultrasound was transferred to a computer program for MLC tracking at a rate of 21.3 Hz and used for real-time MLC aperture adaptation on a conventional linear accelerator. The tracking system latency was measured using sinusoidal trajectories and compensated for by applying a kernel density prediction algorithm for the lung traces. To measure geometric accuracy, static anterior and lateral conformal fields as well as a 358° arc with a 10 cm circular aperture were delivered for each trajectory. The two-dimensional (2D) geometric tracking error was measured as the difference between the marker position and the MLC aperture center in continuously acquired portal images. For dosimetric evaluation, VMAT treatment plans with high and low modulation were delivered to a biplanar diode array dosimeter using the same trajectories. Dose measurements with and without MLC tracking were compared to a static reference dose using 3%/3 mm and 2
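The geometric accuracy metric described above, the difference between the marker position and the MLC aperture center in each portal image, reduces to a per-frame 2D Euclidean distance. A minimal sketch with made-up coordinates:

```python
# Per-frame 2D geometric tracking error: Euclidean distance between the
# detected marker position and the MLC aperture centre. Data illustrative.
import math

def tracking_errors(marker_xy, aperture_xy):
    """Return the 2D error for each matched pair of positions (same units)."""
    return [math.hypot(mx - ax, my - ay)
            for (mx, my), (ax, ay) in zip(marker_xy, aperture_xy)]

marker = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
aperture = [(0.0, 0.3), (1.4, 1.0), (2.0, 0.0)]
errs = tracking_errors(marker, aperture)   # [0.3, 0.4, 0.0]
```

Summarizing such per-frame errors (e.g. as an RMS over a delivery) is what allows the tracked and non-tracked deliveries to be compared quantitatively.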

  12. Faster acquisition of laparoscopic skills in virtual reality with haptic feedback and 3D vision.

    Science.gov (United States)

    Hagelsteen, Kristine; Langegård, Anders; Lantz, Adam; Ekelund, Mikael; Anderberg, Magnus; Bergenfelz, Anders

    2017-10-01

    The study investigated whether the combination of 3D vision and haptic feedback in a virtual reality environment leads to more efficient learning of laparoscopic skills in novices. Twenty novices were allocated to two groups. All completed a training course in the LapSim® virtual reality trainer consisting of four tasks: 'instrument navigation', 'grasping', 'fine dissection' and 'suturing'. The study group performed with haptic feedback and 3D vision and the control group without. Before and after the LapSim® course, the participants' metrics were recorded while tying a laparoscopic knot in the 2D video box trainer Simball® Box. The study group completed the training course in 146 (100-291) minutes compared to 215 (175-489) minutes in the control group (p = .002). The number of attempts required to reach proficiency was significantly lower. The study group showed significantly faster learning of skills in three out of four individual tasks: instrument navigation, grasping and suturing. Using the Simball® Box, no difference in laparoscopic knot tying after the LapSim® course was noted when comparing the groups. Laparoscopic training in virtual reality with 3D vision and haptic feedback made training more time efficient and did not negatively affect later 2D video box performance.

  13. Real-Time Climate Simulations in the Interactive 3D Game Universe Sandbox²

    Science.gov (United States)

    Goldenson, N. L.

    2014-12-01

    Exploration in an open-ended computer game is an engaging way to explore climate and climate change. Everyone can explore physical models with real-time visualization in the educational simulator Universe Sandbox² (universesandbox.com/2), which includes basic climate simulations on planets. I have implemented a time-dependent, one-dimensional meridional heat transport energy balance model that runs, and is adjustable, in real time in the midst of a larger simulated system. Universe Sandbox² is based on the original game, at its core a gravity simulator, with other new physically-based content for stellar evolution and for handling collisions between bodies. Existing users are mostly science enthusiasts in informal settings. We believe that this is the first climate simulation to be implemented in a professionally developed computer game with modern 3D graphical output in real time. The type of simple climate model we've adopted helps us depict the seasonal cycle and the more drastic changes that come from changing the orbit or other external forcings. Users can alter the climate as the simulation is running by altering the star(s) in the simulation, dragging to change orbits and obliquity, adjusting the climate simulation parameters directly, or changing other properties, like CO2 concentration, that affect the model parameters in representative ways. Ongoing visuals of the expansion and contraction of sea ice and snow cover respond to the temperature calculations, and make it accessible to explore a variety of scenarios and intuitive to understand the output. Variables like temperature can also be graphed in real time. We balance computational constraints with the ability to capture the physical phenomena we wish to visualize, giving everyone access to a simple open-ended meridional energy balance climate simulation to explore and experiment with. The software lends itself to labs at a variety of levels about climate concepts including seasons, the Greenhouse effect
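The kind of one-dimensional meridional energy balance model described above can be sketched in a few lines. This is a generic textbook-style sketch, not the Universe Sandbox² implementation: zonal-mean temperature on a latitude grid relaxes under solar input, linearized outgoing longwave radiation (A + B·T), and crude diffusive meridional transport; all coefficient values are illustrative.

```python
# Minimal time-dependent 1D meridional energy balance model (EBM) sketch.
# Parameters A, B, D, C and the insolation shape are illustrative defaults.
import math

def step_ebm(T, lats, dt_years=0.01, S0=1361.0, albedo=0.3,
             A=210.0, B=2.0, D=0.55, C=10.0):
    """One explicit Euler step; T in deg C, C is heat capacity (W yr m^-2 K^-1)."""
    n = len(T)
    new_T = T[:]
    for i in range(n):
        x = math.sin(math.radians(lats[i]))
        # annual-mean insolation shape via the second Legendre polynomial
        solar = S0 / 4.0 * (1.0 - 0.48 * (3 * x * x - 1) / 2.0) * (1 - albedo)
        olr = A + B * T[i]                       # linearized longwave cooling
        left = T[max(i - 1, 0)]                  # clamped boundaries
        right = T[min(i + 1, n - 1)]
        transport = D * (left - 2 * T[i] + right)  # crude meridional diffusion
        new_T[i] = T[i] + dt_years / C * (solar - olr + transport)
    return new_T

lats = [-75, -45, -15, 15, 45, 75]
T = [0.0] * len(lats)
for _ in range(20000):                           # integrate to equilibrium
    T = step_ebm(T, lats)
```

Changing `S0`, `albedo` or `A` (a stand-in for greenhouse forcing) while stepping is exactly the sort of live adjustment the game exposes through its interface.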

  14. 3D TERRAIN VIRTUAL RECREATION APPLICATION FOR GARUDA WISNU KENCANA CULTURAL PARK

    Directory of Open Access Journals (Sweden)

    Gede Indra Raditya Martha

    2016-08-01

    Full Text Available The 3D Terrain Garuda Wisnu Kencana Cultural Park (GWK) application, or GWK 3DVR, is a virtual recreation application and one of the fastest ways to complete the prestigious GWK project virtually, after its construction was stalled by the Indonesian monetary crisis of 1997. The application was built by combining 3D objects in a virtual environment designed to resemble the actual GWK site according to the 2014 masterplan, supplemented by direct interviews with the GWK architects. GWK 3DVR requires fairly high hardware specifications, so it is equipped with a graphics quality settings feature. Users can virtually walk around the GWK complex using the navigation buttons and the first-person camera mode provided in the application. An immersive sense of reality can be experienced when the application is operated with a head-mounted display, which will become easier to obtain in the future, as virtual reality is now developing rapidly along with its popularity in multimedia and gaming. Although only virtual, the application is expected to visualize the finished form of GWK; overall, it runs well and displays the form and approximate layout of the still-unfinished GWK in virtual 3D. Keywords: Virtual recreation, first person point of view, Garuda Wisnu Kencana.

  15. Pixel multiplexing technique for real-time three-dimensional-imaging laser detection and ranging system using four linear-mode avalanche photodiodes

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Fan; Wang, Yuanqing, E-mail: yqwang@nju.edu.cn; Li, Fenfang [School of Electronic Science and Engineering, Nanjing University, Nanjing 210046 (China)

    2016-03-15

    The avalanche-photodiode-array (APD-array) laser detection and ranging (LADAR) system has been continually developed owing to its superiority of nonscanning operation, large field of view, high sensitivity, and high precision. However, how to achieve more efficient detection and better integration of the LADAR system for real-time three-dimensional (3D) imaging remains a problem. In this study, a novel LADAR system using four linear-mode APDs (LmAPDs) is developed for highly efficient detection by adopting a modulation and multiplexing technique. Furthermore, an automatic control system for the array LADAR system is proposed and designed by applying virtual instrumentation techniques. The control system provides four functions: synchronization of laser emission and the rotating platform, multi-channel synchronous data acquisition, real-time remote monitoring over Ethernet, and real-time signal processing and 3D visualization. The structure and principle of the complete system are described in the paper. The experimental results demonstrate that the LADAR system is capable of real-time 3D imaging on an omnidirectional rotating platform under the control of the virtual instrumentation system. The automatic imaging LADAR system utilized only four LmAPDs to achieve 256-pixel-per-frame detection by employing a 64-bit demodulator. Moreover, the lateral resolution is ∼15 cm and the range accuracy is ∼4 cm root-mean-square error at a distance of ∼40 m.
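Reading 256 pixels per frame from only four detectors requires each detector's output to carry many modulated pixel channels that software later separates. As a hedged illustration of one such modulation-and-multiplexing scheme (not the authors' actual 64-bit demodulator), the sketch below multiplexes pixels onto a single detector signal with orthogonal Walsh-Hadamard codes and recovers each pixel by correlation:

```python
# Code-division multiplexing sketch: each pixel's return is modulated with
# an orthogonal +/-1 Walsh-Hadamard code; the detector sees the sum, and
# correlating against each code isolates each pixel. 8 pixels for brevity.

def hadamard(n):
    """Sylvester-construction Walsh-Hadamard matrix (n a power of two)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def multiplex(pixel_values, H):
    """Detector signal per code chip: sum of code-modulated pixel returns."""
    n = len(pixel_values)
    return [sum(pixel_values[i] * H[i][t] for i in range(n)) for t in range(n)]

def demodulate(signal, H):
    """Correlate with each code; row orthogonality isolates each pixel."""
    n = len(signal)
    return [sum(signal[t] * H[i][t] for t in range(n)) / n for i in range(n)]

H = hadamard(8)
pixels = [3.0, 0.0, 1.5, 2.0, 0.5, 0.0, 4.0, 1.0]
recovered = demodulate(multiplex(pixels, H), H)
```

With 64 chips per detector, four detectors would yield 4 × 64 = 256 recovered pixels per frame, matching the pixel count quoted above.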

  16. Experiencing 3D interactions in virtual reality and augmented reality

    NARCIS (Netherlands)

    Martens, J.B.; Qi, W.; Aliakseyeu, D.; Kok, A.J.F.; Liere, van R.; Hoven, van den E.; Ijsselsteijn, W.; Kortuem, G.; Laerhoven, van K.; McClelland, I.; Perik, E.; Romero, N.; Ruyter, de B.

    2004-01-01

    We demonstrate basic 2D and 3D interactions in both a Virtual Reality (VR) system, called the Personal Space Station, and an Augmented Reality (AR) system, called the Visual Interaction Platform. Since both platforms use identical (optical) tracking hardware and software, and can run identical

  17. Automatic real time drilling support on Ekofisk utilizing eDrilling

    Energy Technology Data Exchange (ETDEWEB)

    Rommetveit, Rolv; Bjorkevoll, Knut S.; Halsey, George W.; Kluge, Roald; Molde, Dag Ove; Odegard, Sven Inge [SINTEF Petroleum Research, Trondheim (Norway); Herbert, Mike [HITEC Products Drilling, Stavanger (Norway); ConocoPhillips Norge, Stavanger (Norway)

    2008-07-01

    eDrilling is a new and innovative system for real-time drilling simulation, 3D visualization and control from a remote drilling expert centre. The concept uses all available real-time drilling data (surface and downhole) in combination with real-time modelling to monitor and optimize the drilling process. This information is used to visualize the wellbore in 3D in real time. eDrilling has been implemented in an Onshore Drilling Center in Norway. The system is composed of the following elements, some of which are unique and ground-breaking: an advanced and fast Integrated Drilling Simulator capable of modelling the different drilling sub-processes dynamically, as well as the interactions between these sub-processes, in real time; automatic quality checking and correction of drilling data, making them suitable for processing by computer models; a real-time supervision methodology for the drilling process using time-based drilling data as well as drilling models / the Integrated Drilling Simulator; a methodology for diagnosis of the drilling state and conditions, obtained by comparing model predictions with measured data; advisory technology for more optimal drilling; a Virtual Wellbore, with advanced visualization of the downhole process; and the data flow and computer infrastructure. eDrilling has been implemented in an Onshore Drilling Center for Ekofisk in Norway. The system is being used in drilling operations, and experiences from its use are presented. The supervision and diagnosis functionalities have been particularly useful, as the system has given early warnings of ECD and friction related problems. This paper presents the eDrilling system as well as experiences from its use. (author)
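The diagnosis element described above rests on comparing model predictions with measured data. A hedged sketch of that idea (not SINTEF's actual methodology): flag samples where the residual between the simulator's prediction and the measurement exceeds a tolerance, which is how an early warning like the ECD alarms mentioned above could be raised.

```python
# Residual-based diagnosis sketch: compare model predictions against
# measurements and flag samples whose residual exceeds a tolerance.
# The parameter, values and tolerance below are illustrative only.

def diagnose(predicted, measured, tolerance):
    """Indices of samples with |prediction - measurement| > tolerance."""
    return [i for i, (p, m) in enumerate(zip(predicted, measured))
            if abs(p - m) > tolerance]

# Example: simulated vs. measured equivalent circulating density (g/cm^3).
pred = [1.50, 1.51, 1.52, 1.53]
meas = [1.50, 1.52, 1.58, 1.60]
alarms = diagnose(pred, meas, tolerance=0.03)   # flags samples 2 and 3
```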

  18. SU-E-J-237: Real-Time 3D Anatomy Estimation From Undersampled MR Acquisitions

    Energy Technology Data Exchange (ETDEWEB)

    Glitzner, M; Lagendijk, J; Raaymakers, B; Crijns, S [University Medical Center Utrecht, Utrecht (Netherlands); Senneville, B Denis de [University Medical Center Utrecht, Utrecht (Netherlands); Mathematical Institute of Bordeaux, University of Bordeaux, Talence Cedex (France)

    2015-06-15

    Recent developments have made MRI-guided radiotherapy feasible. Performing simultaneous imaging during fractions can provide information about the changing anatomy by means of deformable image registration, for either immediate plan adaptations or accurate dose accumulation on the changing anatomy. In 3D MRI, however, acquisition time is considerable and scales with resolution; furthermore, intra-scan motion degrades image quality. In this work, we investigate the sensitivity of registration quality to image resolution: potentially, by employing spatial undersampling, the acquisition time of MR images for the purpose of deformable image registration can be reduced significantly. On a volunteer, 3D-MR imaging data were sampled in a navigator-gated manner, acquiring one axial volume (360×260×100 mm{sup 3}) per 3 s during the exhale phase. A T1-weighted FFE sequence was used with an acquired voxel size of (2.5 mm{sup 3}) for a duration of 17 min. Deformation vector fields were evaluated for 100 imaging cycles with respect to the initial anatomy using deformable image registration based on optical flow. Subsequently, the imaging data were downsampled by a factor of 2, simulating a fourfold acquisition speed. Displacements of the downsampled volumes were then calculated by the same process. In kidney-liver boundaries and the region around the stomach/duodenum, prominent organ drifts could be observed in both the original and the downsampled imaging data. An increasing displacement of approximately 2 mm was observed for the kidney, while an area around the stomach showed sudden displacements of 4 mm. Comparison of the motile points over time showed high reproducibility between the displacements of the high-resolution and downsampled volumes: over a 17 min acquisition, the componentwise RMS error was not more than 0.38 mm. Based on these synthetic experiments, 3D nonrigid image registration shows little sensitivity to image resolution and the displacement information is preserved even when halving the
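The 0.38 mm figure quoted above is a componentwise RMS error between two displacement fields. A minimal sketch of that metric over matched 3D displacement vectors, with made-up data:

```python
# Componentwise RMS error between two displacement fields: pool every
# vector component difference, then take the root mean square. Data below
# are illustrative, not the study's measurements.
import math

def componentwise_rms(disp_a, disp_b):
    """RMS over per-component differences of matched 3D displacement vectors."""
    diffs = [a - b for va, vb in zip(disp_a, disp_b) for a, b in zip(va, vb)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

full = [(2.0, 0.1, 0.0), (4.0, 0.2, 0.1)]   # full-resolution displacements (mm)
down = [(2.3, 0.1, 0.0), (3.7, 0.2, 0.1)]   # downsampled-volume displacements
rms = componentwise_rms(full, down)
```

Pooling components rather than vector magnitudes means a consistent bias along one axis is not masked by agreement along the others.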

  19. Professional Papervision3D

    CERN Document Server

    Lively, Michael

    2010-01-01

    Professional Papervision3D describes how Papervision3D works and how real-world applications are built, with a clear look at essential topics such as building websites and games, creating virtual tours, and Adobe's Flash 10. Readers learn important techniques through hands-on applications, and build on those skills as the book progresses. The companion website contains all code examples, video step-by-step explanations, and a COLLADA repository.

  20. An Innovative Direct-Interaction-Enabled Augmented-Reality 3D System

    Directory of Open Access Journals (Sweden)

    Sheng-Hsiung Chang

    2013-01-01

    Full Text Available Previous augmented-reality (AR applications have required users to observe the integration of real and virtual images on a display. This study proposes a novel concept regarding AR applications. By integrating AR techniques with marker identification, virtual-image output, imaging, and image-interaction processes, this study rendered virtual images that can interact with predefined markers in a real three-dimensional (3D environment.

  1. Visual appearance of a virtual upper limb modulates the temperature of the real hand: a thermal imaging study in Immersive Virtual Reality.

    Science.gov (United States)

    Tieri, Gaetano; Gioia, Annamaria; Scandola, Michele; Pavone, Enea F; Aglioti, Salvatore M

    2017-05-01

    To explore the link between Sense of Embodiment (SoE) over a virtual hand and physiological regulation of skin temperature, 24 healthy participants were immersed in virtual reality through a Head Mounted Display and had their real limb temperature recorded by means of a high-sensitivity infrared camera. Participants observed a virtual right upper limb (appearing either normally, or with the hand detached from the forearm) or limb-shaped non-corporeal control objects (continuous or discontinuous wooden blocks) from a first-person perspective. Subjective ratings of SoE were collected in each observation condition, as well as temperatures of the right and left hand, wrist and forearm. The observation of these complex, body and body-related virtual scenes resulted in increased real hand temperature when compared to a baseline condition in which a 3D virtual ball was presented. Crucially, observation of non-natural appearances of the virtual limb (discontinuous limb) and limb-shaped non-corporeal objects elicited a large increase in real hand temperature and low SoE. In contrast, observation of the full virtual limb caused high SoE and low temperature changes in the real hand with respect to the other conditions. Interestingly, the temperature difference across the different conditions occurred according to a topographic rule that included both hands. Our study sheds new light on the role of an external hand's visual appearance and suggests a tight link between higher-order bodily self-representations and topographic regulation of skin temperature. © 2017 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  2. Real Students and Virtual Field Trips

    Science.gov (United States)

    de Paor, D. G.; Whitmeyer, S. J.; Bailey, J. E.; Schott, R. C.; Treves, R.; Scientific Team Of Www. Digitalplanet. Org

    2010-12-01

    Field trips have always been one of the major attractions of geoscience education, distinguishing courses in geology, geography, oceanography, etc., from laboratory-bound sciences such as nuclear physics or biochemistry. However, traditional field trips have been limited to regions with educationally useful exposures and to student populations with the necessary free time and financial resources. Two-year or commuter colleges serving worker-students cannot realistically insist on completion of field assignments, and even well-endowed universities cannot take students to more than a handful of the best available field localities. Many instructors have attempted to bring the field into the classroom with the aid of technology. So-called Virtual Field Trips (VFTs) cannot replace the real experience for those who have it, but they are much better than nothing at all. We have been working to create transformative improvements in VFTs using four concepts: (i) self-drive virtual vehicles that students use to navigate the virtual globe under their own control; (ii) GigaPan outcrops that reveal successively more detailed views of key locations; (iii) virtual specimens scanned from real rocks, minerals, and fossils; and (iv) embedded assessment via logging of student actions. Students are represented by avatars of their own choosing and travel either together in a virtual field vehicle, or separately. When they approach virtual outcrops, virtual specimens become collectable and can be examined using JavaScript controls that change magnification and orientation. These instructional resources are being made available via a new server under the domain name www.DigitalPlanet.org. The server will log student progress and provide immediate feedback. We aim to disseminate these resources widely and welcome feedback from instructors and students.

  3. 3D Technology Selection for a Virtual Learning Environment by Blending ISO 9126 Standard and AHP

    Science.gov (United States)

    Cetin, Aydin; Guler, Inan

    2011-01-01

    Web3D presents many opportunities for learners in a virtual world or virtual environment over the web. This is a great opportunity for open-distance education institutions to benefit from Web3D technologies to create courses with interactive 3D materials. There are many open source and commercial products offering 3D technologies over the web…

  4. Implementation of virtual models from sheet metal forming simulation into physical 3D colour models using 3D printing

    Science.gov (United States)

    Junk, S.

    2016-08-01

    Today the methods of numerical simulation of sheet metal forming offer a great diversity of possibilities for optimization in product development and in process design. However, the results from simulation are only available as virtual models. Because no forming tools are available during the early stages of product development, physical models that could serve to represent the virtual results are lacking. Physical 3D models can be created using 3D printing and serve as an illustration, providing a better understanding of the simulation results. In this way, the results from the simulation can be made more “comprehensible” within a development team. This paper presents the possibilities of 3D colour printing with particular consideration of the requirements regarding the implementation of sheet metal forming simulation. Using concrete examples of sheet metal forming, the manufacturing of 3D colour models is expounded upon on the basis of simulation results.

  5. Interactive virtual simulation using a 3D computer graphics model for microvascular decompression surgery.

    Science.gov (United States)

    Oishi, Makoto; Fukuda, Masafumi; Hiraishi, Tetsuya; Yajima, Naoki; Sato, Yosuke; Fujii, Yukihiko

    2012-09-01

    The purpose of this paper is to report on the authors' advanced presurgical interactive virtual simulation technique using a 3D computer graphics model for microvascular decompression (MVD) surgery. The authors performed interactive virtual simulation prior to surgery in 26 patients with trigeminal neuralgia or hemifacial spasm. The 3D computer graphics models for interactive virtual simulation were composed of the brainstem, cerebellum, cranial nerves, vessels, and skull, individually created by image analysis, including segmentation, surface rendering, and data fusion, for data collected by 3-T MRI and 64-row multidetector CT systems. Interactive virtual simulation was performed by employing novel computer-aided design software with manipulation of a haptic device to imitate the surgical procedures of bone drilling and retraction of the cerebellum. The findings were compared with intraoperative findings. In all patients, interactive virtual simulation provided detailed and realistic surgical perspectives, of sufficient quality, representing the lateral suboccipital route. The causes of trigeminal neuralgia or hemifacial spasm determined by observing the 3D computer graphics models were concordant with those identified intraoperatively in 25 (96%) of 26 patients, a significantly higher rate than the 73% concordance (19 of 26 patients) obtained by review of 2D images alone. The 3D computer graphics model provided a realistic environment for performing virtual simulations prior to MVD surgery and enabled us to ascertain complex microsurgical anatomy.

  6. Pulsed cavitational ultrasound for non-invasive chordal cutting guided by real-time 3D echocardiography.

    Science.gov (United States)

    Villemain, Olivier; Kwiecinski, Wojciech; Bel, Alain; Robin, Justine; Bruneval, Patrick; Arnal, Bastien; Tanter, Mickael; Pernot, Mathieu; Messas, Emmanuel

    2016-10-01

    Basal chordae surgical section has been shown to be effective in reducing ischaemic mitral regurgitation (IMR). Achieving this section by non-invasive means can considerably decrease the morbidity of this intervention on already infarcted myocardium. We investigated in vitro and in vivo the feasibility and safety of pulsed cavitational focused ultrasound (histotripsy) for non-invasive chordal cutting guided by real-time 3D echocardiography. Experiments were performed on 12 sheep hearts, 5 in vitro on explanted sheep hearts and 7 in vivo on beating sheep hearts. In vitro, the mitral valve (MV) apparatus, including basal and marginal chordae, was removed and fixed on a holder in a water tank. High-intensity ultrasound pulses were emitted from the therapeutic device (1-MHz focused transducer, pulses of 8 µs duration, peak negative pressure of 17 MPa, repetition frequency of 100 Hz), placed at a distance of 64 mm, under 3D echocardiography guidance. In vivo, after sternotomy, the same therapeutic device was applied to the beating heart. We analysed MV coaptation and chordae by real-time 3D echocardiography before and after basal chordal cutting. After sacrifice, the MV apparatus was harvested for anatomical and histological post-mortem exploration to confirm the section of the chordae. In vitro, all chordae were completely cut after a mean procedure duration of 5.5 ± 2.5 min. The procedure duration was found to increase linearly with the chordae diameter. In vivo, the central basal chordae of the anterior leaflet were completely cut. The mean procedure duration was 20 ± 9 min (min = 14, max = 26). The sectioned chordae were visible on echocardiography, and MV coaptation remained normal with no significant mitral regurgitation. Anatomical and histological post-mortem explorations of the hearts confirmed the section of the chordae. Histotripsy guided by 3D echocardiography successfully cut MV chordae both in vitro and in vivo in the beating heart. We hope that this technique will

  7. Aurally Aided Visual Search Performance Comparing Virtual Audio Systems

    DEFF Research Database (Denmark)

    Larsen, Camilla Horne; Lauritsen, David Skødt; Larsen, Jacob Junker

    2014-01-01

    Due to increased computational power, reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between a HRTF enhanced audio system (3D) and an … with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger amounts of distractors. The applications of this research could fit virtual environments such as video games or virtual simulations.

  8. Aurally Aided Visual Search Performance Comparing Virtual Audio Systems

    DEFF Research Database (Denmark)

    Larsen, Camilla Horne; Lauritsen, David Skødt; Larsen, Jacob Junker

    2014-01-01

    Due to increased computational power, reproducing binaural hearing in real-time applications, through usage of head-related transfer functions (HRTFs), is now possible. This paper addresses the differences in aurally-aided visual search performance between an HRTF enhanced audio system (3D) and an … with white dots. The results indicate that 3D audio yields faster search latencies than panning audio, especially with larger amounts of distractors. The applications of this research could fit virtual environments such as video games or virtual simulations.

  9. Virtual rough samples to test 3D nanometer-scale scanning electron microscopy stereo photogrammetry.

    Science.gov (United States)

    Villarrubia, J S; Tondare, V N; Vladár, A E

    2016-01-01

    The combination of scanning electron microscopy for high spatial resolution, images from multiple angles to provide 3D information, and commercially available stereo photogrammetry software for 3D reconstruction offers promise for nanometer-scale dimensional metrology in 3D. A method is described to test 3D photogrammetry software by the use of virtual samples: mathematical samples from which simulated images are made for use as inputs to the software under test. The virtual sample is constructed by wrapping a rough skin with any desired power spectral density around a smooth, near-trapezoidal line with rounded top corners. Reconstruction is performed with images simulated from different angular viewpoints. The software's reconstructed 3D model is then compared to the known geometry of the virtual sample. Three commercial photogrammetry software packages were tested. Two of them produced results for line height and width that were within about 1 nm of the correct values. All of the packages exhibited some difficulty in reconstructing details of the surface roughness.
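The virtual sample's rough skin is specified by its power spectral density. As an illustration of that general idea (a sketch, not the authors' actual code; all function names here are mine), a 1-D random rough profile with a prescribed PSD can be generated by assigning random phases to spectral amplitudes derived from the PSD and inverse-transforming:

```python
import numpy as np

def rough_profile(n, dx, psd, rng=None):
    """Random 1-D rough profile whose expected power spectral
    density follows psd(f). Illustrative sketch only."""
    rng = np.random.default_rng(rng)
    f = np.fft.rfftfreq(n, d=dx)               # spatial frequencies
    amp = np.sqrt(psd(f))                      # amplitude from PSD
    amp[0] = 0.0                               # drop DC -> zero-mean profile
    phase = rng.uniform(0.0, 2 * np.pi, f.size)  # random phases
    return np.fft.irfft(amp * np.exp(1j * phase), n=n)

def psd_powerlaw(f):
    # Power-law ("fractal-like") PSD, guarding the f = 0 singularity.
    out = np.zeros_like(f)
    nz = f > 0
    out[nz] = f[nz] ** -2.0
    return out

z = rough_profile(4096, dx=1.0, psd=psd_powerlaw, rng=0)
print(z.mean(), z.std())
```

Because the DC component is zeroed, the profile is zero-mean by construction; the roughness amplitude is set entirely by the chosen PSD.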

  10. Foreign Language Vocabulary Development through Activities in an Online 3D Environment

    Science.gov (United States)

    Milton, James; Jonsen, Sunniva; Hirst, Steven; Lindenburn, Sharn

    2012-01-01

    On-line virtual 3D worlds offer the opportunity for users to interact in real time with native speakers of the language they are learning. In principle, this ought to be of great benefit to learners, mimicking the opportunity for immersion that real-life travel to a foreign country offers. However, we have very little research to show whether this is…

  11. Integration of the virtual 3D model of a control system with the virtual controller

    Science.gov (United States)

    Herbuś, K.; Ociepka, P.

    2015-11-01

    Nowadays the design process includes simulation analysis of the different components of a constructed object, which creates the need to integrate different virtual objects in order to simulate the whole investigated technical system. The paper presents issues related to the integration of a virtual 3D model of a chosen control system with a virtual controller. The goal of the integration is to verify the operation of the modelled object in accordance with the established control program. The object of the simulation work is the drive system of a tunneling machine for trenchless work. In the first stage of the work, an interactive visualization of the functioning of the 3D virtual model of the tunneling machine was created using software of the VR (Virtual Reality) class. In this interactive application, procedures were created for controlling the drive system of translatory motion, the drive system of rotary motion, and the drive system of a manipulator, together with a procedure for turning the crushing head, mounted on the last element of the manipulator, on and off. Procedures were also established for receiving input data from external software via dynamic data exchange (DDE), which allow the actuators of the particular control systems of the considered machine to be controlled. In the next stage of the work, the program for the virtual controller was created in the ladder diagram (LD) language, on the basis of the adopted work cycle of the tunneling machine. The element integrating the virtual model of the tunneling machine with the virtual controller is an application written in a high-level language (Visual Basic), containing procedures responsible for collecting data from the virtual controller running in simulation mode and transferring them to the interactive application, in which the operation of the machine is verified.
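The record describes a bridge application that polls the simulated controller and forwards its outputs to the VR visualization. The authors used Visual Basic over DDE; the following Python sketch only mirrors that mediator pattern in miniature. All class names and command strings are hypothetical, not taken from the paper:

```python
# Toy mediator between a simulated (virtual) controller and a 3D
# visualization, mirroring the poll-and-forward role of the paper's
# Visual Basic / DDE bridge. All names here are hypothetical.

class VirtualController:
    """Stand-in for a soft PLC executing a ladder-diagram program."""
    def __init__(self):
        self.step = 0

    def scan(self):
        # One scan cycle: step through the machine's work cycle.
        commands = ["advance", "rotate", "manipulate", "crush_on", "crush_off"]
        cmd = commands[self.step % len(commands)]
        self.step += 1
        return {"actuator_command": cmd}

class Visualization:
    """Stand-in for the interactive VR application."""
    def __init__(self):
        self.log = []

    def apply(self, outputs):
        self.log.append(outputs["actuator_command"])

def run_bridge(controller, viz, cycles):
    # The mediator: poll controller outputs, forward to the visualization.
    for _ in range(cycles):
        viz.apply(controller.scan())

viz = Visualization()
run_bridge(VirtualController(), viz, 5)
print(viz.log)  # → ['advance', 'rotate', 'manipulate', 'crush_on', 'crush_off']
```

The design point is decoupling: the controller runs its scan cycle unaware of the visualization, and the bridge translates controller outputs into actuator commands, which is what lets the same LD program later drive a real machine.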

  12. Secure environment for real-time tele-collaboration on virtual simulation of radiation treatment planning.

    Science.gov (United States)

    Ntasis, Efthymios; Maniatis, Theofanis A; Nikita, Konstantina S

    2003-01-01

    A secure framework is described for real-time tele-collaboration on the Virtual Simulation procedure of Radiation Treatment Planning. An integrated approach is followed, clustering the security issues faced by the system into organizational issues, security issues over the LAN, and security issues over the LAN-to-LAN connection. The design and the implementation of the security services are performed according to the identified security requirements, along with the need for real-time communication between the collaborating health care professionals. A detailed description of the implementation is given, presenting a solution which can be directly tailored to other tele-collaboration services in the field of health care. The pilot study of the proposed security components proves the feasibility of the secure environment and its consistency with the high-performance demands of the application.

  13. Single minimum incision endoscopic radical nephrectomy for renal tumors with preoperative virtual navigation using 3D-CT volume-rendering

    Directory of Open Access Journals (Sweden)

    Shioyama Yasukazu

    2010-04-01

    Background: Single minimum incision endoscopic surgery (MIES) involves the use of a flexible high-definition laparoscope to facilitate open surgery. We reviewed our method of radical nephrectomy for renal tumors, which is single MIES combined with preoperative virtual surgery employing three-dimensional CT images reconstructed by the volume rendering method (3D-CT images), in order to safely and appropriately approach the renal hilar vessels. We also assessed the usefulness of 3D-CT images. Methods: Radical nephrectomy was done by single MIES via the translumbar approach in 80 consecutive patients. We performed the initial 20 MIES nephrectomies without preoperative 3D-CT images and the subsequent 60 MIES nephrectomies with preoperative 3D-CT images for evaluation of the renal hilar vessels and the relation of each tumor to the surrounding structures. On the basis of the 3D information, preoperative virtual surgery was performed with a computer. Results: Single MIES nephrectomy was successful in all patients. In the 60 patients who underwent 3D-CT, the number of renal arteries and veins corresponded exactly with the preoperative 3D-CT data (100% sensitivity and 100% specificity). These 60 nephrectomies were completed with a shorter operating time and smaller blood loss than the initial 20 nephrectomies. Conclusions: Single MIES radical nephrectomy combined with 3D-CT and virtual surgery achieved a shorter operating time and less blood loss, possibly due to safer and easier handling of the renal hilar vessels.

  14. 3D tumor localization through real-time volumetric x-ray imaging for lung cancer radiotherapy.

    Science.gov (United States)

    Li, Ruijiang; Lewis, John H; Jia, Xun; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Song, William Y; Jiang, Steve B

    2011-05-01

    To evaluate an algorithm for real-time 3D tumor localization from a single x-ray projection image for lung cancer radiotherapy. Recently, we have developed an algorithm for reconstructing volumetric images and extracting 3D tumor motion information from a single x-ray projection [Li et al., Med. Phys. 37, 2822-2826 (2010)]. We have demonstrated its feasibility using a digital respiratory phantom with regular breathing patterns. In this work, we present a detailed description and a comprehensive evaluation of the improved algorithm. The algorithm was improved by incorporating respiratory motion prediction. The accuracy and efficiency of using this algorithm for 3D tumor localization were then evaluated on (1) a digital respiratory phantom, (2) a physical respiratory phantom, and (3) five lung cancer patients. These evaluation cases include both regular and irregular breathing patterns that are different from the training dataset. For the digital respiratory phantom with regular and irregular breathing, the average 3D tumor localization error is less than 1 mm and does not seem to be affected by amplitude change, period change, or baseline shift. On an NVIDIA Tesla C1060 graphics processing unit (GPU) card, the average computation time for 3D tumor localization from each projection ranges between 0.19 and 0.26 s, for both regular and irregular breathing, which is about a 10% improvement over previously reported results. For the physical respiratory phantom, an average tumor localization error below 1 mm was achieved with an average computation time of 0.13 and 0.16 s on the same GPU card, for regular and irregular breathing, respectively. For the five lung cancer patients, the average tumor localization error is below 2 mm in both the axial and tangential directions. The average computation time on the same GPU card ranges between 0.26 and 0.34 s. Through a comprehensive evaluation of our algorithm, we have established its accuracy in 3D tumor localization.

  15. Real-Time and High-Resolution 3D Face Measurement via a Smart Active Optical Sensor.

    Science.gov (United States)

    You, Yong; Shen, Yang; Zhang, Guocai; Xing, Xiuwen

    2017-03-31

    The 3D measuring range and accuracy in traditional active optical sensing, such as Fourier transform profilometry, are influenced by the zero frequency of the captured patterns. The phase-shifting technique is commonly applied to remove the zero component. However, this phase-shifting method must capture several fringe patterns with phase differences, thereby degrading the real-time performance. This study introduces a smart active optical sensor in which a composite pattern is utilized. The composite pattern efficiently combines several phase-shifting fringes on separate carrier frequencies, so the zero frequency can be removed using only one pattern. Model face reconstruction and human face measurement were employed to study the validity and feasibility of this method. Results show no distinct decrease in precision for the novel method compared with the traditional phase-shifting method. The texture mapping technique was utilized to reconstruct a natural-appearance 3D digital face.
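For background, the conventional N-step phase-shifting retrieval that the composite pattern compresses into a single shot can be sketched as follows. This is the standard textbook formula applied to synthetic 1-D data, not the authors' code; the fringe model assumed is I_n = A + B·cos(φ + 2πn/N):

```python
import numpy as np

def phase_shift_retrieve(images):
    """Recover the wrapped phase from N equally phase-shifted fringe
    images I_n = A + B*cos(phi + 2*pi*n/N), N >= 3."""
    n = len(images)
    deltas = 2 * np.pi * np.arange(n) / n
    # Sum I_n*sin(d_n) = -(N/2)*B*sin(phi); Sum I_n*cos(d_n) = (N/2)*B*cos(phi)
    num = sum(I * np.sin(d) for I, d in zip(images, deltas))
    den = sum(I * np.cos(d) for I, d in zip(images, deltas))
    return np.arctan2(-num, den)   # wrapped phase in (-pi, pi]

# Synthetic check: a known phase ramp, four shifted patterns.
x = np.linspace(0.0, 1.0, 256)
phi = 2.0 * x                       # "true" phase, inside (-pi, pi]
imgs = [5 + 2 * np.cos(phi + 2 * np.pi * k / 4) for k in range(4)]
rec = phase_shift_retrieve(imgs)
print(np.max(np.abs(rec - phi)))
```

The DC term A (the "zero frequency" the paper wants to remove) cancels in both sums, which is exactly why N patterns are normally needed; the composite-pattern sensor achieves the same cancellation by separating the shifted fringes in the frequency domain of a single capture.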

  16. SU-C-201-04: Noise and Temporal Resolution in a Near Real-Time 3D Dosimeter

    Energy Technology Data Exchange (ETDEWEB)

    Rilling, M [Department of physics, engineering physics and optics, Universite Laval, Quebec City, QC (Canada); Centre de recherche sur le cancer, Universite Laval, Quebec City, QC (Canada); Radiation oncology department, CHU de Quebec, Quebec City, QC (Canada); Center for optics, photonics and lasers, Universite Laval, Quebec City, Quebec (Canada); Goulet, M [Radiation oncology department, CHU de Quebec, Quebec City, QC (Canada); Beaulieu, L; Archambault, L [Department of physics, engineering physics and optics, Universite Laval, Quebec City, QC (Canada); Centre de recherche sur le cancer, Universite Laval, Quebec City, QC (Canada); Radiation oncology department, CHU de Quebec, Quebec City, QC (Canada); Thibault, S [Center for optics, photonics and lasers, Universite Laval, Quebec City, Quebec (Canada)

    2016-06-15

    Purpose: To characterize the performance of a real-time three-dimensional scintillation dosimeter in terms of signal-to-noise ratio (SNR) and temporal resolution of 3D dose measurements. This study quantifies its efficiency in measuring low dose levels characteristic of EBRT dynamic treatments, and in reproducing field profiles for varying multileaf collimator (MLC) speeds. Methods: The dosimeter prototype uses a plenoptic camera to acquire continuous images of the light field emitted by a 10×10×10 cm³ plastic scintillator. Using EPID acquisitions, ray tracing-based iterative tomographic algorithms allow millimeter-sized reconstruction of relative 3D dose distributions. Measurements were taken at 6 MV, 400 MU/min with the scintillator centered at the isocenter, first receiving doses from 1.4 to 30.6 cGy. Dynamic measurements were then performed by closing half of the MLCs at speeds of 0.67 to 2.5 cm/s, at 0° and 90° collimator angles. A reference static half-field was obtained for measured profile comparison. Results: The SNR steadily increases as a function of dose and reaches a clinically adequate plateau of 80 at 10 cGy. Below this, the decrease in collected light and the increase in pixel noise diminish the SNR; nonetheless, the EPID acquisitions and the voxel correlation employed in the reconstruction algorithms result in suitable SNR values (>75) even at low doses. For dynamic measurements at varying MLC speeds, central relative dose profiles are characterized by gradients at %D50 of 8.48 to 22.7 %/mm. These values converge towards the 32.8 %/mm gradient measured for the static reference field profile, but are limited by the dosimeter's current acquisition rate of 1 Hz. Conclusion: This study emphasizes the efficiency of the 3D dose distribution reconstructions, while identifying limits of the current prototype's temporal resolution in terms of dynamic EBRT parameters. This work paves the way for providing an optimized, second

  17. SU-C-201-04: Noise and Temporal Resolution in a Near Real-Time 3D Dosimeter

    International Nuclear Information System (INIS)

    Rilling, M; Goulet, M; Beaulieu, L; Archambault, L; Thibault, S

    2016-01-01

    Purpose: To characterize the performance of a real-time three-dimensional scintillation dosimeter in terms of signal-to-noise ratio (SNR) and temporal resolution of 3D dose measurements. This study quantifies its efficiency in measuring low dose levels characteristic of EBRT dynamic treatments, and in reproducing field profiles for varying multileaf collimator (MLC) speeds. Methods: The dosimeter prototype uses a plenoptic camera to acquire continuous images of the light field emitted by a 10×10×10 cm³ plastic scintillator. Using EPID acquisitions, ray tracing-based iterative tomographic algorithms allow millimeter-sized reconstruction of relative 3D dose distributions. Measurements were taken at 6 MV, 400 MU/min with the scintillator centered at the isocenter, first receiving doses from 1.4 to 30.6 cGy. Dynamic measurements were then performed by closing half of the MLCs at speeds of 0.67 to 2.5 cm/s, at 0° and 90° collimator angles. A reference static half-field was obtained for measured profile comparison. Results: The SNR steadily increases as a function of dose and reaches a clinically adequate plateau of 80 at 10 cGy. Below this, the decrease in collected light and the increase in pixel noise diminish the SNR; nonetheless, the EPID acquisitions and the voxel correlation employed in the reconstruction algorithms result in suitable SNR values (>75) even at low doses. For dynamic measurements at varying MLC speeds, central relative dose profiles are characterized by gradients at %D50 of 8.48 to 22.7 %/mm. These values converge towards the 32.8 %/mm gradient measured for the static reference field profile, but are limited by the dosimeter's current acquisition rate of 1 Hz. Conclusion: This study emphasizes the efficiency of the 3D dose distribution reconstructions, while identifying limits of the current prototype's temporal resolution in terms of dynamic EBRT parameters. This work paves the way for providing an optimized, second-generation real-time 3D

  18. Evaluation of Real-Time and Off-Line Performance of the Virtual Seismologist Earthquake Early Warning Algorithm in Switzerland

    Science.gov (United States)

    Behr, Yannik; Clinton, John; Cua, Georgia; Cauzzi, Carlo; Heimers, Stefan; Kästli, Philipp; Becker, Jan; Heaton, Thomas

    2013-04-01

    The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an on-going effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland. As part of the on-going EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS installations in southern Italy, western Greece, Istanbul, Romania, and Iceland are planned or underway. In Switzerland, VS has been running in real-time on stations monitored by the Swiss Seismological Service (including stations from Austria, France, Germany, and Italy) since 2010. While originally based on the Earthworm system, it has recently been ported to the SeisComp3 system. Besides taking advantage of SeisComp3's picking and phase association capabilities, this port greatly simplifies the potential installation of VS at other networks, in particular those already running SeisComp3. We present the architecture of the new SeisComp3-based version and compare its results from off-line tests with the real-time performance of VS in Switzerland over the past two years. We further show that the empirical relationships used by VS to estimate magnitudes and ground motion, originally derived from southern California data, perform well in Switzerland.

  19. M3D (Media 3D): a new programming language for web-based virtual reality in E-Learning and Edutainment

    Science.gov (United States)

    Chakaveh, Sepideh; Skaley, Detlef; Laine, Patricia; Haeger, Ralf; Maad, Soha

    2003-01-01

    Today, interactive multimedia educational systems are well established, as they prove to be useful instruments for enhancing one's learning capabilities. Hitherto, the main difficulty with almost all E-Learning systems has lain in the rich-media implementation techniques: each system had to be created individually, since reapplying the media, be it only a part or the whole content, was not directly possible, as everything had to be applied mechanically, i.e. by hand. This makes E-Learning systems exceedingly expensive to generate, in terms of both time and money. Media-3D or M3D is a new platform-independent programming language, developed at the Fraunhofer Institute for Media Communication, to enable visualisation and simulation of E-Learning multimedia content. M3D is an XML-based language, which is capable of distinguishing the 3D models from the 3D scenes, as well as handling provisions for animations within the programme. Here we give a technical account of the M3D programming language and briefly describe two specific application scenarios where M3D is applied to create virtual reality E-Learning content for training of technical personnel.

  20. 2D to 3D conversion implemented in different hardware

    Science.gov (United States)

    Ramos-Diaz, Eduardo; Gonzalez-Huitron, Victor; Ponomaryov, Volodymyr I.; Hernandez-Fragoso, Araceli

    2015-02-01

    Conversion of available 2D data for release as 3D content is a hot topic for providers and, in general, for the success of 3D applications. It relies entirely on virtual view synthesis of a second view given the original 2D video. Disparity map (DM) estimation is a central task in 3D generation, but rendering novel images precisely remains a very difficult problem. Different approaches to DM reconstruction exist, among them manual and semiautomatic methods that can produce high-quality DMs, but they are very time consuming and computationally expensive. In this paper, several hardware implementations of designed frameworks for automatic 3D color video generation based on a 2D real video sequence are proposed. The novel framework includes simultaneous processing of stereo pairs using the following blocks: CIE L*a*b* color space conversion; stereo matching via a pyramidal scheme; color segmentation by k-means on the a*b* color plane; DM estimation using stereo matching between left and right images (or neighboring frames in a video); adaptive post-filtering; and finally, anaglyph 3D scene generation. The technique has been implemented on a DSP TMS320DM648, in Matlab's Simulink module on a PC with Windows 7, and on a graphics card (NVIDIA Quadro K2000), demonstrating that the proposed approach can be applied in real-time processing mode. The processing times, mean Structural Similarity Index Measure (SSIM) values, and Bad Matching Pixels (B) values for the different hardware implementations (GPU, single CPU, and DSP) are reported in this paper.
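The final block of the framework, anaglyph 3D scene generation, can be illustrated with the simplest red-cyan scheme: the red channel comes from the left view and the green/blue channels from the right. This is the textbook construction, not necessarily the authors' exact variant:

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Compose a red-cyan anaglyph from a stereo pair (H, W, RGB):
    red channel from the left view, green and blue from the right."""
    out = right.copy()
    out[..., 0] = left[..., 0]
    return out

# Tiny synthetic stereo pair with a one-pixel horizontal disparity.
left = np.zeros((4, 8, 3), dtype=np.uint8)
right = np.zeros((4, 8, 3), dtype=np.uint8)
left[:, 2:4] = 255    # white bar at x = 2..3 in the left view
right[:, 3:5] = 255   # same bar shifted right by one pixel
ana = red_cyan_anaglyph(left, right)
print(ana[0, 2], ana[0, 4])  # → [255 0 0] [0 255 255]
```

Viewed through red-cyan glasses, each eye sees only its own view of the bar, and the one-pixel disparity is perceived as depth; production pipelines typically add channel mixing to reduce retinal rivalry.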

  1. 3-D Sound for Virtual Reality and Multimedia

    Science.gov (United States)

    Begault, Durand R.; Trejo, Leonard J. (Technical Monitor)

    2000-01-01

    Technology and applications for the rendering of virtual acoustic spaces are reviewed. Chapter 1 deals with acoustics and psychoacoustics. Chapters 2 and 3 cover cues to spatial hearing and review psychoacoustic literature. Chapter 4 covers signal processing and systems overviews of 3-D sound systems. Chapter 5 covers applications to computer workstations, communication systems, aeronautics and space, and sonic arts. Chapter 6 lists resources. This TM is a reprint of the 1994 book from Academic Press.

  2. Distance Learning for Students with Special Needs through 3D Virtual Learning

    Science.gov (United States)

    Laffey, James M.; Stichter, Janine; Galyen, Krista

    2014-01-01

    iSocial is a 3D Virtual Learning Environment (3D VLE) to develop social competency for students who have been identified with High-Functioning Autism Spectrum Disorders. The motivation for developing a 3D VLE is to improve access to special needs curriculum for students who live in rural or small school districts. The paper first describes a…

  3. Evaluation of Real-Time Performance of the Virtual Seismologist Earthquake Early Warning Algorithm in Switzerland and California

    Science.gov (United States)

    Behr, Y.; Cua, G. B.; Clinton, J. F.; Heaton, T. H.

    2012-12-01

    The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an on-going effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms - the other two being the ElarmS (Allen and Kanamori, 2003) and On-Site (Wu and Kanamori, 2005; Boese et al., 2008) algorithms - that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland. As part of the on-going EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS will be installed and tested at other European networks. VS has been running in real-time on stations of the Southern California Seismic Network (SCSN) since July 2008, and on stations of the Berkeley Digital Seismic Network (BDSN) and the USGS Menlo Park strong motion network in northern California since February 2009. In Switzerland, VS has been running in real-time on stations monitored by the Swiss Seismological Service (including stations from Austria, France, Germany, and Italy) since 2010. We present summaries of the real-time performance of VS in Switzerland and California over the past two and three years, respectively. The empirical relationships used by VS to estimate magnitudes and ground motion, originally derived from southern California data, are demonstrated to perform well in northern California and Switzerland. Real-time implementation and off-line testing in Europe will potentially be extended to southern Italy, western Greece, Istanbul, Romania, and Iceland. Integration of the VS algorithm into both the CISN Advanced

  4. Elderly Healthcare Monitoring Using an Avatar-Based 3D Virtual Environment

    Directory of Open Access Journals (Sweden)

    Matti Pouke

    2013-12-01

    Full Text Available Homecare systems for elderly people are becoming increasingly important for both economic reasons and patients’ preferences. Sensor-based surveillance technologies are an expected future trend, but research so far has devoted little attention to the User Interface (UI) design of such systems and the user-centric design approach. In this paper, we explore the possibilities of an avatar-based 3D visualization system, which exploits wearable sensors and human activity simulations. We present a technical prototype and the evaluation of alternative concept designs for UIs based on a 3D virtual world. The evaluation was conducted with homecare providers through focus groups and an online survey. Our results show firstly that systems taking advantage of 3D virtual world visualization techniques have potential, especially due to their privacy-preserving and simplified information presentation style, and secondly that simple representations and glanceability should be emphasized in the design. The identified key use cases highlight that avatar-based 3D presentations can be helpful if they provide an overview as well as details on demand.

  5. An inkjet-printed buoyant 3-D lagrangian sensor for real-time flood monitoring

    KAUST Repository

    Farooqui, Muhammad Fahad

    2014-06-01

    A 3-D (cube-shaped) Lagrangian sensor, inkjet-printed on a paper substrate, is presented for the first time. The sensor comprises a transmitter chip with a microcontroller completely embedded in the cube, along with a 1.5λ0 dipole that is uniquely implemented on all the faces of the cube to achieve a near-isotropic radiation pattern. The sensor has been designed to operate both in air and in water (half immersed) for real-time flood monitoring. The sensor weighs 1.8 g and measures 13 mm × 13 mm × 13 mm, and each side of the cube corresponds to only 0.1λ0 (at 2.4 GHz). The printed circuit board is also inkjet-printed on a paper substrate to make the sensor lightweight and buoyant. Issues related to the bending of inkjet-printed tracks and integration of the transmitter chip in the cube are discussed. The Lagrangian sensor is designed to operate in a wireless sensor network, and field tests have confirmed that it can communicate up to a distance of 100 m while in the air and up to 50 m while half immersed in water. © 1963-2012 IEEE.
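A few of the figures quoted above can be sanity-checked from first principles: the free-space wavelength at 2.4 GHz, the cube's electrical size, and the idealized free-space path loss over the reported 100 m range. This is a rough sketch; the real link budget also depends on antenna gains, transmit power, and the water interface, which are ignored here.

```python
import math

C = 299_792_458.0            # speed of light, m/s
f = 2.4e9                    # operating frequency, Hz
lam = C / f                  # wavelength, ~0.125 m

print(round(0.1 * lam * 1000, 1))   # 0.1*lambda in mm, ~12.5 (cube side ~13 mm)

def fspl_db(d_m, f_hz):
    """Free-space path loss in dB (from the Friis transmission equation)."""
    return (20 * math.log10(d_m) + 20 * math.log10(f_hz)
            + 20 * math.log10(4 * math.pi / C))

print(round(fspl_db(100.0, f), 1))  # ~80.1 dB over 100 m in air
```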

  6. Novel System for Real-Time Integration of 3-D Echocardiography and Fluoroscopy for Image-Guided Cardiac Interventions: Preclinical Validation and Clinical Feasibility Evaluation

    Science.gov (United States)

    Housden, R. James; Ma, Yingliang; Rajani, Ronak; Gao, Gang; Nijhof, Niels; Cathier, Pascal; Bullens, Roland; Gijsbers, Geert; Parish, Victoria; Kapetanakis, Stamatis; Hancock, Jane; Rinaldi, C. Aldo; Cooklin, Michael; Gill, Jaswinder; Thomas, Martyn; O'Neill, Mark D.; Razavi, Reza; Rhode, Kawal S.

    2014-01-01

    Real-time imaging is required to guide minimally invasive catheter-based cardiac interventions. While transesophageal echocardiography allows for high-quality visualization of cardiac anatomy, X-ray fluoroscopy provides excellent visualization of devices. We have developed a novel image fusion system that allows real-time integration of 3-D echocardiography and X-ray fluoroscopy. The system was validated in the following two stages: 1) preclinical, to determine function and validate accuracy; and 2) in the clinical setting, to assess clinical workflow feasibility and determine overall system accuracy. In the preclinical phase, the system was assessed using both phantom and porcine experimental studies. Median 2-D projection errors of 4.5 and 3.3 mm were found for the phantom and porcine studies, respectively. The clinical phase focused on extending the use of the system to interventions in patients undergoing either atrial fibrillation catheter ablation (CA) or transcatheter aortic valve implantation (TAVI). Eleven patients were studied, with nine in the CA group and two in the TAVI group. Successful real-time view synchronization was achieved in all cases, with a calculated median distance error of 2.2 mm in the CA group and 3.4 mm in the TAVI group. A standard clinical workflow was established using the image fusion system. These pilot data confirm the technical feasibility of accurate real-time echo-fluoroscopic image overlay in clinical practice, which may be a useful adjunct for real-time guidance during interventional cardiac procedures. PMID:27170872
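The "2-D projection errors" reported above can be understood as in-plane distances after projecting 3-D positions through the fluoroscopy geometry. A minimal sketch under an ideal pinhole model; the focal length and point coordinates are illustrative assumptions, not the system's calibrated geometry.

```python
import numpy as np

# Idealized pinhole projection onto the detector plane (units: mm).
def project(p, f=1000.0):
    x, y, z = p
    return np.array([f * x / z, f * y / z])

truth = np.array([10.0, -5.0, 800.0])            # target in source frame
estimate = truth + np.array([2.0, 1.0, 30.0])    # a 3-D registration error

# The 2-D projection error is the in-plane distance between the two
# projected points; note it under-represents the depth component.
err_2d = np.linalg.norm(project(truth) - project(estimate))
print(round(float(err_2d), 2))  # ~2.42 mm, despite a 30 mm depth error
```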

  7. High-precision real-time 3D shape measurement based on a quad-camera system

    Science.gov (United States)

    Tao, Tianyang; Chen, Qian; Feng, Shijie; Hu, Yan; Zhang, Minliang; Zuo, Chao

    2018-01-01

    Phase-shifting profilometry (PSP) based 3D shape measurement is well established in various applications due to its high accuracy, simple implementation, and robustness to environmental illumination and surface texture. In PSP, higher depth resolution generally requires higher fringe density in the projected patterns which, in turn, leads to severe phase ambiguities that must be resolved with additional information from phase coding and/or geometric constraints. However, in order to guarantee the reliability of phase unwrapping, available techniques are usually accompanied by an increased number of patterns, reduced fringe amplitude, and complicated post-processing algorithms. In this work, we demonstrate that by using a quad-camera multi-view fringe projection system and carefully arranging the relative spatial positions between the cameras and the projector, it becomes possible to completely eliminate the phase ambiguities in conventional three-step PSP patterns with high fringe density without projecting any additional patterns or embedding any auxiliary signals. Benefiting from the position-optimized quad-camera system, stereo phase unwrapping can be efficiently and reliably performed by flexible phase consistency checks. Besides, the redundant information of multiple phase consistency checks is fully used through a weighted phase difference scheme to further enhance the reliability of phase unwrapping. This paper explains the 3D measurement principle and the basic design of the quad-camera system, and finally demonstrates that in a large measurement volume of 200 mm × 200 mm × 400 mm, the resultant dynamic 3D sensing system can realize real-time 3D reconstruction at 60 frames per second with a depth precision of 50 μm.
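The phase-recovery core of three-step PSP is a closed-form arctangent over the three fringe images. The sketch below shows the standard three-step relation on synthetic fringes (not the authors' quad-camera data); the wrapping ambiguity it leaves is exactly what the quad-camera geometry is then used to resolve.

```python
import numpy as np

# Standard three-step phase-shifting recovery:
#   I_n = A + B*cos(phi + delta_n),  delta_n = 0, 2*pi/3, 4*pi/3
#   phi = atan2(sqrt(3)*(I3 - I2), 2*I1 - I2 - I3)
def three_step_phase(i1, i2, i3):
    return np.arctan2(np.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

phi_true = np.linspace(0.0, 6 * np.pi, 600)      # true phase ramp
A, B = 0.5, 0.4                                  # background and modulation
i1, i2, i3 = (A + B * np.cos(phi_true + d)
              for d in (0.0, 2 * np.pi / 3, 4 * np.pi / 3))

wrapped = three_step_phase(i1, i2, i3)
# Recovered phase equals the truth modulo 2*pi.
residual = np.angle(np.exp(1j * (wrapped - phi_true)))
print(np.abs(residual).max() < 1e-8)  # True
```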

  8. A real-time vision-based hand gesture interaction system for virtual EAST

    International Nuclear Information System (INIS)

    Wang, K.R.; Xiao, B.J.; Xia, J.Y.; Li, Dan; Luo, W.L.

    2016-01-01

    Highlights: • Hand gesture interaction is introduced to EAST model interaction for the first time. • We can interact with the EAST model using a bare hand and a web camera. • We can interact with the EAST model at a distance from the screen. • Interaction is free, direct, and effective. - Abstract: The virtual Experimental Advanced Superconducting Tokamak device (VEAST) is a very complicated 3D model, with which traditional interaction devices are limited and inefficient. However, with the development of human-computer interaction (HCI), hand gesture interaction has become a popular choice in recent years. In this paper, we propose a real-time vision-based hand gesture interaction system for VEAST. Using one web camera, we can use a bare hand to interact with VEAST at a certain distance, which proves to be more efficient and direct than a mouse. The system is composed of four modules: initialization, hand gesture recognition, interaction control, and system settings. The hand gesture recognition method is based on codebook (CB) background modeling and open-finger counting. First, we build a background model with the CB algorithm. Then, we segment the hand region by detecting skin-color regions with an “elliptical boundary model” in the CbCr plane of the YCbCr color space. Open fingers, used as a key feature of a gesture, can be tracked by an improved curvature-based method. Based on this method, we define nine gestures for interaction control of VEAST. Finally, we design a test to demonstrate the effectiveness of our system.
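A minimal sketch of elliptical-boundary skin classification in the Cb-Cr plane, the segmentation step the abstract describes. The ellipse centre, semi-axes, and rotation below are illustrative assumptions, not the authors' fitted parameters.

```python
import numpy as np

# Assumed skin-colour ellipse in the (Cb, Cr) plane -- illustrative only.
CB0, CR0 = 110.0, 153.0        # assumed ellipse centre
AX, BX = 20.0, 15.0            # assumed semi-axes
THETA = np.deg2rad(-30.0)      # assumed ellipse rotation

def skin_mask(cb, cr):
    """True where a pixel's (Cb, Cr) chromaticity lies inside the ellipse."""
    x, y = cb - CB0, cr - CR0
    # Rotate into the ellipse frame, then test the normalized radius.
    xr = np.cos(THETA) * x + np.sin(THETA) * y
    yr = -np.sin(THETA) * x + np.cos(THETA) * y
    return (xr / AX) ** 2 + (yr / BX) ** 2 <= 1.0

print(skin_mask(110.0, 153.0))  # True: at the ellipse centre
print(skin_mask(60.0, 60.0))    # False: far from skin chromaticities
```

Applied to full Cb/Cr arrays, `skin_mask` returns a boolean mask from which the hand region can be extracted with connected-component analysis.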

  9. A real-time vision-based hand gesture interaction system for virtual EAST

    Energy Technology Data Exchange (ETDEWEB)

    Wang, K.R., E-mail: wangkr@mail.ustc.edu.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Xiao, B.J.; Xia, J.Y. [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); University of Science and Technology of China, Hefei, Anhui (China); Li, Dan [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei, Anhui (China); Luo, W.L. [709th Research Institute, Shipbuilding Industry Corporation (China)

    2016-11-15

    Highlights: • Hand gesture interaction is introduced to EAST model interaction for the first time. • We can interact with the EAST model using a bare hand and a web camera. • We can interact with the EAST model at a distance from the screen. • Interaction is free, direct, and effective. - Abstract: The virtual Experimental Advanced Superconducting Tokamak device (VEAST) is a very complicated 3D model, with which traditional interaction devices are limited and inefficient. However, with the development of human-computer interaction (HCI), hand gesture interaction has become a popular choice in recent years. In this paper, we propose a real-time vision-based hand gesture interaction system for VEAST. Using one web camera, we can use a bare hand to interact with VEAST at a certain distance, which proves to be more efficient and direct than a mouse. The system is composed of four modules: initialization, hand gesture recognition, interaction control, and system settings. The hand gesture recognition method is based on codebook (CB) background modeling and open-finger counting. First, we build a background model with the CB algorithm. Then, we segment the hand region by detecting skin-color regions with an “elliptical boundary model” in the CbCr plane of the YCbCr color space. Open fingers, used as a key feature of a gesture, can be tracked by an improved curvature-based method. Based on this method, we define nine gestures for interaction control of VEAST. Finally, we design a test to demonstrate the effectiveness of our system.

  10. Real-time intensity based 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy

    Science.gov (United States)

    Furtado, H.; Steiner, E.; Stock, M.; Georg, D.; Birkfellner, W.

    2014-03-01

    Intra-fractional respiratory motion during radiotherapy is one of the main sources of uncertainty in dose application, creating the need to extend the margins of the planning target volume (PTV). Real-time tumor motion tracking by 2D/3D registration using on-board kilo-voltage (kV) imaging can lead to a reduction of the PTV. One limitation of this technique when using one projection image is the inability to resolve motion along the imaging beam axis. We present a retrospective patient study investigating the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. We used data from eighteen patients suffering from non-small cell lung cancer undergoing regular treatment at our center. For each patient we acquired a planning CT and sequences of kV and MV images during treatment. Our evaluation consisted of comparing the accuracy of motion tracking in 6 degrees of freedom (DOF) using either the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. We use graphics processing unit rendering for real-time performance. Motion along the cranial-caudal direction could be accurately extracted when using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 3.3 mm to 1.8 mm and the motion along AP was successfully extracted. The mean registration time was 190±35 ms. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in 6 DOF. Therefore, this approach is suitable for accurate, real-time tumor motion tracking with a conventional LINAC.
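The geometric reason kV-MV pairs help can be sketched in a few lines: a single projection is blind to motion along its own beam axis, while two views with different axes jointly constrain all three components. This is an idealized parallel-beam sketch with assumed view directions, not the paper's full 2D/3D registration.

```python
import numpy as np

kv_axis = np.array([0.0, 1.0, 0.0])   # kV beam along anterior-posterior
mv_axis = np.array([1.0, 0.0, 0.0])   # portal MV beam, roughly orthogonal

def observed(motion, beam_axis):
    """In-plane part of a 3-D motion seen by one projection:
    the component along the beam axis is lost."""
    return motion - np.dot(motion, beam_axis) * beam_axis

motion = np.array([1.0, 3.0, -2.0])   # true tumor displacement (mm)
print(observed(motion, kv_axis))      # the 3 mm AP component is invisible

# Stack the two projection operators and solve for the full vector:
A = np.vstack([np.eye(3) - np.outer(kv_axis, kv_axis),
               np.eye(3) - np.outer(mv_axis, mv_axis)])
b = np.concatenate([observed(motion, kv_axis), observed(motion, mv_axis)])
recovered, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(recovered, 6))         # full 3-D motion recovered
```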

  11. Avatar-mediation and transformation of practice in a 3D virtual world

    DEFF Research Database (Denmark)

    Riis, Marianne

    2016-01-01

    The purpose of this study is to understand and conceptualize the transformation of a particular community of pedagogical practice based on the implementation of the 3D virtual world, Second Life™. The community setting is a course at the Danish online postgraduate Master's programme on ICT...... and Learning, which is formally situated at Aalborg University. The study is guided by two research questions focusing on the participants' responses to the avatar phenomenon and the design of the course. In order to conduct and theorize about the transformation of this community of practice due to the 3D....... In summary, the study contributes with knowledge about 3D Virtual Worlds, the influence of the avatar phenomenon and the consequences of 3D-remediation in relation to teaching and learning in online education. Based on the findings, a conceptual design model, a set of design principles, and a design...

  12. Spatiotemporal Segmentation and Modeling of the Mitral Valve in Real-Time 3D Echocardiographic Images.

    Science.gov (United States)

    Pouch, Alison M; Aly, Ahmed H; Lai, Eric K; Yushkevich, Natalie; Stoffers, Rutger H; Gorman, Joseph H; Cheung, Albert T; Gorman, Joseph H; Gorman, Robert C; Yushkevich, Paul A

    2017-09-01

    Transesophageal echocardiography is the primary imaging modality for preoperative assessment of mitral valves with ischemic mitral regurgitation (IMR). While there are well known echocardiographic insights into the 3D morphology of mitral valves with IMR, such as annular dilation and leaflet tethering, less is understood about how quantification of valve dynamics can inform surgical treatment of IMR or predict short-term recurrence of the disease. As a step towards filling this knowledge gap, we present a novel framework for 4D segmentation and geometric modeling of the mitral valve in real-time 3D echocardiography (rt-3DE). The framework integrates multi-atlas label fusion and template-based medial modeling to generate quantitatively descriptive models of valve dynamics. The novelty of this work is that temporal consistency in the rt-3DE segmentations is enforced during both the segmentation and modeling stages with the use of groupwise label fusion and Kalman filtering. The algorithm is evaluated on rt-3DE data series from 10 patients: five with normal mitral valve morphology and five with severe IMR. In these 10 data series that total 207 individual 3DE images, each 3DE segmentation is validated against manual tracing and temporal consistency between segmentations is demonstrated. The ultimate goal is to generate accurate and consistent representations of valve dynamics that can both visually and quantitatively provide insight into normal and pathological valve function.
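Temporal consistency of the kind described above is commonly enforced by Kalman filtering of landmark trajectories across frames. A toy constant-velocity filter on one coordinate, with illustrative noise parameters rather than the authors' settings:

```python
import numpy as np

def kalman_smooth(z, dt=1.0, q=1e-3, r=0.05):
    """Constant-velocity Kalman filter over a 1-D position sequence."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])              # we observe position only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = np.array([z[0], 0.0])
    P = np.eye(2)
    out = []
    for zk in z:
        x = F @ x                                     # predict
        P = F @ P @ F.T + Q
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
        x = x + K @ (np.array([zk]) - H @ x)          # update
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
truth = np.sin(t)                             # smooth periodic valve motion
noisy = truth + rng.normal(0, 0.2, t.size)    # per-frame segmentation jitter
smooth = kalman_smooth(noisy)
# The filtered trajectory is closer to the truth than the raw per-frame one.
print(np.mean((smooth - truth) ** 2) < np.mean((noisy - truth) ** 2))
```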

  13. Automated real-time search and analysis algorithms for a non-contact 3D profiling system

    Science.gov (United States)

    Haynes, Mark; Wu, Chih-Hang John; Beck, B. Terry; Peterman, Robert J.

    2013-04-01

    The purpose of this research is to develop a new means of identifying and extracting geometrical feature statistics from a non-contact precision-measurement 3D profilometer. Autonomous algorithms have been developed to search through large-scale Cartesian point clouds to identify and extract geometrical features. These algorithms are developed with the intent of providing real-time production quality control of cold-rolled steel wires. The steel wires in question are prestressing steel reinforcement wires for concrete members. The geometry of the wire is critical to the performance of the overall concrete structure. For this research, a custom 3D non-contact profilometry system has been developed that utilizes laser displacement sensors for submicron-resolution surface profiling. Optimizations in the control and sensory system allow for data points to be collected at up to approximately 400,000 points per second. In order to achieve geometrical feature extraction and tolerancing with this large volume of data, the algorithms employed are optimized for parsing large data quantities. The methods used provide a unique means of maintaining high-resolution data of the surface profiles while keeping algorithm running times within practical bounds for industrial application. By a combination of regional sampling, iterative search, spatial filtering, frequency filtering, spatial clustering, and template matching, a robust feature identification method has been developed. These algorithms provide an autonomous means of verifying tolerances in geometrical features. The key method of identifying the features is through a combination of downhill simplex and geometrical feature templates. By performing downhill simplex through several procedural programming layers of different search and filtering techniques, very specific geometrical features can be identified within the point cloud and analyzed for proper tolerancing. Being able to perform this quality control in real time
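The downhill-simplex-plus-template idea can be sketched as fitting a parametric feature template to a noisy profile with Nelder-Mead. The Gaussian indent template and all parameters below are illustrative assumptions, not the wire geometry used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical feature template: a Gaussian indent of given center and depth.
def template(x, center, depth, width=0.5):
    return -depth * np.exp(-((x - center) / width) ** 2)

rng = np.random.default_rng(1)
x = np.linspace(-5, 5, 400)
# Synthetic "measured" profile: a true indent at 1.2 with depth 0.3, plus noise.
profile = template(x, 1.2, 0.3) + rng.normal(0, 0.01, x.size)

def cost(params):
    """Sum of squared residuals between profile and candidate template."""
    center, depth = params
    return np.sum((profile - template(x, center, depth)) ** 2)

# Downhill simplex (Nelder-Mead) search from a rough initial guess.
res = minimize(cost, x0=[0.5, 0.1], method="Nelder-Mead")
center_fit, depth_fit = res.x
print(res.success, center_fit, depth_fit)
```

Once the template parameters are fitted, tolerancing reduces to checking the fitted center/depth against specification limits.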

  14. A Collaborative Virtual Environment for Situated Language Learning Using VEC3D

    Science.gov (United States)

    Shih, Ya-Chun; Yang, Mau-Tsuen

    2008-01-01

    A 3D virtually synchronous communication architecture for situated language learning has been designed to foster communicative competence among undergraduate students who have studied English as a foreign language (EFL). We present an innovative approach that offers better e-learning than the previous virtual reality educational applications. The…

  15. 2.5D real waveform and real noise simulation of receiver functions in 3D models

    Science.gov (United States)

    Schiffer, Christian; Jacobsen, Bo; Balling, Niels

    2014-05-01

    There are several reasons why a real-data receiver function differs from the theoretical receiver function of a 1D model representing the stratification beneath the seismometer. The main reasons are ambient noise, spectral deficiencies in the impinging P-waveform, and wavefield propagation through laterally varying velocity structure. We present a rapid "2.5D" modelling approach which takes these aspects into account, so that a given 3D velocity model of the crust and uppermost mantle can be tested more realistically against observed recordings from seismometer arrays. Each recorded event at each seismometer is simulated individually through the following steps: a 2D section is extracted from the 3D model along the direction towards the hypocentre. A properly slanted planar or curved impulsive wavefront is propagated through this 2D section, resulting in noise-free and spectrally complete synthetic seismometer data. The real vertical-component signal is taken as a proxy for the real impinging wavefield, so by convolution and subsequent addition of real ambient noise recorded just before the P-arrival we get synthetic vertical- and horizontal-component data which very closely match the spectral signal content and signal-to-noise ratio of this specific recording. When these realistic synthetic data undergo exactly the same receiver function estimation and subsequent graphical display, we get a much more realistic image to compare to the real-data receiver functions. We applied this approach to the Central Fjord area in East Greenland (Schiffer et al., 2013), where a 3D velocity model of crust and uppermost mantle was adjusted to receiver functions from 2 years of seismometer recordings and wide-angle crustal profiles (Schlindwein and Jokat, 1999; Voss and Jokat, 2007). Computationally, this substitutes tens or hundreds of heavy 3D computations with hundreds or thousands of single-core 2D computations which parallelize very efficiently on common multicore systems. In perspective

  16. Virtual endoscopic images by 3D FASE cisternography for neurovascular compression

    International Nuclear Information System (INIS)

    Ishimori, Takashi; Nakano, Satoru; Kagawa, Masahiro

    2003-01-01

    Three-dimensional fast asymmetric spin echo (3D FASE) cisternography provides high spatial resolution and excellent contrast as a water image acquisition technique. It is also useful for the evaluation of various anatomical regions. This study investigated the usefulness and limitations of virtual endoscopic images obtained by 3D FASE MR cisternography in the preoperative evaluation of patients with neurovascular compression. The study included 12 patients with neurovascular compression: 10 with hemifacial spasm and two with trigeminal neuralgia. The diagnosis was surgically confirmed in all patients. The virtual endoscopic images obtained were judged to be of acceptable quality for interpretation in all cases. The areas of compression identified in preoperative diagnosis with virtual endoscopic images showed good agreement with those observed from surgery, except in one case in which the common trunk of the anterior inferior cerebellar artery and posterior inferior cerebellar artery (AICA-PICA) bifurcated near the root exit zone of the facial nerve. The veins are displayed in some cases but not in others. The main advantage of generating virtual endoscopic images is that such images can be used for surgical simulation, allowing the neurosurgeon to perform surgical procedures with greater confidence. (author)

  17. Some "Real" Problems of "Virtual" Organisation.

    Science.gov (United States)

    Hughes, John A.; O'Brien, Jon; Randall, Dave; Rouncefield, Mark; Tolmie, Peter

    2001-01-01

    An ethnographic study of organizational change in a bank considered issues surrounding virtual teamwork in virtual organizations. Problems in communication, management control, and approach to customer service were found. An underlying cause is that "virtual" work involves "real" customers, workers, and problems. (Contains 36 references.) (SK)

  18. Interaction in a Virtual Museum Using a Hand-Tracking Sensor with Stereoscopic 3D Presentation

    Directory of Open Access Journals (Sweden)

    Gary Almas Samaita

    2017-01-01

    Full Text Available Advances in technology have led museums to develop new ways of presenting their collections. One technology adapted for virtual museum presentation is Virtual Reality (VR) with stereoscopic 3D. Unfortunately, virtual museums with stereoscopic presentation still use the keyboard and mouse as interaction devices. This research aims to design and implement hand-sensing interaction in a virtual museum with stereoscopic 3D presentation. The virtual museum is visualized with the side-by-side stereoscopic technique through an Android-based Head Mounted Display (HMD). The HMD also provides head tracking by reading head orientation. Hand interaction is implemented using a hand-tracking sensor mounted on the HMD. Because the hand sensor is not supported by the Android-based HMD, a server is used as an intermediary between the HMD and the sensor. Testing showed that the average confidence rate of the hand-sensor readings for the hand patterns that trigger interactions was 99.92%, with an average effectiveness of 92.61%. A usability test based on ISO/IEC 9126-4 was also conducted to measure the effectiveness, efficiency, and user satisfaction of the designed system, asking participants to perform 9 tasks representing hand interactions in the virtual museum. The results show that all designed hand patterns could be performed by the participants, although the patterns were considered rather difficult to perform. A questionnaire showed that in total 86.67% of participants agreed that hand interaction provides a new experience in enjoying a virtual museum.

  19. Between the Real and the Virtual: 3D visualization in the Cultural Heritage domain - expectations and prospects

    Directory of Open Access Journals (Sweden)

    Sorin Hermon

    2011-05-01

    Full Text Available The paper discusses two uses of 3D Visualization and Virtual Reality (hereafter VR) of Cultural Heritage (hereafter CH) assets: a less common one, in archaeological/historical research, and a more frequent one, as a communication medium in CH museums. While technological effort has mainly been invested in improving the “accuracy” of VR (understood as how truthfully it reproduces the “CH reality”), issues related to scientific requirements (data transparency, separation between “real” and “virtual”, etc.) are largely neglected, or at least not directly related to the 3D outcome, which may explain why, after more than twenty years of producing VR models, they are still rarely used in archaeological research. The paper will present a proposal for developing VR tools so that they become meaningful CH research tools, as well as a methodology for designing VR outcomes to be used as a communication medium in CH museums.

  20. Introduction to programmable shader in real time 3D computer graphics

    International Nuclear Information System (INIS)

    Uemura, Syuhei; Kirii, Keisuke; Matsumura, Makoto; Matsumoto, Kenichiro

    2004-01-01

    Although the visualization of large-scale data plays an important role in the usefulness of information in the basic sciences, it has traditionally required high-end graphics systems or dedicated hardware. On the other hand, in recent years the capabilities of video game consoles and PC graphics boards have progressed remarkably, reflecting the expansion of the video game market in and outside the country. In particular, the ''programmable shader'' technology that several graphics chip makers have begun implementing is an innovative technology that can be called a generational change in real-time 3D graphics, and it has greatly expanded the scope of visual expression techniques. However, the development and use environments for software based on programmable shaders are not yet fully generalized, and the application of this technology to ultra-high-speed, high-quality visualization of large-scale data has not progressed. We provide an outline of programmable shader technology and consider the possibility of its application to large-scale data visualization. (author)

  1. Network dynamics with BrainX(3): a large-scale simulation of the human brain network with real-time interaction.

    Science.gov (United States)

    Arsiwalla, Xerxes D; Zucca, Riccardo; Betella, Alberto; Martinez, Enrique; Dalmazzo, David; Omedas, Pedro; Deco, Gustavo; Verschure, Paul F M J

    2015-01-01

    BrainX(3) is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX(3) in real-time by perturbing brain regions with transient stimulations to observe reverberating network activity, simulate lesion dynamics or implement network analysis functions from a library of graph theoretic measures. BrainX(3) can thus be used as a novel immersive platform for exploration and analysis of dynamical activity patterns in brain networks, both at rest and in a task-related state, for discovery of signaling pathways associated with brain function and/or dysfunction, and as a tool for virtual neurosurgery. Our results demonstrate these functionalities and shed light on the dynamics of the resting-state attractor. Specifically, we found that a noisy network seems to favor a low firing attractor state. We also found that the dynamics of a noisy network is less resilient to lesions. Our simulations on TMS perturbations show that even though TMS inhibits most of the network, it also sparsely excites a few regions. This is presumably due to anti-correlations in the dynamics and suggests that even a lesioned network can show sparsely distributed increased activity compared to healthy resting-state, over specific brain areas.
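A heavily simplified sketch of noisy rate dynamics on a random structural network, in the spirit of the neuronal population modeling BrainX(3) builds on. The connectivity and all parameters here are synthetic assumptions, not the diffusion-imaging connectome or the authors' model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 66                                     # number of brain regions
# Sparse random "structural connectivity" as a synthetic stand-in.
W = rng.random((n, n)) * (rng.random((n, n)) < 0.2)
W /= W.sum(axis=1, keepdims=True) + 1e-12  # row-normalize incoming weights

def step(r, dt=0.01, tau=0.1, g=0.9, noise=0.01):
    """One Euler step of dr/dt = (-r + tanh(g * W r)) / tau, plus noise."""
    drive = np.tanh(g * (W @ r))
    return (r + dt * (-r + drive) / tau
            + noise * np.sqrt(dt) * rng.normal(size=r.size))

r = 0.1 * rng.random(n)
for _ in range(5000):
    r = step(r)
# With sub-critical coupling (g < 1) the network settles into a bounded,
# low-activity noisy attractor.
print(r.shape, bool(np.abs(r).max() < 1.5))
```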

  2. Network dynamics with BrainX3: a large-scale simulation of the human brain network with real-time interaction

    Science.gov (United States)

    Arsiwalla, Xerxes D.; Zucca, Riccardo; Betella, Alberto; Martinez, Enrique; Dalmazzo, David; Omedas, Pedro; Deco, Gustavo; Verschure, Paul F. M. J.

    2015-01-01

    BrainX3 is a large-scale simulation of human brain activity with real-time interaction, rendered in 3D in a virtual reality environment, which combines computational power with human intuition for the exploration and analysis of complex dynamical networks. We ground this simulation on structural connectivity obtained from diffusion spectrum imaging data and model it on neuronal population dynamics. Users can interact with BrainX3 in real-time by perturbing brain regions with transient stimulations to observe reverberating network activity, simulate lesion dynamics or implement network analysis functions from a library of graph theoretic measures. BrainX3 can thus be used as a novel immersive platform for exploration and analysis of dynamical activity patterns in brain networks, both at rest and in a task-related state, for discovery of signaling pathways associated with brain function and/or dysfunction, and as a tool for virtual neurosurgery. Our results demonstrate these functionalities and shed light on the dynamics of the resting-state attractor. Specifically, we found that a noisy network seems to favor a low firing attractor state. We also found that the dynamics of a noisy network is less resilient to lesions. Our simulations on TMS perturbations show that even though TMS inhibits most of the network, it also sparsely excites a few regions. This is presumably due to anti-correlations in the dynamics and suggests that even a lesioned network can show sparsely distributed increased activity compared to healthy resting-state, over specific brain areas. PMID:25759649

  3. Visual simultaneous localization and mapping (VSLAM) methods applied to indoor 3D topographical and radiological mapping in real-time

    International Nuclear Information System (INIS)

    Hautot, F.; Dubart, P.; Chagneau, B.; Bacri, C.O.; Abou-Khalil, R.

    2017-01-01

    New developments in the fields of robotics and computer vision make it possible to merge sensors to allow fast real-time localization of radiological measurements in space, with near real-time identification and characterization of radioactive sources. These capabilities make nuclear investigations more efficient for operators' dosimetry evaluation, intervention scenario planning, and risk mitigation and simulation, for example after accidents in unknown, potentially contaminated areas or during dismantling operations. This paper will present new progress in merging RGB-D camera-based SLAM (Simultaneous Localization and Mapping) systems with nuclear measurement-in-motion methods in order to detect, locate, and evaluate the activity of radioactive sources in three dimensions

  4. When the display matters: A multifaceted perspective on 3D geovisualizations

    Directory of Open Access Journals (Sweden)

    Juřík Vojtěch

    2017-04-01

    Full Text Available This study explores the influence of stereoscopic (real 3D) and monoscopic (pseudo 3D) visualization on the human ability to reckon altitude information in non-interactive and interactive 3D geovisualizations. A two-phase experiment was carried out to compare the performance of two groups of participants, one of them using the real 3D and the other the pseudo 3D visualization of geographical data. A homogeneous group of 61 psychology students, inexperienced in the processing of geographical data, was tested with respect to their efficiency at identifying altitudes of the displayed landscape. The first phase of the experiment was designed as non-interactive, where static 3D visual displays were presented; the second phase was designed as interactive, and the participants were allowed to explore the scene by adjusting the position of the virtual camera. The investigated variables included accuracy at altitude identification, time demands, and the amount of the participant's motor activity performed during interaction with the geovisualization. The interface was created using a Motion Capture system, a Wii Remote Controller, widescreen projection, and passive Dolby 3D technology (for real 3D vision). The real 3D visual display was shown to significantly increase the accuracy of landscape altitude identification in non-interactive tasks. As expected, in the interactive phase the differences in accuracy between groups flattened out due to the possibility of interaction, with no other statistically significant differences in completion times or motor activity. The increased number of omitted objects in the real 3D condition was further subjected to an exploratory analysis.

  5. Anesthesiology training using 3D imaging and virtual reality

    Science.gov (United States)

    Blezek, Daniel J.; Robb, Richard A.; Camp, Jon J.; Nauss, Lee A.

    1996-04-01

    Current training for regional nerve block procedures by anesthesiology residents requires expert supervision and the use of cadavers, both of which are relatively expensive commodities in today's cost-conscious medical environment. We are developing methods to augment and eventually replace these training procedures with real-time, realistic computer visualizations and manipulations of the anatomical structures involved in anesthesiology procedures, such as nerve plexus injections (e.g., celiac blocks). The initial work is focused on visualizations: both static images and rotational renderings. From the initial results, a coherent paradigm for virtual patient and scene representation will be developed.

  6. Real-time target tracking of soft tissues in 3D ultrasound images based on robust visual information and mechanical simulation.

    Science.gov (United States)

    Royer, Lucas; Krupa, Alexandre; Dardenne, Guillaume; Le Bras, Anthony; Marchand, Eric; Marchal, Maud

    2017-01-01

    In this paper, we present a real-time approach for tracking deformable structures in 3D ultrasound sequences. Our method obtains the target displacements by combining robust dense motion estimation with mechanical model simulation. We evaluate our method on simulated data, phantom data, and real data. Results demonstrate that this novel approach provides correct motion estimation despite various ultrasound shortcomings, including speckle noise, large shadows, and ultrasound gain variation. Furthermore, we show the good performance of our method with respect to state-of-the-art techniques by testing on the 3D databases provided by the MICCAI CLUST'14 and CLUST'15 challenges. Copyright © 2016 Elsevier B.V. All rights reserved.

  7. Real behavior in virtual environments: psychology experiments in a simple virtual-reality paradigm using video games.

    Science.gov (United States)

    Kozlov, Michail D; Johansen, Mark K

    2010-12-01

    The purpose of this research was to illustrate the broad usefulness of simple video-game-based virtual environments (VEs) for psychological research on real-world behavior. To this end, this research explored several high-level social phenomena in a simple, inexpensive computer-game environment: the reduced likelihood of helping under time pressure and the bystander effect, which is reduced helping in the presence of bystanders. In the first experiment, participants had to find the exit in a virtual labyrinth under either high or low time pressure. They encountered rooms with and without virtual bystanders, and in each room, a virtual person requested assistance. Participants helped significantly less frequently under time pressure but the presence/absence of a small number of bystanders did not significantly moderate helping. The second experiment increased the number of virtual bystanders, and participants were instructed to imagine that these were real people. Participants helped significantly less in rooms with large numbers of bystanders compared to rooms with no bystanders, thus demonstrating a bystander effect. These results indicate that even sophisticated high-level social behaviors can be observed and experimentally manipulated in simple VEs, thus implying the broad usefulness of this paradigm in psychological research as a good compromise between experimental control and ecological validity.

  8. Building intuitive 3D interfaces for virtual reality systems

    Science.gov (United States)

    Vaidya, Vivek; Suryanarayanan, Srikanth; Seitel, Mathias; Mullick, Rakesh

    2007-03-01

    An exploration of techniques for developing intuitive and efficient user interfaces for virtual reality systems. The work seeks to understand which paradigms from the better-understood world of 2D user interfaces remain viable within 3D environments. To establish this, a new user interface was created that applied various well-understood principles of interface design. A user study was then performed in which it was compared with an earlier interface on a series of medical visualization tasks.

  9. Teaching Physics to Deaf College Students in a 3-D Virtual Lab

    Science.gov (United States)

    Robinson, Vicki

    2013-01-01

    Virtual worlds are used in many educational and business applications. At the National Technical Institute for the Deaf at Rochester Institute of Technology (NTID/RIT), deaf college students are introduced to the virtual world of Second Life, which is a 3-D immersive, interactive environment, accessed through computer software. NTID students use…

  10. Avatar (A’): Contrasting Lacan’s Theory and 3D Virtual Worlds. A Case Study in Second Life

    Directory of Open Access Journals (Sweden)

    Carlos Hernán González-Campo

    2013-01-01

    Full Text Available Lacan did not propose a totalized subject but a divided one, whose representation is structured in each interaction with its peers through Saussure's argumentative language. This shows the real, the imaginary, and the symbolic as (a), (a'), or (A). This study proposes and discusses that it is now possible to raise these questions virtually, taking into account the social and psychological effects of cyberspace and the capacity to decide and execute actions. In practice, the representation is given by the avatar, known as (A'), since it is an evolution of the other (A). This interaction is carried out through the use of language, with the construction of signifieds and signifiers. Signifieds are conceived in the virtual world and signifiers in the real one, but the latter could allow the former to materialize the Other (A) in the avatar (A'). Second Life is a metaverse, a massively multiplayer online role-playing game (MMORPG), that displays 3D virtual worlds in which each subject is able to create avatars characterizing his or her own identity through the subject's desires.

  11. Virtual laboratories : comparability of real and virtual environments for environmental psychology

    NARCIS (Netherlands)

    Kort, de Y.A.W.; IJsselsteijn, W.A.; Kooijman, J.M.A.; Schuurmans, Y.

    2003-01-01

    Virtual environments have the potential to become important new research tools in environment behavior research. They could even become the future (virtual) laboratories, if reactions of people to virtual environments are similar to those in real environments. The present study is an exploration of

  12. 3D Adaptive Virtual Exhibit for the University of Denver Digital Collections

    Directory of Open Access Journals (Sweden)

    Shea-Tinn Yeh

    2015-07-01

    Full Text Available While the gaming industry has taken the world by storm with its three-dimensional (3D user interfaces, current digital collection exhibits presented by museums, historical societies, and libraries are still limited to a two-dimensional (2D interface display. Why can’t digital collections take advantage of this 3D interface advancement? The prototype discussed in this paper presents to the visitor a 3D virtual exhibit containing a set of digital objects from the University of Denver Libraries’ digital image collections, giving visitors an immersive experience when viewing the collections. In particular, the interface is adaptive to the visitor’s browsing behaviors and alters the selection and display of the objects throughout the exhibit to encourage serendipitous discovery. Social media features were also integrated to allow visitors to share items of interest and to create a sense of virtual community.

  13. Development of Virtual Reality Cycling Simulator

    OpenAIRE

    Schramka, Filip; Arisona, Stefan; Joos, Michael; Erath, Alexander

    2017-01-01

    This paper presents a cycling simulator implemented using consumer virtual reality hardware and additional off-the-shelf sensors. Challenges such as real-time motion tracking within the performance requirements of state-of-the-art virtual reality are successfully mastered. Data retrieved from digital motion processors is sent over Bluetooth to a render machine running Unity3D. By processing this data, a bicycle is mapped into virtual space. Physically correct behaviour is simulated and high quali...

  14. Toward Virtual Campuses: Collaborative Virtual Labs & Personalized Learning Services in a Real-Life Context

    OpenAIRE

    Tsekeridou, Sofia; Tiropanis, Thanassis; Christou, Ioannis; Vakilzadeh, Haleh

    2008-01-01

    Virtual campuses are gradually becoming a reality with the advances in e-learning and Web technologies, distributed systems and broadband communication, as well as the emerging needs of remote Universities for collaboration on offering common programs. The advances in grid-based distributed infrastructures have further significantly contributed to this fact providing optimized and real-time system performance and support for virtual communities even under synchronous distributed multi-user us...

  15. Use of real-time three-dimensional transesophageal echocardiography in type A aortic dissections: Advantages of 3D TEE illustrated in three cases

    Directory of Open Access Journals (Sweden)

    Cindy J Wang

    2015-01-01

    Full Text Available Stanford type A aortic dissections often present to the hospital requiring emergent surgical intervention. Initial diagnosis is usually made by computed tomography; however transesophageal echocardiography (TEE can further characterize aortic dissections with specific advantages: It may be performed on an unstable patient, it can be used intra-operatively, and it has the ability to provide continuous real-time information. Three-dimensional (3D TEE has become more accessible over recent years allowing it to serve as an additional tool in the operating room. We present a case series of three patients presenting with type A aortic dissections and the advantages of intra-operative 3D TEE to diagnose the extent of dissection in each case. Prior case reports have demonstrated the use of 3D TEE in type A aortic dissections to characterize the extent of dissection and involvement of neighboring structures. In our three cases described, 3D TEE provided additional understanding of spatial relationships between the dissection flap and neighboring structures such as the aortic valve and coronary orifices that were not fully appreciated with two-dimensional TEE, which affected surgical decisions in the operating room. This case series demonstrates the utility and benefit of real-time 3D TEE during intra-operative management of a type A aortic dissection.

  16. Virtual working systems to support R&D groups

    Science.gov (United States)

    Dew, Peter M.; Leigh, Christine; Drew, Richard S.; Morris, David; Curson, Jayne

    1995-03-01

    The paper reports on progress at Leeds University in building a Virtual Science Park (VSP) to enhance the University's ability to interact with industry and to grow its applied research and workplace learning activities. The VSP exploits advances in real-time collaborative computing and networking to provide an environment that meets the objectives of physically based science parks without the need for organizations to relocate. It provides an integrated set of services (e.g. virtual consultancy, work-based learning) built around a structured, person-centered information model. This model supports the integration of tools for: (a) navigating around the information space; (b) browsing information stored within the VSP database; (c) communicating through a variety of person-to-person collaborative tools; and (d) managing the information stored in the VSP, including the relationships to other information that support the underlying model. The paper gives an overview of a generic virtual working system based on X.500 directory services and the World-Wide Web that can be used to support the Virtual Science Park. Finally, the paper discusses some of the research issues that need to be addressed to fully realize a Virtual Science Park.

  17. Implementation of 3D-virtual brachytherapy in the management of breast cancer: a description of a new method of interstitial brachytherapy

    International Nuclear Information System (INIS)

    Vicini, Frank A.; Jaffray, David A.; Horwitz, Eric M.; Edmundson, Gregory K.; DeBiose, David A.; Kini, Vijay R.; Martinez, Alvaro A.

    1998-01-01

    preoperatively. Results: Intraoperative ultrasound was used to check the real-time position of the afterloading needles in reference to the chest wall and posterior border of the target volume. No adjustment of needles was required in any of the 11 patients. Assessment of target volume coverage between the virtual implant and the actual CT image of the implant showed excellent agreement. In each case, all target volume boundaries specified by the physician were adequately covered. The total number of implant planes, intertemplate separation, and template orientation were identical between the virtual and real implant. Conclusion: We conclude that 3D virtual brachytherapy may offer an improved technique for accurately performing interstitial implants of the breast with a closed lumpectomy cavity in selected patients. Although preliminary results show excellent coverage of the desired target volume, additional patients will be required to establish the reproducibility of this technique and its practical limitations

  18. LED Virtual Simulation based on Web3D

    OpenAIRE

    Lilan Liu; Liu Han; Zhiqi Lin; Manping Li; Tao Yu

    2014-01-01

    Given the high price and low market popularity of current LED indoor lighting products, an LED indoor lighting platform based on Web3D technology is proposed. Internet virtual reality technology is integrated into the LED collaborative e-commerce website with Virtools. According to the characteristics of LED indoor lighting products, this paper introduces the method of building encapsulated models and three characteristics of LED lighting: geometrical, optical and behavi...

  19. Virtual inspector: a flexible visualizer for dense 3D scanned models

    OpenAIRE

    Callieri, Marco; Ponchio, Federico; Cignoni, Paolo; Scopigno, Roberto

    2008-01-01

    The rapid evolution of automatic shape acquisition technologies will make huge amounts of sampled 3D data available in the near future. The Cultural Heritage (CH) domain is one of the ideal fields of application for 3D scanned data, but several issues arise in the use of those data: how to visualize at interactive rates and full quality on commodity computers; how to improve ease of use of the visualization; how to support the integrated visualization of a virtual 3D artwork and the multimedia data which t...

  20. Generation of 3D Virtual Geographic Environment Based on Laser Scanning Technique

    Institute of Scientific and Technical Information of China (English)

    DU Jie; CHEN Xiaoyong; FumioYamazaki

    2003-01-01

    This paper demonstrates an experiment on the generation of a 3D virtual geographic environment from experimental flight laser scanning data, using a set of algorithms and methods developed to automatically interpret range images, extract geo-spatial features, and reconstruct geo-objects. The algorithms and methods for the interpretation and modeling of laser scanner data include triangulated-irregular-network (TIN)-based range image interpolation; mathematical-morphology (MM)-based range image filtering, feature extraction and range image segmentation; feature generalization and optimization; 3D object reconstruction and modeling; and computer-graphics (CG)-based visualization and animation of the virtual geographic environment.
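    The MM-based filtering step mentioned above can be illustrated with a minimal grey-scale opening on a range image: opening with a flat structuring element approximates the bare terrain, and points far above the opened surface are flagged as objects (buildings, vegetation). The function names, window size, and height threshold below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def grey_opening(z, win):
    """Morphological opening (erosion then dilation) of range image z
    with a flat square structuring element of side `win`. Opening removes
    raised features narrower than the window, leaving a terrain estimate."""
    r = win // 2
    H, W = z.shape
    pad = np.pad(z, r, mode="edge")
    eroded = np.empty_like(z)
    for i in range(H):                       # erosion: windowed minimum
        for j in range(W):
            eroded[i, j] = pad[i:i + win, j:j + win].min()
    pad2 = np.pad(eroded, r, mode="edge")
    opened = np.empty_like(z)
    for i in range(H):                       # dilation: windowed maximum
        for j in range(W):
            opened[i, j] = pad2[i:i + win, j:j + win].max()
    return opened

def nonground_mask(z, win=3, height_thr=1.0):
    """Pixels more than height_thr above the opened surface are objects."""
    return (z - grey_opening(z, win)) > height_thr
```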

  1. Virtual film technique used in 3d and step-shot IMRT planning check

    International Nuclear Information System (INIS)

    Wang, Y.; Zealey, W.; Deng, X.; Huang, S.; Qi, Z.

    2004-01-01

    Full text: A virtual film technique was developed and used in segmented-field dose reconstruction to check IMRT planning dose distributions. Film dosimetry analysis is commonly used for isodose curve comparison, but the result can be affected by film dosimetry technical problems, and film processing also adds a significant workload. This study focuses on using digital image techniques to reconstruct the dose distribution for a 3D plan by mapping water-scan data on screen as black-and-white intensity values, and by simulating the film analysis process to plot equivalent isodose curves for the planning isodose comparison check. In-house developed software is used to select the TPR (Tissue-Phantom Ratio) and OCR (Off Central-Axis Ratio) data for different beam field types and sizes; each point dose of the field is interpolated and converted into a greyscale pixel value. The location of the pixel is calculated trigonometrically according to the beam entry position and gantry/collimator angles. After each segment field is processed, the program gathers all the segments and overlays the greyscale values pixel by pixel into a combined map. The background value is calibrated to match the water-scan curve background level. The penumbra slope is adjusted by an interpolated divergence angle according to the OAD (Off Central-Axis Distance) of the field. A normal film dosimetry analysis can then be performed to plot the isodose curves. Comparing some typical fields with both single-beam and segmented IMRT fields, with point doses checked by ionization measurement, the central point dose discrepancy is within ±2%, and the maximum is 3-5% for a random point using the TLD technique. The isodose overlay results were compared to planning curves for both perpendicular and lateral beams. Although the curve shape for the virtual film is more artificial compared with real film, the results are easier to compare for quantitative analysis with
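    The TPR × OCR point-dose mapping described above can be sketched for a single perpendicular beam plane. This is a simplified, hypothetical rendering (no gantry/collimator geometry, no penumbra divergence correction); the function name, table layout, and linear interpolation are assumptions, not the in-house software's design.

```python
import numpy as np

def virtual_film(depth, tpr_depths, tpr_vals, ocr_offsets, ocr_vals,
                 xs, ys, dose_max):
    """Render one beam's dose plane as an 8-bit greyscale 'virtual film'.

    Point dose is modelled as TPR(depth) * OCR(off-axis distance), both
    linearly interpolated from water-scan tables; the dose is then scaled
    to a 0-255 greyscale pixel value.
    """
    tpr = np.interp(depth, tpr_depths, tpr_vals)        # scalar TPR at depth
    oad = np.hypot(xs[None, :], ys[:, None])            # off-central-axis distance
    ocr = np.interp(oad, ocr_offsets, ocr_vals)         # off-axis ratio per pixel
    dose = tpr * ocr
    return np.clip(dose / dose_max * 255, 0, 255).astype(np.uint8)
```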

  2. Holovideo: Real-time 3D range video encoding and decoding on GPU

    Science.gov (United States)

    Karpinsky, Nikolaus; Zhang, Song

    2012-02-01

    We present a 3D video-encoding technique called Holovideo that is capable of encoding high-resolution 3D videos into standard 2D videos, and then decoding the 2D videos back into 3D rapidly without significant loss of quality. Due to the nature of the algorithm, 2D video compression such as JPEG encoding with QuickTime Run Length Encoding (QTRLE) can be applied with little quality loss, resulting in an effective way to store 3D video at very small file sizes. We found that under a compression ratio of 134:1, Holovideo to OBJ file format, the 3D geometry quality drops at a negligible level. Several sets of 3D videos were captured using a structured light scanner, compressed using the Holovideo codec, and then uncompressed and displayed to demonstrate the effectiveness of the codec. With the use of OpenGL Shaders (GLSL), the 3D video codec can encode and decode in real time. We demonstrated that for a video size of 512×512, the decoding speed is 28 frames per second (FPS) on a laptop computer using an embedded NVIDIA GeForce 9400M graphics processing unit (GPU). Encoding can be done with this same setup at 18 FPS, making this technology suitable for applications such as interactive 3D video games and 3D video conferencing.
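    The abstract does not detail Holovideo's encoding; the sketch below illustrates only the underlying idea of packing range (depth) data into 8-bit image channels so it can ride through a standard 2D video pipeline. The 16-bit fixed-point split into a coarse and a fine byte, and the function names, are illustrative assumptions, not the actual codec.

```python
import numpy as np

def encode_depth(z, z_min, z_max):
    """Quantize float depth to 16-bit fixed point over [z_min, z_max] and
    split it into two 8-bit channels (coarse byte, fine byte)."""
    q = np.round((z - z_min) / (z_max - z_min) * 65535).astype(np.uint16)
    return (q >> 8).astype(np.uint8), (q & 0xFF).astype(np.uint8)

def decode_depth(coarse, fine, z_min, z_max):
    """Reassemble the two channels and rescale back to metric depth."""
    q = (coarse.astype(np.uint16) << 8) | fine
    return q / 65535 * (z_max - z_min) + z_min
```

The round trip is lossy only at the 16-bit quantization step, i.e. (z_max - z_min)/65535 per sample.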

  3. Real-Time View Correction for Mobile Devices.

    Science.gov (United States)

    Schops, Thomas; Oswald, Martin R; Speciale, Pablo; Yang, Shuoran; Pollefeys, Marc

    2017-11-01

    We present a real-time method for rendering novel virtual camera views from given RGB-D (color and depth) data of a different viewpoint. Missing color and depth information due to incomplete input or disocclusions is efficiently inpainted in a temporally consistent way. The inpainting takes the location of strong image gradients into account as likely depth discontinuities. We present our method in the context of a view correction system for mobile devices, and discuss how to obtain a screen-camera calibration and options for acquiring depth input. Our method has use cases in both augmented and virtual reality applications. We demonstrate the speed of our system and the visual quality of its results in multiple experiments in the paper as well as in the supplementary video.
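    A crude stand-in for the temporally consistent, gradient-aware inpainting described above is iterative diffusion, which fills missing depth or color from surrounding pixels. The sketch below is illustrative only (no gradient weighting, no temporal term); the function name and iteration count are assumptions.

```python
import numpy as np

def diffuse_inpaint(img, mask, iters=200):
    """Fill masked (missing) pixels by repeated 4-neighbour averaging.

    img:  2D float array (e.g. a depth map with disocclusion holes).
    mask: boolean array, True where data is missing.
    """
    out = img.copy()
    out[mask] = out[~mask].mean()            # initial guess from known pixels
    for _ in range(iters):
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4
        out[mask] = avg[mask]                # relax only the missing pixels
    return out
```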

  4. Real-time measurement of dynamic structure for Pd-D system in heavy-water electrolysis cell

    International Nuclear Information System (INIS)

    Wang Jun; Zeng Xianxin; Yang Jilian; Zhang Baisheng; Ruan Jinghui

    1993-01-01

    The real-time dynamic structure of the Pd-D system in a D2O electrolysis cell was measured on the neutron powder diffractometer at CIAE. Diffraction patterns in the 2θ range of 34°-95° were obtained after electrolysing for 0, 3 and 48 A·h respectively, and the gradual transition of the Pd-D system from α-phase to β-phase was observed. Real-time measurements of the β-phase (220) reflection show that the intensity of the β peak almost reaches saturation after electrolysing for 0.65 A·h and increases slowly with further electrolysis afterwards.
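    The 2θ position at which a reflection such as (220) appears follows from Bragg's law, λ = 2d sin θ, with d = a/√(h² + k² + l²) for a cubic lattice. A minimal sketch (the lattice parameter and neutron wavelength used below are placeholder values, not the paper's):

```python
import math

def two_theta_deg(a, h, k, l, wavelength):
    """Bragg angle 2θ (degrees) for reflection (hkl) of a cubic lattice
    with parameter a, at the given wavelength (same length units as a)."""
    d = a / math.sqrt(h * h + k * k + l * l)      # interplanar spacing
    return 2 * math.degrees(math.asin(wavelength / (2 * d)))
```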

  5. Modeling 3D Unknown object by Range Finder and Video Camera ...

    African Journals Online (AJOL)

    real world); proprioceptive and exteroceptive sensors allow the recreation of the 3D geometric database of an environment (virtual world). The virtual world is projected onto a video display terminal (VDT). Computer-generated and video ...

  6. Real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy.

    Science.gov (United States)

    Li, Ruijiang; Jia, Xun; Lewis, John H; Gu, Xuejun; Folkerts, Michael; Men, Chunhua; Jiang, Steve B

    2010-06-01

    To develop an algorithm for real-time volumetric image reconstruction and 3D tumor localization based on a single x-ray projection image for lung cancer radiotherapy. Given a set of volumetric images of a patient at N breathing phases as the training data, deformable image registration was performed between a reference phase and the other N-1 phases, resulting in N-1 deformation vector fields (DVFs). These DVFs can be represented efficiently by a few eigenvectors and coefficients obtained from principal component analysis (PCA). By varying the PCA coefficients, new DVFs can be generated, which, when applied on the reference image, lead to new volumetric images. A volumetric image can then be reconstructed from a single projection image by optimizing the PCA coefficients such that its computed projection matches the measured one. The 3D location of the tumor can be derived by applying the inverted DVF on its position in the reference image. The algorithm was implemented on graphics processing units (GPUs) to achieve real-time efficiency. The training data were generated using a realistic and dynamic mathematical phantom with ten breathing phases. The testing data were 360 cone beam projections corresponding to one gantry rotation, simulated using the same phantom with a 50% increase in breathing amplitude. The average relative image intensity error of the reconstructed volumetric images is 6.9% ± 2.4%. The average 3D tumor localization error is 0.8 ± 0.5 mm. On an NVIDIA Tesla C1060 GPU card, the average computation time for reconstructing a volumetric image from each projection is 0.24 s (range: 0.17-0.35 s). The authors have shown the feasibility of reconstructing volumetric images and localizing tumor positions in 3D in near real-time from a single x-ray image.
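    The reconstruction pipeline above (PCA over training DVFs, then solving for the coefficients whose projection matches the measurement) can be sketched with a linear projection operator standing in for the x-ray forward projector. The function name, the SVD-based PCA, and the linear least-squares solve are illustrative assumptions; the paper optimizes against a nonlinear projector.

```python
import numpy as np

def fit_pca_coefficients(dvf_samples, n_modes, project, measured):
    """Reconstruct a deformation from a single projection.

    dvf_samples: (N, M) flattened training DVFs.
    project:     (P, M) linear projection operator (stand-in for the
                 x-ray forward projector).
    measured:    (P,) measured projection.
    Returns the reconstructed flattened DVF.
    """
    mean = dvf_samples.mean(axis=0)
    # PCA via SVD of the mean-centred training matrix.
    _, _, Vt = np.linalg.svd(dvf_samples - mean, full_matrices=False)
    modes = Vt[:n_modes]                      # principal eigenvectors
    # measured ~ project @ (mean + modes.T @ w): least squares in w.
    A = project @ modes.T
    b = measured - project @ mean
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return mean + modes.T @ w
```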

  7. Enhanced LOD Concepts for Virtual 3d City Models

    Science.gov (United States)

    Benner, J.; Geiger, A.; Gröger, G.; Häfele, K.-H.; Löwner, M.-O.

    2013-09-01

    Virtual 3D city models contain digital three dimensional representations of city objects like buildings, streets or technical infrastructure. Because size and complexity of these models continuously grow, a Level of Detail (LoD) concept effectively supporting the partitioning of a complete model into alternative models of different complexity and providing metadata, addressing informational content, complexity and quality of each alternative model is indispensable. After a short overview on various LoD concepts, this paper discusses the existing LoD concept of the CityGML standard for 3D city models and identifies a number of deficits. Based on this analysis, an alternative concept is developed and illustrated with several examples. It differentiates between first, a Geometric Level of Detail (GLoD) and a Semantic Level of Detail (SLoD), and second between the interior building and its exterior shell. Finally, a possible implementation of the new concept is demonstrated by means of an UML model.

  8. Eco-Dialogical Learning and Translanguaging in Open-Ended 3D Virtual Learning Environments: Where Place, Time, and Objects Matter

    Science.gov (United States)

    Zheng, Dongping; Schmidt, Matthew; Hu, Ying; Liu, Min; Hsu, Jesse

    2017-01-01

    The purpose of this research was to explore the relationships between design, learning, and translanguaging in a 3D collaborative virtual learning environment for adolescent learners of Chinese and English. We designed an open-ended space congruent with ecological and dialogical perspectives on second language acquisition. In such a space,…

  9. Gait adaptation to visual kinematic perturbations using a real-time closed-loop brain-computer interface to a virtual reality avatar.

    Science.gov (United States)

    Luu, Trieu Phat; He, Yongtian; Brown, Samuel; Nakagome, Sho; Contreras-Vidal, Jose L

    2016-06-01

    The control of human bipedal locomotion is of great interest to the field of lower-body brain-computer interfaces (BCIs) for gait rehabilitation. While the feasibility of closed-loop BCI systems for the control of a lower body exoskeleton has been recently shown, multi-day closed-loop neural decoding of human gait in a BCI virtual reality (BCI-VR) environment has yet to be demonstrated. BCI-VR systems provide valuable alternatives for movement rehabilitation when wearable robots are not desirable due to medical conditions, cost, accessibility, usability, or patient preferences. In this study, we propose a real-time closed-loop BCI that decodes lower limb joint angles from scalp electroencephalography (EEG) during treadmill walking to control a walking avatar in a virtual environment. Fluctuations in the amplitude of slow cortical potentials of EEG in the delta band (0.1-3 Hz) were used for prediction; thus, the EEG features correspond to time-domain amplitude modulated potentials in the delta band. Virtual kinematic perturbations resulting in asymmetric walking gait patterns of the avatar were also introduced to investigate gait adaptation using the closed-loop BCI-VR system over a period of eight days. Our results demonstrate the feasibility of using a closed-loop BCI to learn to control a walking avatar under normal and altered visuomotor perturbations, which involved cortical adaptations. The average decoding accuracies (Pearson's r values) in real-time BCI across all subjects increased from (Hip: 0.18 ± 0.31; Knee: 0.23 ± 0.33; Ankle: 0.14 ± 0.22) on Day 1 to (Hip: 0.40 ± 0.24; Knee: 0.55 ± 0.20; Ankle: 0.29 ± 0.22) on Day 8. These findings have implications for the development of a real-time closed-loop EEG-based BCI-VR system for gait rehabilitation after stroke and for understanding cortical plasticity induced by a closed-loop BCI-VR system.
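    The decoding stage above (predicting joint angles from delta-band EEG amplitudes, scored with Pearson's r) can be sketched as a simple linear decoder. This is a hedged illustration: the ridge-regression fit, the feature layout, and the function names are assumptions, not the paper's decoder.

```python
import numpy as np

def train_decoder(eeg, angles, lam=1e-3):
    """Fit a linear map from EEG features to a joint angle by ridge
    regression (normal equations). eeg: (T, F) band-limited amplitude
    features per time sample; angles: (T,) target joint angle."""
    X = np.c_[eeg, np.ones(len(eeg))]         # append bias column
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ angles)

def decode(eeg, w):
    """Predict joint angles from EEG features with trained weights w."""
    return np.c_[eeg, np.ones(len(eeg))] @ w

def pearson_r(a, b):
    """Decoding accuracy as Pearson's correlation coefficient."""
    return float(np.corrcoef(a, b)[0, 1])
```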

  10. Gait adaptation to visual kinematic perturbations using a real-time closed-loop brain-computer interface to a virtual reality avatar

    Science.gov (United States)

    Phat Luu, Trieu; He, Yongtian; Brown, Samuel; Nakagome, Sho; Contreras-Vidal, Jose L.

    2016-06-01

    Objective. The control of human bipedal locomotion is of great interest to the field of lower-body brain-computer interfaces (BCIs) for gait rehabilitation. While the feasibility of closed-loop BCI systems for the control of a lower body exoskeleton has been recently shown, multi-day closed-loop neural decoding of human gait in a BCI virtual reality (BCI-VR) environment has yet to be demonstrated. BCI-VR systems provide valuable alternatives for movement rehabilitation when wearable robots are not desirable due to medical conditions, cost, accessibility, usability, or patient preferences. Approach. In this study, we propose a real-time closed-loop BCI that decodes lower limb joint angles from scalp electroencephalography (EEG) during treadmill walking to control a walking avatar in a virtual environment. Fluctuations in the amplitude of slow cortical potentials of EEG in the delta band (0.1-3 Hz) were used for prediction; thus, the EEG features correspond to time-domain amplitude modulated potentials in the delta band. Virtual kinematic perturbations resulting in asymmetric walking gait patterns of the avatar were also introduced to investigate gait adaptation using the closed-loop BCI-VR system over a period of eight days. Main results. Our results demonstrate the feasibility of using a closed-loop BCI to learn to control a walking avatar under normal and altered visuomotor perturbations, which involved cortical adaptations. The average decoding accuracies (Pearson’s r values) in real-time BCI across all subjects increased from (Hip: 0.18 ± 0.31; Knee: 0.23 ± 0.33; Ankle: 0.14 ± 0.22) on Day 1 to (Hip: 0.40 ± 0.24; Knee: 0.55 ± 0.20; Ankle: 0.29 ± 0.22) on Day 8. Significance. These findings have implications for the development of a real-time closed-loop EEG-based BCI-VR system for gait rehabilitation after stroke and for understanding cortical plasticity induced by a closed-loop BCI-VR system.

  11. Navigation and wayfinding in learning spaces in 3D virtual worlds

    OpenAIRE

    Minocha, Shailey; Hardy, Christopher

    2016-01-01

    There is a lack of published research on the design guidelines of learning spaces in virtual worlds. Therefore, when institutions aspire to create learning spaces in Second Life, there are few studies or guidelines to inform them except for individual case studies. The Design of Learning Spaces in 3D Virtual Environments (DELVE) project, funded by the Joint Information Systems Committee in the UK, was one of the first initiatives that identified through empirical investigations the usability ...

  12. A New Navigation System of Renal Puncture for Endoscopic Combined Intrarenal Surgery: Real-time Virtual Sonography-guided Renal Access.

    Science.gov (United States)

    Hamamoto, Shuzo; Unno, Rei; Taguchi, Kazumi; Ando, Ryosuke; Hamakawa, Takashi; Naiki, Taku; Okada, Shinsuke; Inoue, Takaaki; Okada, Atsushi; Kohri, Kenjiro; Yasui, Takahiro

    2017-11-01

    To evaluate the clinical utility of a new navigation technique for percutaneous renal puncture using real-time virtual sonography (RVS) during endoscopic combined intrarenal surgery. Thirty consecutive patients who underwent endoscopic combined intrarenal surgery for renal calculi, between April 2014 and July 2015, were divided into the RVS-guided puncture (RVS; n = 15) group and the ultrasonography-guided puncture (US; n = 15) group. In the RVS group, renal puncture was repeated until precise piercing of a papilla was achieved under direct endoscopic vision, using the RVS system to synchronize the real-time US image with the preoperative computed tomography image. In the US group, renal puncture was performed under US guidance only. In both groups, 2 urologists worked simultaneously to fragment the renal calculi after inserting the miniature percutaneous tract. The mean sizes of the renal calculi in the RVS and the US group were 33.5 and 30.5 mm, respectively. A lower mean number of puncture attempts until renal access through the calyx was needed for the RVS compared with the US group (1.6 vs 3.4 times, respectively; P = .001). The RVS group had a lower mean postoperative hemoglobin decrease (0.93 vs 1.39 g/dL, respectively; P = .04), but with no between-group differences with regard to operative time, tubeless rate, and stone-free rate. None of the patients in the RVS group experienced postoperative complications of a Clavien score ≥2, with 3 patients experiencing such complications in the US group. RVS-guided renal puncture was effective, with a lower incidence of bleeding-related complications compared with US-guided puncture. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Virtual and Printed 3D Models for Teaching Crystal Symmetry and Point Groups

    Science.gov (United States)

    Casas, Lluís; Estop, Eugènia

    2015-01-01

    Both virtual and printed 3D crystal models can help students and teachers deal with chemical education topics such as symmetry and point groups. In the present paper, two freely downloadable tools (interactive PDF files and a mobile app) are presented as examples of the application of 3D design to the study of point symmetry. The use of 3D printing to…

  14. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    International Nuclear Information System (INIS)

    Min Yugang; Santhanam, Anand; Ruddy, Bari H; Neelakkantan, Harini; Meeks, Sanford L; Kupelian, Patrick A

    2010-01-01

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the dose delivered to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan (Pinnacle Treatment Planning System, Philips) for one of the 3DCTs of the 4DCT, and predicts the amount and location of radiation dose deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical-flow algorithm. The motion of the tumor and of the surrounding tissues was simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is obtained by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.
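    The core accumulation step (summing, over discrete tumor-motion steps within a breath, the planned dose sampled at each voxel's displaced position) can be sketched in NumPy. All array shapes, names, and the nearest-voxel sampling with boundary clamping below are assumptions for illustration, not the authors' GPU implementation.

```python
import numpy as np

# Illustrative sketch of per-breath dose accumulation: the dose to each voxel
# is the dwell-time-weighted sum of the planned static dose field sampled at
# the voxel's displaced position at each discrete tumor-motion step.

def accumulate_dose(static_dose, displacements, dwell_fractions):
    """static_dose: (nx, ny, nz) planned dose for one 3DCT phase.
    displacements: (steps, nx, ny, nz, 3) integer voxel offsets per motion step.
    dwell_fractions: (steps,) fraction of beam-on time spent at each step."""
    nx, ny, nz = static_dose.shape
    ix, iy, iz = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing="ij")
    total = np.zeros_like(static_dose)
    for step, w in enumerate(dwell_fractions):
        d = displacements[step]
        # Sample the planned dose at each voxel's displaced location,
        # clamping indices to the grid boundary.
        jx = np.clip(ix + d[..., 0], 0, nx - 1)
        jy = np.clip(iy + d[..., 1], 0, ny - 1)
        jz = np.clip(iz + d[..., 2], 0, nz - 1)
        total += w * static_dose[jx, jy, jz]
    return total
```

    With zero displacements and dwell fractions summing to one, the accumulated dose reduces to the static plan, which is a useful sanity check.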

  15. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion

    Energy Technology Data Exchange (ETDEWEB)

    Min Yugang; Santhanam, Anand; Ruddy, Bari H [University of Central Florida, FL (United States); Neelakkantan, Harini; Meeks, Sanford L [M D Anderson Cancer Center Orlando, FL (United States); Kupelian, Patrick A, E-mail: anand.santhanam@orlandohealth.co [Department of Radiation Oncology, University of California, Los Angeles, CA (United States)

    2010-09-07

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the dose delivered to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan (Pinnacle Treatment Planning System, Philips) for one of the 3DCTs of the 4DCT, and predicts the amount and location of radiation dose deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical-flow algorithm. The motion of the tumor and of the surrounding tissues was simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is obtained by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  16. A GPU-based framework for modeling real-time 3D lung tumor conformal dosimetry with subject-specific lung tumor motion.

    Science.gov (United States)

    Min, Yugang; Santhanam, Anand; Neelakkantan, Harini; Ruddy, Bari H; Meeks, Sanford L; Kupelian, Patrick A

    2010-09-07

    In this paper, we present a graphics processing unit (GPU)-based simulation framework to calculate the dose delivered to a 3D moving lung tumor and its surrounding normal tissues, which are undergoing subject-specific lung deformations. The GPU-based simulation framework models the motion of the 3D volumetric lung tumor and its surrounding tissues, simulates the dose delivery using the dose extracted from a treatment plan (Pinnacle Treatment Planning System, Philips) for one of the 3DCTs of the 4DCT, and predicts the amount and location of radiation dose deposited inside the lung. The 4DCT lung datasets were registered with each other using a modified optical-flow algorithm. The motion of the tumor and of the surrounding tissues was simulated by measuring the changes in lung volume during the radiotherapy treatment using spirometry. The real-time dose delivered to the tumor for each beam is obtained by summing the dose delivered to the target volume at each increase in lung volume during the beam delivery period. The simulation results showed the real-time capability of the framework at 20 discrete tumor motion steps per breath, which is higher than the number of 4DCT steps (approximately 12) reconstructed during multiple breathing cycles.

  17. Gaze3DFix: Detecting 3D fixations with an ellipsoidal bounding volume.

    Science.gov (United States)

    Weber, Sascha; Schubert, Rebekka S; Vogt, Stefan; Velichkovsky, Boris M; Pannasch, Sebastian

    2017-10-26

    Nowadays, the use of eyetracking to determine 2D gaze positions is common practice, and several approaches to the detection of 2D fixations exist, but ready-to-use algorithms to determine eye movements in three dimensions are still missing. Here we present a dispersion-based algorithm with an ellipsoidal bounding volume that estimates 3D fixations. To this end, 3D gaze points are obtained using a vector-based approach and are further processed with our algorithm. To evaluate the accuracy of our method, we performed experimental studies with real and virtual stimuli. We obtained good congruence between stimulus position and both the 3D gaze points and the 3D fixation locations within the tested range of 200-600 mm. The mean deviation of the 3D fixations from the stimulus positions was 17 mm for both the real and the virtual stimuli, with larger variances at increasing stimulus distances. The described algorithms are implemented in two dynamic-link libraries (Gaze3D.dll and Fixation3D.dll), and we provide a graphical user interface (Gaze3DFixGUI.exe) designed for importing 2D binocular eyetracking data and calculating both 3D gaze points and 3D fixations using the libraries. The Gaze3DFix toolkit, including both libraries and the graphical user interface, is available as open-source software at https://github.com/applied-cognition-research/Gaze3DFix .
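    The dispersion-based idea with an ellipsoidal bounding volume can be sketched as follows; the function name, default semi-axes, and the running-centroid rule are assumptions for illustration, not the Gaze3DFix implementation.

```python
import numpy as np

# Illustrative sketch (not the Gaze3DFix API): a new 3D gaze point is assigned
# to the current fixation if it falls inside an ellipsoid centered on the
# running centroid, with a larger semi-axis along the depth axis, where
# vergence-based estimates are noisier.

def detect_fixations(points, semi_axes=(15.0, 15.0, 45.0), min_samples=5):
    """points: (n, 3) gaze points in mm; returns a list of fixation centroids."""
    semi = np.asarray(semi_axes)
    fixations, current = [], []
    for p in points:
        if not current:
            current.append(p)
            continue
        centroid = np.mean(current, axis=0)
        # Normalized ellipsoidal distance: <= 1 means inside the volume.
        if np.sum(((p - centroid) / semi) ** 2) <= 1.0:
            current.append(p)
        else:
            if len(current) >= min_samples:
                fixations.append(np.mean(current, axis=0))
            current = [p]
    if len(current) >= min_samples:
        fixations.append(np.mean(current, axis=0))
    return fixations
```

    Enlarging the depth semi-axis reflects the poorer precision of vergence-based depth estimates compared with the lateral gaze components.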

  18. Shared virtual environments for telerehabilitation.

    Science.gov (United States)

    Popescu, George V; Burdea, Grigore; Boian, Rares

    2002-01-01

    Current VR telerehabilitation systems use offline remote monitoring from the clinic and patient-therapist videoconferencing. Such "store and forward" and video-based systems cannot implement medical services involving direct patient-therapist interaction. Real-time telerehabilitation applications (including remote therapy) can be developed using a shared virtual environment (VE) architecture. We developed a two-user shared VE for hand telerehabilitation. Each site has a telerehabilitation workstation with a video camera and a Rutgers Master II (RMII) force-feedback glove. Each user can control a virtual hand and interact haptically with virtual objects. Simulated physical interactions between therapist and patient are implemented using hand force feedback. The therapist's graphic interface contains several virtual panels, which allow control over the rehabilitation process. These controls start a videoconferencing session, collect patient data, or apply therapy. Several experimental telerehabilitation scenarios were successfully tested on a LAN. A Web-based approach to "real-time" patient telemonitoring--the monitoring portal for hand telerehabilitation--was also developed. The therapist interface is implemented as a Java3D applet that monitors patient hand movement. The monitoring portal gives real-time performance on off-the-shelf desktop workstations.

  19. Supporting Distributed Team Working in 3D Virtual Worlds: A Case Study in Second Life

    Science.gov (United States)

    Minocha, Shailey; Morse, David R.

    2010-01-01

    Purpose: The purpose of this paper is to report on a study into how a three-dimensional (3D) virtual world (Second Life) can facilitate socialisation and team working among students working on a team project at a distance. This models the situation in many commercial sectors where work is increasingly being conducted across time zones and between…

  20. Education System Using Interactive 3D Computer Graphics (3D-CG) Animation and Scenario Language for Teaching Materials

    Science.gov (United States)

    Matsuda, Hiroshi; Shindo, Yoshiaki

    2006-01-01

    3D computer graphics (3D-CG) animation featuring a speaking virtual actor is very effective as an educational medium, but producing a 3D-CG animation takes a long time. To reduce the cost of producing 3D-CG educational contents and improve the capability of the education system, we have developed a new education system using a Virtual Actor.…

  1. An interactive three-dimensional virtual body structures system for anatomical training over the internet.

    Science.gov (United States)

    Temkin, Bharti; Acosta, Eric; Malvankar, Ameya; Vaidyanath, Sreeram

    2006-04-01

    The Visible Human digital datasets make it possible to develop computer-based anatomical training systems that use virtual anatomical models (virtual body structures-VBS). Medical schools are combining these virtual training systems with classical anatomy teaching methods that use labeled images and cadaver dissection. In this paper we present a customizable web-based three-dimensional anatomy training system, W3D-VBS. W3D-VBS uses the National Library of Medicine's (NLM) Visible Human Male datasets to interactively locate, explore, select, extract, highlight, label, and visualize realistic 2D (axial, coronal, and sagittal views) and 3D virtual structures. A real-time self-guided virtual tour of the entire body is designed to provide detailed anatomical information about structures, substructures, and proximal structures. The system thus facilitates learning of visuospatial relationships at a level of detail that may not be possible by any other means. The use of volumetric structures allows for repeated real-time virtual dissections, from any angle, at the convenience of the user. Volumetric (3D) virtual dissections are performed by adding, removing, highlighting, and labeling individual structures (and/or entire anatomical systems). The resultant virtual explorations (consisting of anatomical 2D/3D illustrations and animations), with user-selected highlighting colors and label positions, can be saved and used for generating lesson plans and evaluation systems. Tracking users' progress using the evaluation system helps customize the curriculum, making W3D-VBS a powerful learning tool. Our plan is to incorporate other Visible Human segmented datasets, especially datasets with higher resolutions, which make it possible to include finer anatomical structures such as nerves and small vessels. (c) 2006 Wiley-Liss, Inc.

  2. Study of 3D visualization of fast active reflector based on openGL and EPICS

    International Nuclear Information System (INIS)

    Luo Mingcheng; Wu Wenqing; Liu Jiajing; Tang Pengyi; Wang Jian

    2014-01-01

    The active reflector is one of the innovations of the Five-hundred-meter Aperture Spherical Telescope (FAST), and its performance influences the performance of the whole telescope. To display the complete status of the active reflector system (ARS) in real time, EPICS (Experimental Physics and Industrial Control System) is used to develop the ARS control system, and the virtual 3D technology OpenGL is used to visualize its status. Owing to the real-time performance of EPICS, the status visualization is also displayed in real time, helping users improve the efficiency of telescope observations. (authors)

  3. SUPRA: open-source software-defined ultrasound processing for real-time applications : A 2D and 3D pipeline from beamforming to B-mode.

    Science.gov (United States)

    Göbl, Rüdiger; Navab, Nassir; Hennersperger, Christoph

    2018-06-01

    Research in ultrasound imaging is limited in reproducibility by two factors: first, many existing ultrasound pipelines are protected by intellectual property, rendering the exchange of code difficult; second, most pipelines are implemented in special hardware, resulting in limited flexibility of the implemented processing steps on such platforms. With SUPRA, we propose an open-source pipeline for fully software-defined ultrasound processing for real-time applications to alleviate these problems. Covering all steps from beamforming to the output of B-mode images, SUPRA can help improve the reproducibility of results and make modifications to the image acquisition mode accessible to the research community. We evaluate the pipeline qualitatively, quantitatively, and with regard to its run time. The pipeline shows image quality comparable to that of a clinical system and, backed by point-spread-function measurements, comparable resolution. Including all processing stages of a usual ultrasound pipeline, the run-time analysis shows that it can be executed in 2D and 3D on consumer GPUs in real time. Our software ultrasound pipeline opens up research in image acquisition. Given access to ultrasound data from early stages (raw channel data, radiofrequency data), it simplifies development in imaging. Furthermore, it tackles the reproducibility of research results, as code can be shared easily and even be executed without dedicated ultrasound hardware.
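    As a flavor of the early-stage processing such a software-defined pipeline exposes, here is a minimal delay-and-sum beamformer for a single scanline. The geometry (transmit straight down, scanline at x = 0, nearest-sample delays) and all names are simplifying assumptions, not SUPRA's API.

```python
import numpy as np

# Minimal delay-and-sum beamformer sketch for one scanline: for each depth,
# compute the two-way travel time to every array element, pick the matching
# RF sample on each channel, and sum across channels.

def delay_and_sum(channel_data, element_x, fs, c, depths):
    """channel_data: (n_elements, n_samples) RF data for one transmit event.
    element_x: (n_elements,) lateral element positions (m), scanline at x = 0.
    fs: sampling rate (Hz); c: speed of sound (m/s); depths: (n_depths,) in m."""
    n_elem, n_samp = channel_data.shape
    out = np.zeros(len(depths))
    for i, z in enumerate(depths):
        # Two-way travel time: transmit straight down to depth z, receive on
        # each element along the slant path back.
        t = (z + np.sqrt(z ** 2 + element_x ** 2)) / c
        idx = np.round(t * fs).astype(int)
        valid = idx < n_samp
        out[i] = channel_data[np.arange(n_elem)[valid], idx[valid]].sum()
    return out
```

    A point scatterer placed at one of the tested depths produces coherent summation (all channels contribute in phase) only at that depth, which is the essence of receive beamforming.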

  4. Real-time tracking of visually attended objects in virtual environments and its application to LOD.

    Science.gov (United States)

    Lee, Sungkil; Kim, Gerard Jounghyun; Choi, Seungmoon

    2009-01-01

    This paper presents a real-time framework for computationally tracking objects visually attended by the user while navigating in interactive virtual environments. In addition to the conventional bottom-up (stimulus-driven) saliency map, the proposed framework uses top-down (goal-directed) contexts inferred from the user's spatial and temporal behaviors, and identifies the most plausibly attended objects among the candidates in the object saliency map. The computational framework was implemented on the GPU, exhibiting computational performance adequate for interactive virtual environments. A user experiment was also conducted to evaluate the prediction accuracy of the tracking framework by comparing the objects regarded as visually attended by the framework with actual human gaze collected using an eye tracker. The results indicated that the accuracy was at a level well supported by the theory of human cognition for visually identifying single and multiple attentive targets, especially owing to the addition of top-down contextual information. Finally, we demonstrate how the visual attention tracking framework can be applied to managing the level of detail in virtual environments without any hardware for head or eye tracking.
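    The combination of bottom-up saliency with top-down context can be pictured as a weighted score over candidate objects; the weights, field names, and linear combination below are invented for illustration and do not reproduce the paper's saliency-map computation.

```python
# Illustrative sketch: each candidate object's bottom-up saliency is modulated
# by top-down context inferred from the user's behavior, and the
# highest-scoring object is reported as the attended one.

def attended_object(objects, w_bottom_up=0.5, w_goal=0.3, w_motion=0.2):
    """objects: list of dicts with 'name', 'saliency', 'goal_relevance',
    'gaze_motion_consistency', each score in [0, 1]."""
    def score(obj):
        return (w_bottom_up * obj["saliency"]
                + w_goal * obj["goal_relevance"]
                + w_motion * obj["gaze_motion_consistency"])
    return max(objects, key=score)["name"]
```

    The point of the top-down terms is visible even in this toy: an object of moderate visual saliency that matches the user's goal and movement direction can outrank a flashier distractor.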

  5. Real-time deformation of human soft tissues: A radial basis meshless 3D model based on Marquardt's algorithm.

    Science.gov (United States)

    Zhou, Jianyong; Luo, Zu; Li, Chunquan; Deng, Mi

    2018-01-01

    When the meshless method is used to establish a mathematical-mechanical model of human soft tissues, it is necessary to define the space occupied by the tissues as the problem domain and the boundary of that domain as the tissue surface. Nodes are distributed both within the problem domain and on its boundaries. Under external force, the displacement of each node is computed by the meshless method to represent the deformation of biological soft tissues. However, computation by the meshless method consumes too much time, which hinders the real-time simulation of tissue deformation in virtual surgery. In this article, Marquardt's algorithm is proposed to fit the nodal displacements at the problem domain's boundary and obtain the relationship between surface deformation and force. When different external forces are applied, the deformation of soft tissues can then be obtained quickly from this relationship. The analysis and discussion show that the model equations improved with Marquardt's algorithm not only simulate the deformation in real time but also preserve the authenticity of the deformation model's physical properties. Copyright © 2017 Elsevier B.V. All rights reserved.
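    The overall idea (fit an inexpensive parametric displacement-force relationship offline, then evaluate it in real time) can be sketched with a small Levenberg-Marquardt loop. The exponential-saturation model u = a(1 - exp(-bF)) and all names are assumptions for illustration, not the paper's model.

```python
import numpy as np

# Sketch: (force, displacement) pairs precomputed by the meshless solver are
# fitted with a two-parameter model using a Marquardt-damped Gauss-Newton
# iteration, so the deformation under a new force is a cheap closed-form call.

def lm_fit(F, u, a0=1.0, b0=1.0, n_iter=50, lam=1e-2):
    a, b = a0, b0
    for _ in range(n_iter):
        pred = a * (1.0 - np.exp(-b * F))
        r = u - pred                       # residuals
        # Jacobian of the model with respect to (a, b)
        J = np.column_stack([1.0 - np.exp(-b * F), a * F * np.exp(-b * F)])
        JtJ = J.T @ J
        # Marquardt damping on the diagonal
        step = np.linalg.solve(JtJ + lam * np.diag(np.diag(JtJ)), J.T @ r)
        new_a, new_b = a + step[0], b + step[1]
        new_r = u - new_a * (1.0 - np.exp(-new_b * F))
        if np.sum(new_r ** 2) < np.sum(r ** 2):
            a, b, lam = new_a, new_b, lam * 0.5   # accept step, damp less
        else:
            lam *= 2.0                            # reject step, damp more
    return a, b
```

    Once (a, b) are fitted, evaluating the surface displacement for a new force is a single expression, which is what makes the real-time use plausible.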

  6. Towards the development of a 3D digital city model as a real extension of public urban spaces

    DEFF Research Database (Denmark)

    Tournay, Bruno

    ; it only serves as a tool in the analogue world. The model is a passive picture for contemplation.   Another way of looking at a digital 3D model is to see it not as a virtual model of reality but as a real model that must fulfil real functions and to design it as a space of transition between the local...... new approaches to communication and participation. Who controls the Electronic Neighbourhood? Just as in the analogue world, control of central places in the digital world is power.   Finally, based on the experience gained in relation to the project, the paper will outline some guidelines for better...

  7. Real-time 3D visualization of cellular rearrangements during cardiac valve formation.

    Science.gov (United States)

    Pestel, Jenny; Ramadass, Radhan; Gauvrit, Sebastien; Helker, Christian; Herzog, Wiebke; Stainier, Didier Y R

    2016-06-15

    During cardiac valve development, the single-layered endocardial sheet at the atrioventricular canal (AVC) is remodeled into multilayered immature valve leaflets. Most of our knowledge about this process comes from examining fixed samples that do not allow a real-time appreciation of the intricacies of valve formation. Here, we exploit non-invasive in vivo imaging techniques to identify the dynamic cell behaviors that lead to the formation of the immature valve leaflets. We find that in zebrafish, the valve leaflets consist of two sets of endocardial cells at the luminal and abluminal side, which we refer to as luminal cells (LCs) and abluminal cells (ALCs), respectively. By analyzing cellular rearrangements during valve formation, we observed that the LCs and ALCs originate from the atrium and ventricle, respectively. Furthermore, we utilized Wnt/β-catenin and Notch signaling reporter lines to distinguish between the LCs and ALCs, and also found that cardiac contractility and/or blood flow is necessary for the endocardial expression of these signaling reporters. Thus, our 3D analyses of cardiac valve formation in zebrafish provide fundamental insights into the cellular rearrangements underlying this process. © 2016. Published by The Company of Biologists Ltd.

  8. SU-G-BRA-01: A Real-Time Tumor Localization and Guidance Platform for Radiotherapy Using US and MRI

    International Nuclear Information System (INIS)

    Bednarz, B; Culberson, W; Bassetti, M; McMillan, A; Matrosic, C; Shepard, A; Zagzebski, J; Smith, S; Lee, W; Mills, D; Cao, K; Wang, B; Fiveland, E; Darrow, R; Foo, T

    2016-01-01

    Purpose: To develop and validate a real-time motion management platform for radiotherapy that directly tracks tumor motion using ultrasound and MRI. This will be a cost-effective and non-invasive real-time platform combining the excellent temporal resolution of ultrasound with the excellent soft-tissue contrast of MRI. Methods: A 4D planar ultrasound acquisition during treatment is coupled to a pre-treatment calibration training image set consisting of a simultaneous 4D ultrasound and 4D MRI acquisition. The image sets will be rapidly matched using advanced image and signal processing algorithms, allowing the display of virtual MR images of the tumor/organ motion in real time from an ultrasound acquisition. Results: The completion of this work will result in several innovations, including: a (2D) patch-like, MR- and LINAC-compatible 4D planar ultrasound transducer that is electronically steerable for hands-free operation, providing real-time virtual MR and ultrasound imaging for motion management during radiation therapy; a multimodal tumor localization strategy that uses ultrasound and MRI; and fast, accurate image processing algorithms that provide real-time information about the motion and location of the tumor or related soft-tissue structures within the patient. Conclusion: If successful, the proposed approach will provide real-time guidance for radiation therapy without degrading image or treatment plan quality. The approach would be equally suitable for image-guided proton-beam or heavy-ion-beam therapy. This work is partially funded by NIH grant R01CA190298.
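    One simple way to picture the matching step (an assumption here, not the authors' algorithm) is nearest-neighbor retrieval: each live ultrasound frame is scored against the pre-treatment training ultrasound frames, and the MR volume acquired simultaneously with the best match is displayed as the virtual MR view.

```python
import numpy as np

# Illustrative retrieval sketch: score the live ultrasound frame against each
# training ultrasound frame by normalized cross-correlation (NCC), then return
# the MR volume paired with the best-matching training frame.

def best_virtual_mr(live_us, training_us, training_mr):
    """live_us: (h, w) frame; training_us: (n, h, w); training_mr: (n, ...)."""
    def ncc(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return np.mean(a * b)
    scores = np.array([ncc(live_us, f) for f in training_us])
    return training_mr[int(np.argmax(scores))]
```

    NCC is invariant to gain and offset changes, which is one reason intensity-based similarity scores are a common starting point for ultrasound frame matching.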

  9. SU-G-BRA-01: A Real-Time Tumor Localization and Guidance Platform for Radiotherapy Using US and MRI

    Energy Technology Data Exchange (ETDEWEB)

    Bednarz, B; Culberson, W; Bassetti, M; McMillan, A; Matrosic, C; Shepard, A; Zagzebski, J [University of Wisconsin, Madison, WI (United States); Smith, S; Lee, W; Mills, D; Cao, K; Wang, B; Fiveland, E; Darrow, R; Foo, T [GE Global Research Center, Niskayuna, NY (United States)

    2016-06-15

    Purpose: To develop and validate a real-time motion management platform for radiotherapy that directly tracks tumor motion using ultrasound and MRI. This will be a cost-effective and non-invasive real-time platform combining the excellent temporal resolution of ultrasound with the excellent soft-tissue contrast of MRI. Methods: A 4D planar ultrasound acquisition during treatment is coupled to a pre-treatment calibration training image set consisting of a simultaneous 4D ultrasound and 4D MRI acquisition. The image sets will be rapidly matched using advanced image and signal processing algorithms, allowing the display of virtual MR images of the tumor/organ motion in real time from an ultrasound acquisition. Results: The completion of this work will result in several innovations, including: a (2D) patch-like, MR- and LINAC-compatible 4D planar ultrasound transducer that is electronically steerable for hands-free operation, providing real-time virtual MR and ultrasound imaging for motion management during radiation therapy; a multimodal tumor localization strategy that uses ultrasound and MRI; and fast, accurate image processing algorithms that provide real-time information about the motion and location of the tumor or related soft-tissue structures within the patient. Conclusion: If successful, the proposed approach will provide real-time guidance for radiation therapy without degrading image or treatment plan quality. The approach would be equally suitable for image-guided proton-beam or heavy-ion-beam therapy. This work is partially funded by NIH grant R01CA190298.

  10. Real Time Monitor of Grid job executions

    International Nuclear Information System (INIS)

    Colling, D J; Martyniak, J; McGough, A S; Krenek, A; Sitera, J; Mulac, M; Dvorak, F

    2010-01-01

    In this paper we describe the architecture and operation of the Real Time Monitor (RTM), developed by the Grid team in the HEP group at Imperial College London. This is arguably the most popular dissemination tool within the EGEE [1] Grid, having been used on many occasions, including the GridFest and LHC inauguration events held at CERN in October 2008. The RTM gathers information from EGEE sites hosting Logging and Bookkeeping (LB) services. Information is cached locally at a dedicated server at Imperial College London and made available for clients to use in near real time. The system consists of three main components: the RTM server, the enquirer, and an Apache web server that is queried by clients. The RTM server queries the LB servers at fixed time intervals, collecting job-related information and storing it in a local database. Job-related data include not only job state (i.e., Scheduled, Waiting, Running, or Done) along with timing information, but also other attributes such as Virtual Organization and Computing Element (CE) queue, if known. The job data stored in the RTM database are read by the enquirer every minute and converted to an XML format that is stored on the web server. This decouples the RTM server database from the clients, removing the bottleneck caused by many clients simultaneously accessing the database. The information can be visualized through either a 2D or a 3D Java-based client, with live job data either overlaid on a two-dimensional map of the world or rendered in three dimensions over a globe using OpenGL.
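    The enquirer's database-to-XML step might look like the following sketch; the element and attribute names are invented, since the RTM's actual schema is not given here.

```python
import xml.etree.ElementTree as ET

# Illustrative sketch of the enquirer's role: job records read from the RTM
# database are serialized once a minute into an XML snapshot that web clients
# poll, decoupling the clients from the database.

def jobs_to_xml(jobs):
    """jobs: list of dicts with 'id', 'state', 'vo', 'ce'."""
    root = ET.Element("rtm_snapshot")
    for job in jobs:
        e = ET.SubElement(root, "job", id=str(job["id"]))
        ET.SubElement(e, "state").text = job["state"]
        ET.SubElement(e, "vo").text = job["vo"]
        ET.SubElement(e, "ce").text = job["ce"]
    return ET.tostring(root, encoding="unicode")
```

    Serving a periodically regenerated static document, rather than letting every client query the database, is the design choice the abstract highlights for avoiding the access bottleneck.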

  11. A 3D character animation engine for multimodal interaction on mobile devices

    Science.gov (United States)

    Sandali, Enrico; Lavagetto, Fabio; Pisano, Paolo

    2005-03-01

    Talking virtual characters are graphical simulations of real or imaginary persons that enable natural and pleasant multimodal interaction with the user by means of voice, eye gaze, facial expression, and gestures. This paper presents an implementation of a 3D virtual character animation and rendering engine, compliant with the MPEG-4 standard, running on Symbian-based smartphones. Real-time animation of virtual characters on mobile devices represents a challenging task, since many limitations must be taken into account with respect to processing power, graphics capabilities, disk space, and execution memory size. The proposed optimization techniques make it possible to overcome these issues, guaranteeing smooth and synchronous animation of facial expressions and lip movements on mobile phones such as Sony Ericsson's P800 and Nokia's 6600. The animation engine is specifically targeted at the development of new "over the air" services based on embodied conversational agents, with applications in entertainment (interactive story tellers), navigation aid (virtual guides to web sites and mobile services), news casting (virtual newscasters), and education (interactive virtual teachers).

  12. Light cones in relativity: Real, complex, and virtual, with applications

    International Nuclear Information System (INIS)

    Adamo, T. M.; Newman, E. T.

    2011-01-01

    We study geometric structures associated with shear-free null geodesic congruences in Minkowski space-time and asymptotically shear-free null geodesic congruences in asymptotically flat space-times. We show how, in both the flat and asymptotically flat settings, complexified future null infinity I_C^+ acts as a "holographic screen," interpolating between two dual descriptions of the null geodesic congruence. One description constructs a complex null geodesic congruence in a complex space-time whose source is a complex worldline, a virtual source as viewed from the holographic screen. This complex null geodesic congruence intersects the real asymptotic boundary when its source lies on a particular open-string type structure in the complex space-time. The other description constructs a real, twisting, shear-free or asymptotically shear-free null geodesic congruence in the real space-time, whose source (at least in Minkowski space) is in general a closed-string structure: the caustic set of the congruence. Finally we show that virtually all of the interior space-time physical quantities that are identified at null infinity I^+ (center of mass, spin, angular momentum, linear momentum, and force) are given kinematic meaning and dynamical descriptions in terms of the complex worldline.

  13. Real-time 2D/3D registration using kV-MV image pairs for tumor motion tracking in image guided radiotherapy.

    Science.gov (United States)

    Furtado, Hugo; Steiner, Elisabeth; Stock, Markus; Georg, Dietmar; Birkfellner, Wolfgang

    2013-10-01

    Intra-fractional respiratory motion during radiotherapy leads to a larger planning target volume (PTV). Real-time tumor motion tracking by two-dimensional (2D)/3D registration using on-board kilo-voltage (kV) imaging can allow for a reduction of the PTV, though motion along the imaging beam axis cannot be resolved using only one projection image. We present a retrospective patient study investigating the impact of paired portal mega-voltage (MV) and kV images on registration accuracy. Material and methods. We used data from 10 patients suffering from non-small cell lung cancer (NSCLC) undergoing stereotactic body radiation therapy (SBRT) lung treatment. For each patient we acquired a planning computed tomography (CT) scan and sequences of kV and MV images during treatment. We compared the accuracy of motion tracking in six degrees of freedom (DOF) using either the anterior-posterior (AP) kV sequence or the sequence of kV-MV image pairs. Results. Motion along the cranial-caudal direction could be extracted accurately using only the kV sequence, but in the AP direction we obtained large errors. When using kV-MV pairs, the average error was reduced from 2.9 mm to 1.5 mm and the motion along AP was successfully extracted. Mean registration time was 188 ms. Conclusion. Our evaluation shows that using kV-MV image pairs leads to improved motion extraction in six DOF and is suitable for real-time tumor motion tracking with a conventional LINAC.
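    The geometric reason a second, orthogonal view helps can be shown with a deliberately simplified parallel-beam sketch: the AP kV image observes lateral and cranial-caudal motion, the orthogonal MV image supplies the anterior-posterior component, and the cranial-caudal component seen by both views can be averaged. This is a toy illustration of the observability argument, not the paper's 2D/3D registration method.

```python
import numpy as np

# Toy parallel-beam sketch: with a single AP kV image, motion along the beam
# axis (AP) is unobservable; an orthogonal lateral MV image supplies it.

def motion_from_projections(kv_shift, mv_shift):
    """kv_shift: (lateral, cranio_caudal) from the AP kV image, in mm.
    mv_shift: (ap, cranio_caudal) from an orthogonal lateral MV image, in mm.
    Returns (lateral, ap, cc), averaging the CC component seen by both views."""
    lat, cc_kv = kv_shift
    ap, cc_mv = mv_shift
    return np.array([lat, ap, 0.5 * (cc_kv + cc_mv)])
```

    The redundancy in the shared cranial-caudal component is also what lets a registration algorithm cross-check the two views against each other.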

  14. Programming applications with dSPACE real-time tools

    DEFF Research Database (Denmark)

    Voigt, Kristian

    1998-01-01

    in real time directly on a physical system. The system is regulated by means of a DSP and I/O board. A model of the system is built using Matlab/Simulink from The Mathworks. The model is translated into C code using Real-Time Workshop from The Mathworks. To make the C code hardware-specific, use is made of...... software from the hardware vendor, dSPACE. dSPACE has also supplied software for monitoring real-time values in the system, and software for changing model parameters in real time. Appendix F of the user guide contains an example that goes through the entire procedure, from starting the programs...

  15. Scala for Real-Time Systems?

    DEFF Research Database (Denmark)

    Schoeberl, Martin

    2015-01-01

    Java has served well as a general-purpose language. However, during its two decades of constant change it has accumulated some weight and legacy in the language syntax and the libraries. Furthermore, Java's success for real-time systems is mediocre. Scala is a modern object-oriented and functional language with interesting new features. Although a new language, it executes on a Java virtual machine, reusing that technology. This paper explores Scala as a language for future real-time systems.

  16. 3D-e-Chem-VM: Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine

    NARCIS (Netherlands)

    McGuire, R.; Verhoeven, S.; Vass, M.; Vriend, G.; Esch, I.J. de; Lusher, S.J.; Leurs, R.; Ridder, L.; Kooistra, A.J.; Ritschel, T.; Graaf, C. de

    2017-01-01

    3D-e-Chem-VM is an open source, freely available Virtual Machine ( http://3d-e-chem.github.io/3D-e-Chem-VM/ ) that integrates cheminformatics and bioinformatics tools for the analysis of protein-ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools

  17. 3D-e-Chem-VM : Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine

    NARCIS (Netherlands)

    McGuire, Ross; Verhoeven, Stefan; Vass, Márton; Vriend, Gerrit; De Esch, Iwan J P; Lusher, Scott J.; Leurs, Rob; Ridder, Lars; Kooistra, Albert J.; Ritschel, Tina; de Graaf, C.

    2017-01-01

    3D-e-Chem-VM is an open source, freely available Virtual Machine ( http://3d-e-chem.github.io/3D-e-Chem-VM/ ) that integrates cheminformatics and bioinformatics tools for the analysis of protein-ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools

  18. Virtual decoupling flight control via real-time trajectory synthesis and tracking

    Science.gov (United States)

    Zhang, Xuefu

    The production of the General Aviation industry has declined in the past 25 years. Ironically, however, the increasing demand for air travel as a fast, safe, and high-quality mode of transportation has been far from satisfied. Addressing this demand shortfall with personal air transportation necessitates advanced systems for navigation, guidance, control, flight management, and flight traffic control. Among them, an effective decoupling flight control system will not only improve flight quality, safety, and simplicity, and increase air space usage, but also reduce expenses for initial and recurrent pilot training, and thus expand the current market and explore new markets. Because of the formidable difficulties encountered in the actual decoupling of non-linear, time-variant, and highly coupled flight control systems through traditional approaches, a new approach, which essentially converts the decoupling problem into a real-time trajectory synthesis and tracking problem, is employed. The converted problem is then solved and a virtual decoupling effect is achieved. In this approach, a trajectory in inertial space can be predefined and dynamically modified based on the flight mission and the pilot's commands. A feedforward-feedback control architecture is constructed to guide the airplane along the trajectory as precisely as possible. Through this approach, the pilot has much simpler, virtually decoupled control of the airplane in terms of speed, flight path angle, and horizontal radius of curvature. To verify and evaluate this approach, extensive computer simulation was performed, with a large number of test cases designed for flight control under different flight conditions. The simulation results show that the decoupling strategy is satisfactory and promising, and the research can therefore serve as a consolidated foundation for future practical applications.
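    The feedforward-feedback tracking idea described in this abstract can be sketched numerically. The following is an illustrative toy, not the dissertation's controller: a 1D double-integrator stand-in for a single aircraft axis tracks a predefined sinusoidal trajectory, using the reference acceleration as feedforward plus a PD feedback correction. The gains, trajectory, and plant model are all assumptions made for the sketch.

    ```python
    # Toy feedforward-feedback trajectory tracking (illustrative only):
    # a 1D double integrator follows a predefined reference trajectory.
    import math

    def simulate(kp=25.0, kd=10.0, dt=0.01, t_end=10.0):
        ref = lambda t: math.sin(0.5 * t)            # predefined trajectory
        ref_v = lambda t: 0.5 * math.cos(0.5 * t)    # reference velocity
        ref_a = lambda t: -0.25 * math.sin(0.5 * t)  # reference acceleration (feedforward)
        x, v = 0.3, 0.0                              # plant starts off-trajectory
        max_err, t = 0.0, 0.0
        while t < t_end:
            # control = feedforward + PD feedback on position/velocity error
            u = ref_a(t) + kp * (ref(t) - x) + kd * (ref_v(t) - v)
            v += u * dt                              # semi-implicit Euler step
            x += v * dt
            if t > 5.0:                              # measure after transients settle
                max_err = max(max_err, abs(ref(t) - x))
            t += dt
        return max_err

    print(simulate())  # residual tracking error after the transient decays
    ```

    With the feedforward term supplying the reference acceleration, the feedback only has to correct small deviations, which is the point of the architecture described above.
    
    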

  19. 3D Mapping for Urban and Regional Planning

    DEFF Research Database (Denmark)

    Bodum, Lars

    2002-01-01

    The process of mapping in 3D for urban and regional planning purposes is not an uncomplicated matter. It involves both the construction of a new data-model and new routines for the geometric modeling of the physical objects. This is due to the fact that most of the documentation until now has been...... registered and georeferenced to the 2D plan. This paper will outline a new method for 3D mapping where new LIDAR (laser-scanning) technology and additional 2D maps with attributes will be combined to create a 3D map of an urban area. The 3D map will afterwards be used in a real-time simulation system (also...... known as Virtual Reality system) for urban and regional planning purposes. This initiative will be implemented in a specific geographic region (North Jutland County in Denmark) by a new research centre at Aalborg University called Centre for 3D GeoInformation. The key question for this research team...

  20. Lead-oriented synthesis: Investigation of organolithium-mediated routes to 3-D scaffolds and 3-D shape analysis of a virtual lead-like library.

    Science.gov (United States)

    Lüthy, Monique; Wheldon, Mary C; Haji-Cheteh, Chehasnah; Atobe, Masakazu; Bond, Paul S; O'Brien, Peter; Hubbard, Roderick E; Fairlamb, Ian J S

    2015-06-01

    Synthetic routes to six 3-D scaffolds containing piperazine, pyrrolidine and piperidine cores have been developed. The synthetic methodology focused on the use of N-Boc α-lithiation-trapping chemistry. Notably, suitably protected and/or functionalised medicinal chemistry building blocks were synthesised via concise, connective methodology. This represents a rare example of lead-oriented synthesis. A virtual library of 190 compounds was then enumerated from the six scaffolds. Of these, 92 compounds (48%) fit the lead-like criteria of: (i) -1 ⩽ AlogP ⩽ 3; (ii) 14 ⩽ number of heavy atoms ⩽ 26; (iii) total polar surface area ⩾ 50 Å². The 3-D shapes of the 190 compounds were analysed using a triangular plot of normalised principal moments of inertia (PMI). From this, 46 compounds were identified which had lead-like properties and possessed 3-D shapes in under-represented areas of pharmaceutical space. Thus, the PMI analysis of the 190-member virtual library showed that whilst the scaffolds may appear on paper to be 3-D in shape, only 24% of the compounds actually had 3-D structures in the more interesting areas of 3-D drug space. Copyright © 2015 Elsevier Ltd. All rights reserved.
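    The two analyses quoted in this abstract can be sketched in code. This is an illustrative reconstruction, not the authors' workflow: the lead-like filter uses exactly the criteria stated above, while the normalised-PMI (NPR) calculation is the standard one used for PMI triangle plots, computed here for unit-mass point sets whose principal axes are assumed aligned with x, y, z so no eigen-decomposition is needed. The descriptor values and coordinates are made-up examples.

    ```python
    # Sketch of (i) the lead-like filter and (ii) normalised PMI ratios
    # for rod/disc/sphere shape classification (illustrative only).

    def is_lead_like(alogp, heavy_atoms, tpsa):
        """Lead-like criteria as quoted in the abstract."""
        return -1 <= alogp <= 3 and 14 <= heavy_atoms <= 26 and tpsa >= 50.0

    def npr(points):
        """Normalised PMI ratios (I1/I3, I2/I3) for unit-mass points.
        Assumes principal axes are aligned with the coordinate axes."""
        n = len(points)
        cx = sum(p[0] for p in points) / n
        cy = sum(p[1] for p in points) / n
        cz = sum(p[2] for p in points) / n
        ixx = sum((p[1] - cy) ** 2 + (p[2] - cz) ** 2 for p in points)
        iyy = sum((p[0] - cx) ** 2 + (p[2] - cz) ** 2 for p in points)
        izz = sum((p[0] - cx) ** 2 + (p[1] - cy) ** 2 for p in points)
        i1, i2, i3 = sorted([ixx, iyy, izz])
        return i1 / i3, i2 / i3

    rod = [(-1, 0, 0), (1, 0, 0)]                            # ideal rod: NPR (0, 1)
    disc = [(1, 1, 0), (1, -1, 0), (-1, 1, 0), (-1, -1, 0)]  # ideal disc: NPR (0.5, 0.5)
    print(is_lead_like(1.2, 20, 62.0), npr(rod), npr(disc))
    ```

    On a PMI triangle, the corners (0, 1), (0.5, 0.5) and (1, 1) correspond to rod, disc and sphere; "3-D" compounds sit away from the rod-disc edge.
    
    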

  1. Real economy versus virtual economy - New challenges for nowadays society

    Directory of Open Access Journals (Sweden)

    Associate Professor Dr. Veronica Adriana Popescu

    2011-05-01

    Full Text Available In the paper Real Economy versus Virtual Economy – New Challenges for Nowadays Society our goal is to present the importance of both the real economy and the virtual economy. At the beginning of our research, we present the main views of some specialists concerning both the virtual and the real economy. After that, we compare the two types of economies and stress the most important aspects connected to them. The main reason why we have decided to approach this complex subject is the increasing interest in virtual economy matters and the relation that this particular type of economy develops with the real economy.

  2. Innovative application of virtual display technique in virtual museum

    Science.gov (United States)

    Zhang, Jiankang

    2017-09-01

    Virtual museum refers to displaying and simulating the functions of a real museum on the Internet in the form of 3 Dimensions virtual reality by applying interactive programs. Based on the Virtual Reality Modeling Language, building a virtual museum and its effective interaction with the offline museum lie in making full use of the 3 Dimensions panorama technique, the virtual reality technique and the augmented reality technique, and innovatively taking advantage of dynamic environment modeling, real-time 3 Dimensions graphics generation, system integration and other key virtual reality techniques to achieve the overall design of the virtual museum. The 3 Dimensions panorama technique, also known as panoramic photography or virtual reality, is a technique based on static images of reality. The virtual reality technique is a kind of computer simulation system which can create, and let users experience, an interactive 3 Dimensions dynamic visual world. Augmented reality, also known as mixed reality, is a technique which simulates and mixes information (visual, sound, taste, touch, etc.) that is difficult for humans to experience in reality. These technologies make the virtual museum possible. It will not only bring a better experience and convenience to the public, but also help improve the influence and cultural functions of the real museum.

  3. 3D virtual facilities with interactive instructions for nuclear education and training

    International Nuclear Information System (INIS)

    Satoh, Yoshinori; Li, Ye; Zhu, Yuefeng; Rizwan-uddin

    2015-01-01

    Efficient and effective education and training of nuclear engineering students and future operators are critical for the safe operation and maintenance of nuclear power plants. Students and future operators used to receive some of the education and training at university laboratories and research reactors. With many university research reactors now shutdown, both students and future operators are deprived of this valuable training source. With an eye toward this need and to take advantage of recent developments in human machine interface technologies, we have focused on the development of 3D virtual laboratories for nuclear engineering education and training as well as to conduct virtual experiments. These virtual laboratories are expected to supplement currently available resources and education and training experiences. Recent focus is on adding interactivity and physics models to allow trainees to conduct virtual experiments. This paper reports some recent extensions to our virtual nuclear education laboratory and research reactor laboratory. These include head mounted display as well as hand tracking devices for virtual operations. (author)

  4. Real-time photorealistic stereoscopic rendering of fire

    Science.gov (United States)

    Rose, Benjamin M.; McAllister, David F.

    2007-02-01

    We propose a method for real-time photorealistic stereo rendering of the natural phenomenon of fire. Applications include the use of virtual reality in fire fighting, military training, and entertainment. Rendering fire in real-time presents a challenge because of the transparency and non-static fluid-like behavior of fire. It is well known that, in general, methods that are effective for monoscopic rendering are not necessarily easily extended to stereo rendering because monoscopic methods often do not provide the depth information necessary to produce the parallax required for binocular disparity in stereoscopic rendering. We investigate the existing techniques used for monoscopic rendering of fire and discuss their suitability for extension to real-time stereo rendering. Methods include the use of precomputed textures, dynamic generation of textures, and rendering models resulting from the approximation of solutions of fluid dynamics equations through the use of ray-tracing algorithms. We have found that in order to attain real-time frame rates, our method based on billboarding is effective. Slicing is used to simulate depth. Texture mapping or 2D images are mapped onto polygons and alpha blending is used to treat transparency. We can use video recordings or prerendered high-quality images of fire as textures to attain photorealistic stereo.
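    The billboarding-with-slicing approach described above relies on the standard back-to-front "over" compositing of semi-transparent texture layers. The following is a minimal single-pixel sketch of that blending step, not the paper's renderer; colours are single-channel floats and the slice values are made up.

    ```python
    # Back-to-front alpha blending ("over" operator) of billboard slices,
    # as used when slicing a fire volume into semi-transparent textures.

    def over(dst, src_color, src_alpha):
        """Composite one slice onto the framebuffer value behind it."""
        return src_color * src_alpha + dst * (1.0 - src_alpha)

    def composite(slices, background=0.0):
        """slices: list of (color, alpha), ordered farthest-to-nearest."""
        out = background
        for color, alpha in slices:
            out = over(out, color, alpha)
        return out

    # Three fire-texture slices sampled at one pixel (made-up values):
    pixel = composite([(0.9, 0.2), (0.8, 0.3), (1.0, 0.5)])
    print(pixel)
    ```

    For stereo rendering, the same slices are composited from two eye positions; the slice depths supply the parallax that a single flat billboard lacks.
    
    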

  5. Ray-based approach to integrated 3D visual communication

    Science.gov (United States)

    Naemura, Takeshi; Harashima, Hiroshi

    2001-02-01

    For a high sense of reality in next-generation communications, it is very important to realize three-dimensional (3D) spatial media, instead of existing 2D image media. In order to comprehensively deal with a variety of 3D visual data formats, the authors first introduce the concept of "Integrated 3D Visual Communication," which reflects the necessity of developing a neutral representation method independent of input/output systems. The following discussion then concentrates on the ray-based approach to this concept, in which any visual sensation is considered to be derived from a set of light rays. This approach is a simple and straightforward solution to the problem of how to represent 3D space, an issue shared by various fields including 3D image communications, computer graphics, and virtual reality. This paper mainly presents several developments in this approach, including efficient methods of representing ray data, a real-time video-based rendering system, an interactive rendering system based on integral photography, a concept of a virtual object surface for the compression of the tremendous amount of data, and a light-ray capturing system using a telecentric lens. Experimental results demonstrate the effectiveness of the proposed techniques.

  6. Virtual reality application for simulating and minimizing worker radiation exposure

    International Nuclear Information System (INIS)

    Kang, Ki Doo; Hajek, Brian K.; Lee, Yon Sik; Shin, Yoo Jin

    2004-01-01

    To plan work and preclude unexpected radiation exposures in a nuclear power plant, a virtual nuclear plant is a good solution. For this, there are prerequisites such as displaying real-time radiation exposure data on an avatar and preventing the speed reduction caused by multiple users on the net-based system. The work space is divided into several sections and radiation information is extracted section by section. Based on the simulation algorithm, real-time processing is applied to the events and movements of the avatar. Because there are millions of parts in a nuclear power plant, it is almost impossible to model all of them. Several parts of the virtual plant have been modeled using 3D internet virtual reality for the model development. Optimum one-click Active-X is applied for the system, which provides easy access to the virtual plant. Connection time on the net is 20-30 sec for initial loading and 3-4 sec for the 2nd and subsequent times

  7. An Interactive 3D Virtual Anatomy Puzzle for Learning and Simulation - Initial Demonstration and Evaluation.

    Science.gov (United States)

    Messier, Erik; Wilcox, Jascha; Dawson-Elli, Alexander; Diaz, Gabriel; Linte, Cristian A

    2016-01-01

    To inspire young students (grades 6-12) to become medical practitioners and biomedical engineers, it is necessary to expose them to key concepts of the field in a way that is both exciting and informative. Recent advances in medical image acquisition, manipulation, processing, visualization, and display have revolutionized the approach in which the human body and internal anatomy can be seen and studied. It is now possible to collect 3D, 4D, and 5D medical images of patient-specific data, and display that data to the end user using consumer-level 3D stereoscopic display technology. Despite such advancements, traditional 2D modes of content presentation such as textbooks and slides are still the standard didactic equipment used to teach young students anatomy. More sophisticated methods of display can help to elucidate the complex 3D relationships between structures that are so often missed when viewing only 2D media, and can instill in students an appreciation for the interconnection between medicine and technology. Here we describe the design, implementation, and preliminary evaluation of a 3D virtual anatomy puzzle dedicated to helping users learn the anatomy of various organs and systems by manipulating 3D virtual data. The puzzle currently comprises several components of the human anatomy and can be easily extended to include additional organs and systems. The 3D virtual anatomy puzzle game was implemented and piloted using three display paradigms - a traditional 2D monitor, a 3D TV with active shutter glasses, and the DK2 version of the Oculus Rift - as well as two different user interaction devices - a space mouse and traditional keyboard controls.

  8. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV-kV imaging.

    Science.gov (United States)

    Liu, W; Wiersma, R D; Mao, W; Luxton, G; Xing, L

    2008-12-21

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from approximately 0.5 mm for the normal adult breathing pattern to approximately 1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general
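    The MV-kV triangulation described above back-projects the fiducial's pixel position in each image into a 3D ray and estimates the marker position from the two rays. The calibration that maps pixels to rays is the paper's contribution; the sketch below shows only the final geometric step, under idealized ray geometry with made-up numbers: the 3D estimate is the midpoint of the rays' closest approach.

    ```python
    # Midpoint-of-closest-approach triangulation of two back-projected rays
    # (the geometric core of MV-kV fiducial localization; values illustrative).

    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def add_scaled(p, d, t): return tuple(pi + t * di for pi, di in zip(p, d))

    def triangulate(p1, d1, p2, d2):
        """Midpoint of closest approach of rays p1 + t*d1 and p2 + s*d2."""
        r = sub(p1, p2)
        a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
        d, e = dot(d1, r), dot(d2, r)
        denom = a * c - b * b            # zero only for parallel rays
        t = (b * e - c * d) / denom
        s = (a * e - b * d) / denom
        q1 = add_scaled(p1, d1, t)       # closest point on ray 1
        q2 = add_scaled(p2, d2, s)       # closest point on ray 2
        return tuple((x + y) / 2 for x, y in zip(q1, q2))

    # Two orthogonal "imager" rays that both pass through the point (1, 2, 3):
    print(triangulate((0, 2, 3), (1, 0, 0), (1, 2, 0), (0, 0, 1)))
    ```

    With noisy real projections the rays do not intersect exactly, which is why the midpoint (rather than an intersection) is used; calibration errors show up directly as a gap between the two closest points.
    
    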

  9. Real-time 3D internal marker tracking during arc radiotherapy by the use of combined MV-kV imaging

    International Nuclear Information System (INIS)

    Liu, W; Wiersma, R D; Mao, W; Luxton, G; Xing, L

    2008-01-01

    To minimize the adverse dosimetric effect caused by tumor motion, it is desirable to have real-time knowledge of the tumor position throughout the beam delivery process. A promising technique to realize the real-time image guided scheme in external beam radiation therapy is through the combined use of MV and onboard kV beam imaging. The success of this MV-kV triangulation approach for fixed-gantry radiation therapy has been demonstrated. With the increasing acceptance of modern arc radiotherapy in the clinics, a timely and clinically important question is whether the image guidance strategy can be extended to arc therapy to provide the urgently needed real-time tumor motion information. While conceptually feasible, there are a number of theoretical and practical issues specific to the arc delivery that need to be resolved before clinical implementation. The purpose of this work is to establish a robust procedure of system calibration for combined MV and kV imaging for internal marker tracking during arc delivery and to demonstrate the feasibility and accuracy of the technique. A commercially available LINAC equipped with an onboard kV imager and electronic portal imaging device (EPID) was used for the study. A custom built phantom with multiple ball bearings was used to calibrate the stereoscopic MV-kV imaging system to provide the transformation parameters from imaging pixels to 3D world coordinates. The accuracy of the fiducial tracking system was examined using a 4D motion phantom capable of moving in accordance with a pre-programmed trajectory. Overall, spatial accuracy of MV-kV fiducial tracking during the arc delivery process for normal adult breathing amplitude and period was found to be better than 1 mm. For fast motion, the results depended on the imaging frame rates. The RMS error ranged from ∼0.5 mm for the normal adult breathing pattern to ∼1.5 mm for more extreme cases with a low imaging frame rate of 3.4 Hz. In general, highly accurate real-time

  10. A 3D virtual plant-modelling study : Tillering in spring wheat

    NARCIS (Netherlands)

    Evers, J.B.; Vos, J.

    2007-01-01

    Tillering in wheat (Triticum aestivum L.) is influenced by both light intensity and the ratio between the intensities of red and far-red light. The relationships between canopy architecture, light properties within the canopy, and tillering in spring-wheat plants were studied using a 3D virtual

  11. A point-based rendering approach for real-time interaction on mobile devices

    Institute of Scientific and Technical Information of China (English)

    LIANG XiaoHui; ZHAO QinPing; HE ZhiYing; XIE Ke; LIU YuBo

    2009-01-01

    Mobile devices are an important interactive platform. Given their limited computation, memory, display area, and energy, achieving efficient, real-time interaction with 3D models on mobile devices is an important research topic. Considering the features of mobile devices, this paper adopts a remote-rendering mode and point models, and proposes a transmission and rendering approach that supports real-time interaction. First, an improved simplification algorithm based on MLS and the display resolution of mobile devices is proposed. Then, a hierarchy selection scheme for point models and a QoS transmission control strategy are given, based on the operator's area of interest, the interest degree of each object in the virtual environment, and the rendering error; these strategies also reduce energy consumption. Finally, the rendering and interaction of point models are completed on mobile devices. The experiments show that the method is efficient.

  12. Options in virtual 3D, optical-impression-based planning of dental implants.

    Science.gov (United States)

    Reich, Sven; Kern, Thomas; Ritter, Lutz

    2014-01-01

    If a 3D radiograph, which in today's dentistry often consists of a CBCT dataset, is available for computerized implant planning, the 3D planning should also consider functional prosthetic aspects. In a conventional workflow, the CBCT is done with a specially produced radiopaque prosthetic setup that makes the desired prosthetic situation visible during virtual implant planning. If an exclusively digital workflow is chosen, intraoral digital impressions are taken. On these digital models, the desired prosthetic suprastructures are designed. The entire datasets are virtually superimposed by a "registration" process on the corresponding structures (teeth) in the CBCTs. Thus, both the osseous and prosthetic structures are visible in one single 3D application and make it possible to consider surgical and prosthetic aspects. After having determined the implant positions on the computer screen, a drilling template is designed digitally. According to this design (CAD), a template is printed or milled in CAM process. This template is the first physically extant product in the entire workflow. The article discusses the options and limitations of this workflow.

  13. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology.

    Science.gov (United States)

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph; Figl, Michael; Pawiro, Supriyanto Ardjo; Furtado, Hugo; Fabri, Daniella; Bloch, Christoph; Bergmann, Helmar; Gröller, Eduard; Birkfellner, Wolfgang

    2012-02-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRRs) from computed tomography volume data, and planar reference x-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, does not yet exist. We present two GPU-based rendering algorithms which generate a DRR of 512 × 512 pixels from a 53 MB CT dataset at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications, ranging from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR image by means of specialized raycasting techniques. Furthermore, general-purpose graphics processing unit (GPGPU) programming paradigms were utilized consistently. Rendering quality and performance, as well as their influence on the quality and performance of the overall registration process, were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. Copyright © 2011. Published by Elsevier GmbH.
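    A DRR is, at its core, a simulated radiograph: each pixel accumulates the attenuation line integral along a ray through the CT volume and applies Beer-Lambert decay. The sketch below is a deliberately tiny orthographic toy, nothing like the paper's GPU splatting/raycasting code, with a made-up attenuation volume, just to make the rendering step concrete.

    ```python
    # Toy orthographic DRR: one ray per pixel along z, accumulating the
    # attenuation line integral and applying Beer-Lambert (illustrative only).
    import math

    def drr(volume, dz=1.0, i0=1.0):
        """volume[z][y][x] = attenuation coefficient; returns image[y][x]."""
        nz, ny, nx = len(volume), len(volume[0]), len(volume[0][0])
        image = [[0.0] * nx for _ in range(ny)]
        for y in range(ny):
            for x in range(nx):
                path = sum(volume[z][y][x] for z in range(nz))  # line integral
                image[y][x] = i0 * math.exp(-path * dz)         # Beer-Lambert
        return image

    # 2x2x2 volume whose right column is denser than the left:
    vol = [[[0.1, 0.5], [0.1, 0.5]],
           [[0.1, 0.5], [0.1, 0.5]]]
    img = drr(vol)
    print(img[0][0] > img[0][1])  # denser material -> darker pixel
    ```

    The paper's speed-ups (wobbled splatting, DRR sub-sampling) attack exactly this per-pixel ray accumulation, which dominates the cost of iterative 2D/3D registration.
    
    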

  14. High-performance GPU-based rendering for real-time, rigid 2D/3D-image registration and motion prediction in radiation oncology

    Energy Technology Data Exchange (ETDEWEB)

    Spoerk, Jakob; Gendrin, Christelle; Weber, Christoph [Medical University of Vienna (Austria). Center of Medical Physics and Biomedical Engineering] [and others]

    2012-07-01

    A common problem in image-guided radiation therapy (IGRT) of lung cancer as well as other malignant diseases is the compensation of periodic and aperiodic motion during dose delivery. Modern systems for image-guided radiation oncology allow for the acquisition of cone-beam computed tomography data in the treatment room as well as the acquisition of planar radiographs during the treatment. A mid-term research goal is the compensation of tumor target volume motion by 2D/3D Registration. In 2D/3D registration, spatial information on organ location is derived by an iterative comparison of perspective volume renderings, so-called digitally rendered radiographs (DRR) from computed tomography volume data, and planar reference X-rays. Currently, this rendering process is very time consuming, and real-time registration, which should at least provide data on organ position in less than a second, has not come into existence. We present two GPU-based rendering algorithms which generate a DRR of 512 x 512 pixels size from a CT dataset of 53 MB size at a pace of almost 100 Hz. This rendering rate is feasible by applying a number of algorithmic simplifications which range from alternative volume-driven rendering approaches - namely so-called wobbled splatting - to sub-sampling of the DRR-image by means of specialized raycasting techniques. Furthermore, general purpose graphics processing unit (GPGPU) programming paradigms were consequently utilized. Rendering quality and performance as well as the influence on the quality and performance of the overall registration process were measured and analyzed in detail. The results show that both methods are competitive and pave the way for fast motion compensation by rigid and possibly even non-rigid 2D/3D registration and, beyond that, adaptive filtering of motion models in IGRT. (orig.)

  15. Scene data fusion: Real-time standoff volumetric gamma-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Barnowski, Ross [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720, United States of America (United States); Haefner, Andrew; Mihailescu, Lucian [Lawrence Berkeley National Lab - Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720, United States of America (United States); Vetter, Kai [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720, United States of America (United States); Lawrence Berkeley National Lab - Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720, United States of America (United States)

    2015-11-11

    An approach to gamma-ray imaging has been developed that enables near real-time volumetric (3D) imaging of unknown environments, thus improving the utility of gamma-ray imaging for source-search and radiation mapping applications. The approach, herein dubbed scene data fusion (SDF), is based on integrating mobile radiation imagers with real-time tracking and scene reconstruction algorithms to enable a mobile mode of operation and 3D localization of gamma-ray sources. A 3D model of the scene, provided in real-time by a simultaneous localization and mapping (SLAM) algorithm, is incorporated into the image reconstruction, reducing the reconstruction time and improving imaging performance. The SDF concept is demonstrated in this work with a Microsoft Kinect RGB-D sensor, a real-time SLAM solver, and a cart-based Compton imaging platform comprised of two 3D position-sensitive high purity germanium (HPGe) detectors. An iterative algorithm based on Compton kinematics is used to reconstruct the gamma-ray source distribution in all three spatial dimensions. SDF advances the real-world applicability of gamma-ray imaging for many search, mapping, and verification scenarios by improving the tractability of the gamma-ray image reconstruction and providing context for the 3D localization of gamma-ray sources within the environment in real-time.

  16. 3D for Geosciences: Interactive Tangibles and Virtual Models

    Science.gov (United States)

    Pippin, J. E.; Matheney, M.; Kitsch, N.; Rosado, G.; Thompson, Z.; Pierce, S. A.

    2016-12-01

    Point cloud processing provides a method of studying and modelling geologic features relevant to geoscience systems and processes. Here, software including Skanect, MeshLab, Blender, PDAL, and PCL is used in conjunction with 3D scanning hardware, including a Structure scanner and a Kinect camera, to create and analyze point cloud images of small-scale topography, karst features, tunnels, and structures at high resolution. This project successfully scanned internal karst features ranging from small stalactites to large rooms, as well as an external waterfall feature. For comparison purposes, multiple scans of the same object were merged into single object files both automatically, using commercial software, and manually, using open source libraries and code. Files in .ply format were manually converted into numeric data sets and analyzed for similar regions between files in order to match them together. A numeric process would likely be more powerful and efficient than the manual method, though it may lack useful features that GUIs offer. The digital models have applications in mining as an efficient means of performing topographic tasks such as measuring distances and areas. Additionally, it is possible to build simulation models, such as drilling templates, and perform calculations in 3D space. Advantages of the methods described here include the relatively short time needed to obtain data and the easy transport of the equipment. With regard to open-pit mining, obtaining precise, georeferenced 3D images of large surfaces, tied to interactive maps, would be a high-value tool. The digital 3D images obtained from scans may be saved as printable files to create tangible, 3D-printed models of scientific information, as well as digital "worlds" that can be navigated virtually. The data, models, and algorithms explored here can be used to convey complex scientific ideas to a range of
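    The manual ".ply to numeric data" conversion mentioned in this abstract amounts to reading the PLY header and then the per-vertex coordinates. The sketch below handles only the simple ASCII PLY variant, with a hypothetical two-vertex sample file; binary PLY and additional vertex properties would need more work.

    ```python
    # Minimal ASCII PLY parser: header -> vertex count, then x y z per line.
    # Illustrative sketch; handles only the simple ASCII variant shown here.

    def parse_ascii_ply(text):
        lines = iter(text.strip().splitlines())
        assert next(lines).strip() == "ply"
        n_vertices = 0
        for line in lines:                      # scan header up to end_header
            parts = line.split()
            if parts[:2] == ["element", "vertex"]:
                n_vertices = int(parts[2])
            if parts and parts[0] == "end_header":
                break
        points = []
        for _ in range(n_vertices):             # one vertex per line: x y z [...]
            x, y, z = map(float, next(lines).split()[:3])
            points.append((x, y, z))
        return points

    SAMPLE = """ply
    format ascii 1.0
    element vertex 2
    property float x
    property float y
    property float z
    end_header
    0.0 0.0 0.0
    1.0 2.0 3.0
    """
    print(parse_ascii_ply(SAMPLE))
    ```

    Once the points are plain numeric tuples, region matching between scans reduces to ordinary array operations, which is the efficiency argument the abstract makes for a numeric pipeline over manual GUI work.
    
    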

  17. Virtual 3D bladder reconstruction for augmented medical records from white light cystoscopy (Conference Presentation)

    Science.gov (United States)

    Lurie, Kristen L.; Zlatev, Dimitar V.; Angst, Roland; Liao, Joseph C.; Ellerbee, Audrey K.

    2016-02-01

    Bladder cancer has a high recurrence rate that necessitates lifelong surveillance to detect mucosal lesions. Examination with white light cystoscopy (WLC), the standard of care, is inherently subjective and data storage limited to clinical notes, diagrams, and still images. A visual history of the bladder wall can enhance clinical and surgical management. To address this clinical need, we developed a tool to transform in vivo WLC videos into virtual 3-dimensional (3D) bladder models using advanced computer vision techniques. WLC videos from rigid cystoscopies (1280 x 720 pixels) were recorded at 30 Hz followed by immediate camera calibration to control for image distortions. Video data were fed into an automated structure-from-motion algorithm that generated a 3D point cloud followed by a 3D mesh to approximate the bladder surface. The highest quality cystoscopic images were projected onto the approximated bladder surface to generate a virtual 3D bladder reconstruction. In intraoperative WLC videos from 36 patients undergoing transurethral resection of suspected bladder tumors, optimal reconstruction was achieved from frames depicting well-focused vasculature, when the bladder was maintained at constant volume with minimal debris, and when regions of the bladder wall were imaged multiple times. A significant innovation of this work is the ability to perform the reconstruction using video from a clinical procedure collected with standard equipment, thereby facilitating rapid clinical translation, application to other forms of endoscopy and new opportunities for longitudinal studies of cancer recurrence.

  18. The virtual nose: a 3-dimensional virtual reality model of the human nose.

    Science.gov (United States)

    Vartanian, A John; Holcomb, Joi; Ai, Zhuming; Rasmussen, Mary; Tardy, M Eugene; Thomas, J Regan

    2004-01-01

    The 3-dimensionally complex interplay of soft tissue, cartilaginous, and bony elements makes the mastery of nasal anatomy difficult. Conventional methods of learning nasal anatomy exist, but they often involve a steep learning curve. Computerized models and virtual reality applications have been used to facilitate teaching in a number of other complex anatomical regions, such as the human temporal bone and pelvic floor. We present a 3-dimensional (3-D) virtual reality model of the human nose. Human cadaveric axial cross-sectional (0.33-mm cuts) photographic data of the head and neck were used. With 460 digitized images, individual structures were traced and programmed to create a computerized polygonal model of the nose. Further refinements to this model were made using a number of specialized computer programs. This 3-D computer model of the nose was then programmed to operate as a virtual reality model. An anatomically correct 3-D model of the nose was produced. High-resolution images of the "virtual nose" demonstrate the nasal septum, lower lateral cartilages, middle vault, bony dorsum, and other structural details of the nose. Also, the model can be combined with a separate virtual reality model of the face and its skin cover as well as the skull. The user can manipulate the model in space, examine 3-D anatomical relationships, and fade superficial structures to reveal deeper ones. The virtual nose is a 3-D virtual reality model of the nose that is accurate and easy to use. It can be run on a personal computer or in a specialized virtual reality environment. It can serve as an effective teaching tool. As the first virtual reality model of the nose, it establishes a virtual reality platform from which future applications can be launched.

  19. Three-dimensional modeling and a reconfigurable virtual TRIGA for specialized training; Modelado 3D y TRIGA virtual reconfigurable para entrenamiento especializado

    Energy Technology Data Exchange (ETDEWEB)

    Plata M, A. C.; Morales S, J. B.; Flores, M. [Facultad de Ingenieria, Division de Estudios de Posgrado, Campus Morelos, UNAM, Paseo Cuauhnahuac 8532, Col. Progreso, 62550 Jiutepec, Morelos (Mexico)], e-mail: yoyuclof@hotmail.com

    2009-10-15

    The new products developed for the virtual training room under development at the Engineering Faculty of the National Autonomous University of Mexico are presented. The improvements are mainly in the virtual-reality modelling of the reactor building and of the internal parts of the reactor. The dynamic model of the chain-reaction control rods was modified, and new elements were added to the reactor that do not necessarily exist in all TRIGA reactors but are highly useful for educational purposes. Such is the case with the addition of valves, pumps, tanks, injection lines for light or borated water, and a heat exchanger, with which pool water can be recycled from one side to the other or energy extracted toward a secondary circuit controlled from the operator console. Models of decay heat, of subcooled and nucleate boiling of the coolant-moderator in the core, and of xenon and samarium dynamics were included, the latter with independent simulation-time multipliers to allow variations much faster than real time. All these additions modify the coolant-moderator characteristics and consequently the response of the simulator. The controls are separated into an operator (student) console, very similar to the real systems, and an instructor console with additional access to parameters not directly measurable in the facility, which allow the system to be modified to illustrate effects not easily reproducible in the real system. The travelling crane is also modelled and is controlled from a third console, from which the reactor can be repositioned and coolant-circulator intakes and discharges, measuring instruments, reflectors, and neutron sources can be added or replaced. The dynamic models have been tested in SCILAB and SCICOS. Work is currently under way on integrating the dynamic simulator and the virtual reality, with the main design requirement of supporting augmented-reality functions. (Author)

  20. Illustrative visualization of 3D city models

    Science.gov (United States)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.
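One ingredient of such a multi-pass pipeline, detecting silhouette edges for enhancement, can be sketched as a depth-buffer discontinuity test. This is a minimal stand-in for illustration, not the authors' algorithm; the threshold value is arbitrary:

```python
import numpy as np

def depth_edge_mask(depth, threshold=0.5):
    """Mark pixels where the depth buffer changes abruptly --
    a simple cue for enhancing object silhouettes."""
    gy, gx = np.gradient(depth.astype(float))
    return np.hypot(gx, gy) > threshold

# A flat ground plane (depth 10) with a nearer box (depth 4):
depth = np.full((16, 16), 10.0)
depth[4:12, 4:12] = 4.0
edges = depth_edge_mask(depth)
```

Pixels flagged by the mask would then be darkened or stroked in a later rendering pass to produce the cartographic line-work effect.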

  1. An inexpensive underwater mine countermeasures simulator with real-time 3D after action review

    Directory of Open Access Journals (Sweden)

    Robert Stone

    2016-10-01

    Full Text Available This paper presents the results of a concept capability demonstration pilot study, the aim of which was to investigate how inexpensive gaming software and hardware technologies could be exploited in the development and evaluation of a simulator prototype for training Royal Navy mine clearance divers, specifically focusing on the detection and accurate reporting of the location and condition of underwater ordnance. The simulator was constructed using the Blender open source 3D modelling toolkit and game engine, and featured not only an interactive 3D editor for underwater scenario generation by instructors, but also a real-time, 3D After Action Review (AAR) system for formative assessment and feedback. The simulated scenarios and AAR architecture were based on early human factors observations and briefings conducted at the UK's Defence Diving School (DDS), an organisation that provides basic military diving training for all Royal Navy and Army (Royal Engineers) divers. An experimental pilot study was undertaken to determine whether or not basic navigational and mine detection components of diver performance could be improved as a result of exposing participants to the AAR system, delivered between simulated diving scenarios. The results suggest that the provision of AAR was accompanied by significant performance improvements in the positive identification of simulated underwater ordnance (in contrast to non-ordnance objects) and in participants' description of their location, their immediate in-water or seabed context and their structural condition. Only marginal improvements were found in participants' navigational performance in terms of their deviation accuracies from a pre-programmed expert search path. Overall, this project contributes to the growing corpus of evidence supporting the development of simulators that demonstrate the value of exploiting open source gaming software and the significance of adopting established games design

  2. 3D-ANTLERS: Virtual Reconstruction and Three-Dimensional Measurement

    Science.gov (United States)

    Barba, S.; Fiorillo, F.; De Feo, E.

    2013-02-01

    In the ARTEC digital mock-up, for example, it is possible to select the individual frames, already polygonal and geo-referenced at the time of capture; however, automated texturing is not possible, unlike in the low-cost environment, which allows a good graphic definition to be produced. Once the final 3D models were obtained, we proceeded to a geometric and graphic comparison of the results. In order to provide an accuracy requirement and an assessment for the 3D reconstruction, we took into account the following benchmarks: cost, captured points, noise (local and global), shadows and holes, operability, degree of definition, quality and accuracy. Following these empirical studies of the virtual reconstructions, a 3D documentation procedure was codified, endorsing the use of terrestrial sensors for the documentation of antlers. The results were compared with the standards set by the current provisions (see the "Manual de medición" of the Government of Andalusia, Spain); to date, in fact, identification is based on data such as length, volume, colour, texture, openness, tips, structure, etc. Such data, currently appreciated only with traditional instruments such as the tape measure, would be well served by a process of virtual reconstruction and cataloguing.

  3. i3Drive, a 3D interactive driving simulator.

    Science.gov (United States)

    Ambroz, Miha; Prebil, Ivan

    2010-01-01

    i3Drive, a wheeled-vehicle simulator, can accurately simulate vehicles of various configurations with up to eight wheels in real time on a desktop PC. It presents the vehicle dynamics as an interactive animation in a virtual 3D environment. The application is fully GUI-controlled, giving users an easy overview of the simulation parameters and letting them adjust those parameters interactively. It models all relevant vehicle systems, including the mechanical models of the suspension, power train, and braking and steering systems. The simulation results generally correspond well with actual measurements, making the system useful for studying vehicle performance in various driving scenarios. i3Drive is thus a worthy complement to other, more complex tools for vehicle-dynamics simulation and analysis.

  4. Automated reconstruction of 3D models from real environments

    Science.gov (United States)

    Sequeira, V.; Ng, K.; Wolfart, E.; Gonçalves, J. G. M.; Hogg, D.

    This paper describes an integrated approach to the construction of textured 3D scene models of building interiors from laser range data and visual images. This approach has been implemented in a collection of algorithms and sensors within a prototype device for 3D reconstruction, known as the EST (Environmental Sensor for Telepresence). The EST can take the form of a push trolley or of an autonomous mobile platform. The Autonomous EST (AEST) has been designed to provide an integrated solution for automating the creation of complete models. Embedded software performs several functions, including triangulation of the range data, registration of video texture, and registration and integration of data acquired from different capture points. Potential applications include facilities management for the construction industry and creating reality models to be used in general areas of virtual reality, for example, virtual studios, virtualised reality for content-related applications (e.g., CD-ROMs), social telepresence, architecture and others. The paper presents the main components of the EST/AEST and some example results obtained from the prototypes. The reconstructed model is encoded in VRML format so that it is possible to access and view the model via the World Wide Web.

  5. Timing of three-dimensional virtual treatment planning of orthognathic surgery: a prospective single-surgeon evaluation on 350 consecutive cases.

    Science.gov (United States)

    Swennen, Gwen R J

    2014-11-01

    The purpose of this article is to evaluate the timing for three-dimensional (3D) virtual treatment planning of orthognathic surgery in the daily clinical routine. A total of 350 consecutive patients were included in this study. All patients were scanned following the standardized "Triple CBCT Scan Protocol" in centric relation. Integrated 3D virtual planning and actual surgery were performed by the same surgeon in all patients. Although timing was clinically acceptable, software improvements, especially toward 3D virtual occlusal definition, are still needed to make 3D virtual planning of orthognathic surgery less time-consuming and more user-friendly for the clinician. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. 3D visualization based customer experiences of nuclear plant control room

    International Nuclear Information System (INIS)

    Sun Tienlung; Chou Chinmei; Hung Tamin; Cheng Tsungchieh; Yang Chihwei; Yang Lichen

    2011-01-01

    This paper employs virtual reality (VR) technology to develop an interactive virtual nuclear plant control room in which the general public can easily walk into the 'red zone' and play with the control buttons. The VR-based approach allows deeper and richer customer experiences than the real nuclear plant control room could offer. When people know more about the rigorous process control procedures enforced in the nuclear plant control room, they will better appreciate the safety efforts made by the nuclear plant and become more comfortable with it. The virtual nuclear plant control room is built using a 3D game development tool called Unity3D. The 3D scene is connected to a nuclear plant simulation system through Windows API programs. To evaluate the usability of the virtual control room, an experiment will be conducted to see how much 'immersion' the users feel when they play with the virtual control room. (author)

  7. 3D Boolean operations in virtual surgical planning.

    Science.gov (United States)

    Charton, Jerome; Laurentjoye, Mathieu; Kim, Youngjun

    2017-10-01

    Boolean operations in computer-aided design or computer graphics are a set of operations (e.g. intersection, union, subtraction) between two objects (e.g. a patient model and an implant model) that are important in performing accurate and reproducible virtual surgical planning. This requires accurate and robust techniques that can handle various types of data, such as a surface extracted from volumetric data, synthetic models, and 3D scan data. This article compares the performance of the proposed method (Boolean operations by a robust, exact, and simple method between two colliding shells (BORES)) and an existing method based on the Visualization Toolkit (VTK). In all tests presented in this article, BORES could handle complex configurations as well as report impossible configurations of the input. In contrast, the VTK implementations were unstable, did not deal with singular edges and coplanar collisions, and created several defects. The proposed method of Boolean operations, BORES, is efficient and appropriate for virtual surgical planning. Moreover, it is simple and easy to implement. In future work, we will extend the proposed method to handle non-colliding components.
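The three Boolean operations the record names can be illustrated on a voxel grid, where they reduce to elementwise logic. This is a voxel analogue for intuition only, not the mesh-based BORES or VTK algorithms; the sphere shapes and sizes are arbitrary:

```python
import numpy as np

def sphere_mask(shape, center, radius):
    """Rasterize a solid sphere onto a boolean voxel grid."""
    idx = np.indices(shape)
    dist2 = sum((idx[i] - center[i]) ** 2 for i in range(3))
    return dist2 <= radius ** 2

# Two overlapping solids, standing in for e.g. a patient model
# and an implant model.
a = sphere_mask((32, 32, 32), (12, 16, 16), 8)
b = sphere_mask((32, 32, 32), (20, 16, 16), 8)

union = a | b          # material in either solid
intersection = a & b   # material common to both
subtraction = a & ~b   # a with b removed
```

Mesh-based methods must instead compute exact surface-surface intersections, which is where the robustness issues with singular edges and coplanar collisions arise.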

  8. Putting 3D modelling and 3D printing into practice: virtual surgery and preoperative planning to reconstruct complex post-traumatic skeletal deformities and defects

    Directory of Open Access Journals (Sweden)

    Tetsworth Kevin

    2017-01-01

    Full Text Available 3D printing technology has revolutionized and gradually transformed manufacturing across a broad spectrum of industries, including healthcare. Nowhere is this more apparent than in orthopaedics with many surgeons already incorporating aspects of 3D modelling and virtual procedures into their routine clinical practice. As a more extreme application, patient-specific 3D printed titanium truss cages represent a novel approach for managing the challenge of segmental bone defects. This review illustrates the potential indications of this innovative technique using 3D printed titanium truss cages in conjunction with the Masquelet technique. These implants are custom designed during a virtual surgical planning session with the combined input of an orthopaedic surgeon, an orthopaedic engineering professional and a biomedical design engineer. The ability to 3D model an identical replica of the original intact bone in a virtual procedure is of vital importance when attempting to precisely reconstruct normal anatomy during the actual procedure. Additionally, other important factors must be considered during the planning procedure, such as the three-dimensional configuration of the implant. Meticulous design is necessary to allow for successful implantation through the planned surgical exposure, while being aware of the constraints imposed by local anatomy and prior implants. This review will attempt to synthesize the current state of the art as well as discuss our personal experience using this promising technique. It will address implant design considerations including the mechanical, anatomical and functional aspects unique to each case.

  9. Sculpting 3D worlds with music: advanced texturing techniques

    Science.gov (United States)

    Greuel, Christian; Bolas, Mark T.; Bolas, Niko; McDowall, Ian E.

    1996-04-01

    Sound within the virtual environment is often considered to be secondary to the graphics. In a typical scenario, either audio cues are locally associated with specific 3D objects or a general aural ambiance is supplied in order to alleviate the sterility of an artificial experience. This paper discusses a completely different approach, in which cues are extracted from live or recorded music in order to create geometry and control object behaviors within a computer- generated environment. Advanced texturing techniques used to generate complex stereoscopic images are also discussed. By analyzing music for standard audio characteristics such as rhythm and frequency, information is extracted and repackaged for processing. With the Soundsculpt Toolkit, this data is mapped onto individual objects within the virtual environment, along with one or more predetermined behaviors. Mapping decisions are implemented with a user definable schedule and are based on the aesthetic requirements of directors and designers. This provides for visually active, immersive environments in which virtual objects behave in real-time correlation with the music. The resulting music-driven virtual reality opens up several possibilities for new types of artistic and entertainment experiences, such as fully immersive 3D `music videos' and interactive landscapes for live performance.

  10. APPROACH TO CONSTRUCTING 3D VIRTUAL SCENE OF IRRIGATION AREA USING MULTI-SOURCE DATA

    Directory of Open Access Journals (Sweden)

    S. Cheng

    2015-10-01

    Full Text Available For an irrigation area that is often complicated by various 3D artificial ground features and the natural environment, the disadvantages of traditional 2D GIS in spatial data representation, management, query, analysis and visualization are becoming more and more evident. Building a more realistic 3D virtual scene is thus especially urgent for irrigation area managers and decision makers, so that they can carry out various irrigation operations vividly and intuitively. Based on previous researchers' achievements, a simple, practical and cost-effective approach was proposed in this study, adopting 3D geographic information system (3D GIS) and remote sensing (RS) technology. Based on multi-source data such as Google Earth (GE) high-resolution remote sensing imagery, ASTER G-DEM, hydrological facility maps and so on, a 3D terrain model and ground feature models were created interactively. Both models were then rendered with texture data and integrated under the ArcGIS platform. A vivid, realistic 3D virtual scene of the irrigation area, with good visual effect and primary GIS functions for data query and analysis, was constructed. Yet there is still a long way to go in establishing a true 3D GIS for the irrigation area: the issues of this study are discussed in depth and future research directions are pointed out at the end of the paper.

  11. Virtual real-time inspection of nuclear material via VRML and secure web pages

    International Nuclear Information System (INIS)

    Nilsen, C.; Jortner, J.; Damico, J.; Friesen, J.; Schwegel, J.

    1996-01-01

    Sandia National Laboratories' Straight-Line project is working to provide the right sensor information to the right user to enhance the safety, security, and international accountability of nuclear material. One of Straight-Line's efforts is to create a system to securely disseminate this data on the Internet's World-Wide-Web. To make the user interface more intuitive, Sandia has generated a three dimensional VRML (virtual reality modeling language) interface for a secure web page. This paper will discuss the implementation of the Straight-Line secure 3-D web page. A discussion of the pros and cons of a 3-D web page is also presented. The public VRML demonstration described in this paper can be found on the Internet at this address: http://www.ca.sandia.gov/NMM/. A Netscape browser, version 3, is strongly recommended.

  12. Virtual real-time inspection of nuclear material via VRML and secure web pages

    International Nuclear Information System (INIS)

    Nilsen, C.; Jortner, J.; Damico, J.; Friesen, J.; Schwegel, J.

    1997-04-01

    Sandia National Laboratories' Straight Line project is working to provide the right sensor information to the right user to enhance the safety, security, and international accountability of nuclear material. One of Straight Line's efforts is to create a system to securely disseminate this data on the Internet's World-Wide-Web. To make the user interface more intuitive, Sandia has generated a three dimensional VRML (virtual reality modeling language) interface for a secure web page. This paper will discuss the implementation of the Straight Line secure 3-D web page. A discussion of the pros and cons of a 3-D web page is also presented. The public VRML demonstration described in this paper can be found on the Internet at the following address: http://www.ca.sandia.gov/NMM/. A Netscape browser, version 3, is strongly recommended.

  13. Reconstruction of 3d video from 2d real-life sequences

    Directory of Open Access Journals (Sweden)

    Eduardo Ramos Diaz

    2010-01-01

    Full Text Available This article proposes a novel method for generating 3D video sequences from real-life 2D video sequences. The 3D video sequence is reconstructed by computing a depth map and synthesizing anaglyphs. The depth map is formed using a stereo-matching technique based on minimizing a global error energy with smoothing functions. The anaglyph is constructed by aligning the colour component and interpolating it with the previously formed depth map. Additionally, a transformation of the depth map is employed to reduce the dynamic range of the disparity values, minimizing the ghosting effect and improving colour preservation. Numerous real colour video sequences containing different types of motion (translational, rotational, zoom, and combinations of these) were used, demonstrating the good visual performance of the proposed 3D video sequence reconstruction.
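The colour-alignment core of anaglyph synthesis can be sketched in a few lines: the red channel is taken from the left view and the green/blue channels from the right. This is a minimal stand-in that omits the depth-map interpolation and ghosting reduction the article describes:

```python
import numpy as np

def red_cyan_anaglyph(left, right):
    """Combine a stereo pair into a single red-cyan anaglyph:
    red channel from the left view, green and blue from the right."""
    out = np.empty_like(left)
    out[..., 0] = left[..., 0]   # red   <- left eye
    out[..., 1] = right[..., 1]  # green <- right eye
    out[..., 2] = right[..., 2]  # blue  <- right eye
    return out

# Tiny synthetic stereo pair (HxWx3 uint8 RGB):
left = np.zeros((4, 4, 3), dtype=np.uint8);  left[..., 0] = 200
right = np.zeros((4, 4, 3), dtype=np.uint8); right[..., 1] = 100
ana = red_cyan_anaglyph(left, right)
```

Viewed through red-cyan glasses, each eye then receives its own view, which is what produces the depth percept.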

  14. Image-Based Virtual Tours and 3d Modeling of Past and Current Ages for the Enhancement of Archaeological Parks: the Visualversilia 3d Project

    Science.gov (United States)

    Castagnetti, C.; Giannini, M.; Rivola, R.

    2017-05-01

    The research project VisualVersilia 3D aims at offering a new way to promote the territory and its heritage by matching the traditional reading of the document with the potential of modern communication technologies for cultural tourism. Recently, research on the use of new technologies applied to cultural heritage has turned its attention mainly to technologies for reconstructing and narrating the complexity of the territory and its heritage, including 3D scanning, 3D printing and augmented reality. Some museums and archaeological sites already exploit the potential of digital tools to preserve and spread their heritage, but interactive services involving tourists in an immersive and more modern experience are still rare. The innovation of the project consists in the development of a methodology for documenting current and past historical ages and integrating their 3D visualizations with rendering capable of returning an immersive virtual reality for a successful enhancement of the heritage. The project implements the methodology in the archaeological complex of Massaciuccoli, one of the best preserved Roman sites of the Versilia area (Tuscany, Italy). The activities of the project briefly consist in developing: 1. the virtual tour of the site in its current configuration on the basis of spherical images, enhanced by texts, graphics and audio guides in order to enable both an immersive and a remote tourist experience; 2. 3D reconstruction of the evidence and buildings in their current condition for documentation and conservation purposes on the basis of a complete metric survey carried out through laser scanning; 3. 3D virtual reconstructions through the main historical periods on the basis of historical investigation and the analysis of the data acquired.

  15. In Vivo Real Time Volumetric Synthetic Aperture Ultrasound Imaging

    DEFF Research Database (Denmark)

    Bouzari, Hamed; Rasmussen, Morten Fischer; Brandt, Andreas Hjelm

    2015-01-01

    Synthetic aperture (SA) imaging can be used to achieve real-time volumetric ultrasound imaging using 2-D array transducers. The sensitivity of SA imaging is improved by maximizing the acoustic output, but one must consider the limitations of an ultrasound system, both technical and biological....... This paper investigates the in vivo applicability and sensitivity of volumetric SA imaging. Utilizing the transmit events to generate a set of virtual point sources, a frame rate of 25 Hz for a 90° x 90° field-of-view was achieved. Data were obtained using a 3.5 MHz 32 x 32 elements 2-D phased array...... transducer connected to the experimental scanner (SARUS). Proper scaling is applied to the excitation signal such that intensity levels are in compliance with the U.S. Food and Drug Administration regulations for in vivo ultrasound imaging. The measured Mechanical Index and spatial-peak-temporal
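The virtual-point-source idea mentioned in this record can be illustrated geometrically: each defocused emission is delayed so that the wavefront appears to emanate from a point behind (or in front of) the aperture. The sketch below is a generic 1-D geometry for intuition, not the SARUS transmit sequence; the element pitch and source position are arbitrary:

```python
import numpy as np

C = 1540.0  # assumed speed of sound in tissue, m/s

def virtual_source_delays(element_x, src):
    """Per-element transmit delays (s) that make a defocused emission
    appear to originate from a virtual point source at src = (x, z).
    Delays are referenced to the closest element so all are >= 0."""
    sx, sz = src
    dist = np.hypot(element_x - sx, sz)  # element-to-source distances
    return (dist - dist.min()) / C

# 32-element aperture, 10 mm total width, virtual source 10 mm behind it:
elements = np.linspace(-0.005, 0.005, 32)
delays = virtual_source_delays(elements, (0.0, -0.01))
```

Beamforming then treats each emission as a spherical wave from its virtual source, which is what lets a few transmit events cover the full volume at video frame rates.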

  16. 3D Visualization of Cultural Heritage Artefacts with Virtual Reality devices

    Science.gov (United States)

    Gonizzi Barsanti, S.; Caruso, G.; Micoli, L. L.; Covarrubias Rodriguez, M.; Guidi, G.

    2015-08-01

    Although 3D models are useful to preserve the information about historical artefacts, the potential of these digital contents is not fully realized until they are used to interactively communicate their significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research is aimed at valorising and making more accessible the Egyptian funeral objects exhibited in the Sforza Castle in Milan. The results of the research will be used for the renewal of the current exhibition, at the Archaeological Museum in Milan, by making it more attractive. A 3D virtual interactive scenario regarding the "path of the dead", an important ritual in ancient Egypt, was realized to augment the experience and the comprehension of the public through interactivity. Four important artefacts were considered for this scope: two ushabty, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and implementing specific software using Unity. The 3D models were implemented by adding responsive points of interest in relation to important symbols or features of the artefact. This allows highlighting single parts of the artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process for optimizing the 3D models, the implementation of the interactive scenario and the results of some tests that have been carried out in the lab.

  17. 3D Visualization of Cultural Heritage Artefacts with Virtual Reality devices

    Directory of Open Access Journals (Sweden)

    S. Gonizzi Barsanti

    2015-08-01

    Full Text Available Although 3D models are useful to preserve the information about historical artefacts, the potential of these digital contents is not fully realized until they are used to interactively communicate their significance to non-specialists. Starting from this consideration, a new way to provide museum visitors with more information was investigated. The research is aimed at valorising and making more accessible the Egyptian funeral objects exhibited in the Sforza Castle in Milan. The results of the research will be used for the renewal of the current exhibition, at the Archaeological Museum in Milan, by making it more attractive. A 3D virtual interactive scenario regarding the “path of the dead”, an important ritual in ancient Egypt, was realized to augment the experience and the comprehension of the public through interactivity. Four important artefacts were considered for this scope: two ushabty, a wooden sarcophagus and a heart scarab. The scenario was realized by integrating low-cost Virtual Reality technologies, such as the Oculus Rift DK2 and the Leap Motion controller, and implementing specific software using Unity. The 3D models were implemented by adding responsive points of interest in relation to important symbols or features of the artefact. This allows highlighting single parts of the artefact in order to better identify the hieroglyphs and provide their translation. The paper describes the process for optimizing the 3D models, the implementation of the interactive scenario and the results of some tests that have been carried out in the lab.

  18. Tourism in Real and Virtual Space

    Directory of Open Access Journals (Sweden)

    Alireza Dehghan

    2009-07-01

    Full Text Available During the past two decades, with the expansion of communication, there has been a deep transformation in individuals' conception of space. As space plays an important role in tourism, whether real or virtual, this transformation happens in that field too. The present study attempts to show how tourism occurs in the contemporary virtualized world, or, as some authors call it, the dual globalized situation.

  19. 3D Surgical Simulation

    Science.gov (United States)

    Cevidanes, Lucia; Tucker, Scott; Styner, Martin; Kim, Hyungmin; Chapuis, Jonas; Reyes, Mauricio; Proffit, William; Turvey, Timothy; Jaskolka, Michael

    2009-01-01

    This paper discusses the development of methods for computer-aided jaw surgery. Computer-aided jaw surgery allows us to incorporate the high level of precision necessary for transferring virtual plans into the operating room. We also present a complete computer-aided surgery (CAS) system developed in close collaboration with surgeons. Surgery planning and simulation include construction of 3D surface models from Cone-beam CT (CBCT), dynamic cephalometry, semi-automatic mirroring, interactive cutting of bone and bony segment repositioning. A virtual setup can be used to manufacture positioning splints for intra-operative guidance. The system provides further intra-operative assistance with the help of a computer display showing jaw positions and 3D positioning guides updated in real-time during the surgical procedure. The CAS system aids in dealing with complex cases, with benefits for the patient, for surgical practice, and for orthodontic finishing. Advanced software tools for diagnosis and treatment planning allow preparation of detailed operative plans, osteotomy repositioning, bone reconstructions, surgical resident training and assessing the difficulties of the surgical procedures prior to the surgery. CAS has the potential to make the elaboration of the surgical plan a more flexible process, increase the level of detail and accuracy of the plan, yield higher operative precision and control, and enhance documentation of cases. Supported by NIDCR DE017727 and DE018962. PMID:20816308

  20. Emotional effects of shooting activities : 'real' versus 'virtual' actions and targets

    NARCIS (Netherlands)

    Rauterberg, G.W.M.; Marinelli, D.

    2003-01-01

    The results of an empirical study are presented investigating the relationship between different action types (real versus virtual shooting) and different target types (real versus virtual targets) and their effects on the actual emotional state (well-being) of the player. The results show significantly that virtual

  1. Realistic Real-Time Outdoor Rendering in Augmented Reality

    Science.gov (United States)

    Kolivand, Hoshang; Sunar, Mohd Shahrizal

    2014-01-01

    Realistic rendering techniques for outdoor Augmented Reality (AR) have been an attractive topic over the last two decades, considering the sizeable number of publications in computer graphics. Realistic virtual objects in outdoor AR rendering systems require sophisticated effects such as shadows, daylight, and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which are related to non-real-time rendering. However, the problem still remains, especially in outdoor rendering. This paper proposes a novel technique to achieve realistic real-time outdoor rendering, taking into account the interaction between sky colours and objects in AR systems with respect to shadows at any specific location, date and time. The approach involves three main phases, which cover different outdoor AR rendering requirements. First, the sky colour is generated with respect to the position of the sun. The second step involves the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows, through their effects on virtual objects in the AR system, is introduced. The experimental results reveal that the proposed technique significantly improves the realism of real-time outdoor AR rendering, thus addressing the problem of realistic AR systems. PMID:25268480

  2. Realistic real-time outdoor rendering in augmented reality.

    Directory of Open Access Journals (Sweden)

    Hoshang Kolivand

    Realistic rendering techniques for outdoor Augmented Reality (AR) have been an attractive topic over the last two decades, considering the sizeable number of publications in computer graphics. Realistic virtual objects in outdoor AR rendering systems require sophisticated effects such as shadows, daylight, and interactions between sky colours and virtual as well as real objects. A few realistic rendering techniques have been designed to overcome this obstacle, most of which are related to non-real-time rendering. However, the problem still remains, especially in outdoor rendering. This paper proposes a novel technique to achieve realistic real-time outdoor rendering, taking into account the interaction between sky colours and objects in AR systems with respect to shadows at any specific location, date and time. The approach involves three main phases, which cover different outdoor AR rendering requirements. First, the sky colour is generated with respect to the position of the sun. The second step involves the shadow generation algorithm, Z-Partitioning: Gaussian and Fog Shadow Maps (Z-GaF Shadow Maps). Lastly, a technique to integrate sky colours and shadows, through their effects on virtual objects in the AR system, is introduced. The experimental results reveal that the proposed technique significantly improves the realism of real-time outdoor AR rendering, thus addressing the problem of realistic AR systems.
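
    As an illustration of the first phase described above (sky colour as a function of the sun's position), the following sketch uses a simplified solar-declination formula and a simple colour blend of our own choosing, not the paper's actual sky model:

```python
import math

def sun_elevation(day_of_year, hour_utc, lat_deg, lon_deg):
    """Approximate solar elevation (degrees) from date, time, and location."""
    # simplified declination formula (about +/- 1 degree accuracy)
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    hour_angle = (hour_utc + lon_deg / 15.0 - 12.0) * 15.0  # degrees
    lat, dec, ha = map(math.radians, (lat_deg, decl, hour_angle))
    sin_el = (math.sin(lat) * math.sin(dec) +
              math.cos(lat) * math.cos(dec) * math.cos(ha))
    return math.degrees(math.asin(sin_el))

def sky_colour(elevation_deg):
    """Blend night -> horizon -> zenith RGB tints by sun elevation (illustrative)."""
    night = (0.02, 0.02, 0.08)
    horizon = (0.98, 0.64, 0.38)   # warm sunrise/sunset tint
    zenith = (0.35, 0.55, 0.95)    # daytime blue
    if elevation_deg <= 0:
        return night
    t = min(elevation_deg / 30.0, 1.0)  # treat >30 degrees as full daylight
    return tuple((1 - t) * h + t * z for h, z in zip(horizon, zenith))
```

A renderer would evaluate `sky_colour(sun_elevation(...))` each frame and feed the tint into both the sky dome and the lighting of virtual objects.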

  3. Real-Time Hand Posture Recognition Using a Range Camera

    Science.gov (United States)

    Lahamy, Herve

    The basic goal of human-computer interaction is to improve the interaction between users and computers by making computers more usable and receptive to the user's needs. Within this context, the use of hand postures as a replacement for traditional devices such as keyboards, mice and joysticks is being explored by many researchers. The goal is to interpret human postures via mathematical algorithms. Hand posture recognition has gained popularity in recent years and could become the future tool for humans to interact with computers or virtual environments. An exhaustive description of the methods for hand posture recognition frequently found in the literature is provided. It focuses on the different types of sensors and data used, the segmentation and tracking methods, the features used to represent the hand postures, as well as the classifiers considered in the recognition process. Those methods are usually presented as highly robust, with recognition rates close to 100%. However, a couple of critical points necessary for a successful real-time hand posture recognition system require major improvement. Those points include the features used to represent the hand segment, the number of postures simultaneously recognizable, the invariance of the features with respect to rotation, translation and scale, and also the behavior of the classifiers against imperfect hand segments, for example segments including part of the arm or missing part of the palm. A 3D time-of-flight camera named SR4000 has been chosen to develop a new methodology because of its capability to provide, in real time and at a high frame rate, 3D information on the imaged scene. This sensor has been described and evaluated for its capability to capture a moving hand in real time. A new recognition method that uses the 3D information provided by the range camera to recognize hand postures has been proposed. The different steps of this methodology including the segmentation, the tracking, the hand
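
    The segmentation step of such a range-camera pipeline can be illustrated with a minimal depth-slab sketch (a generic nearest-object heuristic, not the thesis's actual algorithm):

```python
import numpy as np

def segment_hand(depth, max_range=0.8, hand_depth=0.15):
    """Segment the hand as the set of pixels nearest to the camera.

    depth: 2D array of range values in metres (0 = invalid pixel).
    Assumes the hand is the closest object within max_range; the
    thresholds here are illustrative, not calibrated SR4000 values.
    """
    valid = (depth > 0) & (depth < max_range)
    if not valid.any():
        return np.zeros_like(depth, dtype=bool)
    nearest = depth[valid].min()
    # keep pixels within a slab of thickness hand_depth behind the nearest point
    return valid & (depth <= nearest + hand_depth)

def hand_centroid(mask):
    """Centroid (row, col) of the segmented hand, usable for frame-to-frame tracking."""
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()
```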

  4. Methodology for Time-Domain Estimation of Storm-Time Electric Fields Using the 3D Earth Impedance

    Science.gov (United States)

    Kelbert, A.; Balch, C. C.; Pulkkinen, A. A.; Egbert, G. D.; Love, J. J.; Rigler, E. J.; Fujii, I.

    2016-12-01

    Magnetic storms can induce geoelectric fields in the Earth's electrically conducting interior, interfering with the operations of the electric-power grid industry. The ability to estimate these electric fields at Earth's surface in close to real time and to provide local short-term predictions would improve the ability of the industry to protect its operations. At any given time, the electric field at the Earth's surface is a function of the time-variant magnetic activity (driven by the solar wind) and the local electrical conductivity structure of the Earth's crust and mantle. For this reason, implementation of an operational electric field estimation service requires an interdisciplinary, collaborative effort between space science, real-time space weather operations, and solid Earth geophysics. We highlight in this talk an ongoing collaboration between USGS, NOAA, NASA, Oregon State University, and the Japan Meteorological Agency to develop algorithms that can be used for scenario analyses and which might be implemented in a real-time, operational setting. We discuss the development of a time domain algorithm that employs a discrete time domain representation of the impedance tensor for a realistic 3D Earth, known as the discrete time impulse response (DTIR), convolved with the local magnetic field time series to estimate the local electric field disturbances. The algorithm is validated against measured storm-time electric field data collected in the United States and Japan. We also discuss our plans for operational real-time electric field estimation using 3D Earth impedances.
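
    In a simplified scalar form, the DTIR approach reduces to a discrete causal convolution of the impulse response with the magnetic field record. The following sketch is illustrative only; the real method uses the full impedance tensor relating both horizontal components of B and E:

```python
import numpy as np

def estimate_efield(dtir, b_field):
    """Estimate the electric field time series by convolving the discrete
    time impulse response (DTIR) of the Earth impedance with the magnetic
    field time series.

    1-D scalar sketch: E[n] = sum_k dtir[k] * B[n - k], truncated to the
    length of the magnetic record.
    """
    return np.convolve(b_field, dtir, mode="full")[: len(b_field)]
```

With a one-sample unit impulse the electric field simply reproduces the magnetic record; a delayed impulse shifts it, which is the behaviour the convolution encodes.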

  5. Reduced Mental Load in Learning a Motor Visual Task with Virtual 3D Method

    Science.gov (United States)

    Dan, A.; Reiner, M.

    2018-01-01

    Distance learning is expanding rapidly, fueled by the novel technologies for shared recorded teaching sessions on the Web. Here, we ask whether 3D stereoscopic (3DS) virtual learning environment teaching sessions are more compelling than typical two-dimensional (2D) video sessions and whether this type of teaching results in superior learning. The…

  6. Alleviating the water scarcity in the North China Plain: the role of virtual water and real water transfer

    Science.gov (United States)

    Zhang, Zhuoying; Yang, Hong; Shi, Minjun

    2016-04-01

    The North China Plain is the most water-scarce region in China. Its water security is closely tied to interregional water movement, which can be realized through real water transfers and/or virtual water transfers. This study investigates the roles of virtual water trade and real water transfer using an interregional input-output model. The results show that the region receives 19.4 billion m3/year of virtual water through interregional trade, while exporting 16.4 billion m3/year of virtual water through international trade. On balance, the region has a net virtual water gain of 3 billion m3/year from outside. Its virtual water inflow is dominated by agricultural products from other provinces, totalling 16.6 billion m3/year, whilst its virtual water export is dominated by manufacturing sectors exporting to other countries, totalling 11.7 billion m3/year. Both virtual water import and real water transfer through the South-to-North Water Diversion Project are important water supplements for the region. The results of this study provide useful scientific references for the establishment of strategies to deal with water scarcity in the future.
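
    The reported net balance follows from a line of arithmetic on the figures in the abstract:

```python
# Net virtual water balance for the North China Plain (billion m^3/year),
# using the figures reported in the abstract.
interregional_inflow = 19.4    # virtual water received via domestic interregional trade
international_outflow = 16.4   # virtual water exported via international trade

net_gain = interregional_inflow - international_outflow
print(f"Net virtual water gain: {net_gain:.1f} billion m^3/year")
```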

  7. Putting 3D modelling and 3D printing into practice: virtual surgery and preoperative planning to reconstruct complex post-traumatic skeletal deformities and defects.

    Science.gov (United States)

    Tetsworth, Kevin; Block, Steve; Glatt, Vaida

    2017-01-01

    3D printing technology has revolutionized and gradually transformed manufacturing across a broad spectrum of industries, including healthcare. Nowhere is this more apparent than in orthopaedics with many surgeons already incorporating aspects of 3D modelling and virtual procedures into their routine clinical practice. As a more extreme application, patient-specific 3D printed titanium truss cages represent a novel approach for managing the challenge of segmental bone defects. This review illustrates the potential indications of this innovative technique using 3D printed titanium truss cages in conjunction with the Masquelet technique. These implants are custom designed during a virtual surgical planning session with the combined input of an orthopaedic surgeon, an orthopaedic engineering professional and a biomedical design engineer. The ability to 3D model an identical replica of the original intact bone in a virtual procedure is of vital importance when attempting to precisely reconstruct normal anatomy during the actual procedure. Additionally, other important factors must be considered during the planning procedure, such as the three-dimensional configuration of the implant. Meticulous design is necessary to allow for successful implantation through the planned surgical exposure, while being aware of the constraints imposed by local anatomy and prior implants. This review will attempt to synthesize the current state of the art as well as discuss our personal experience using this promising technique. It will address implant design considerations including the mechanical, anatomical and functional aspects unique to each case. © The Authors, published by EDP Sciences, 2017.

  8. Flatbed-type 3D display systems using integral imaging method

    Science.gov (United States)

    Hirayama, Yuzo; Nagatani, Hiroyuki; Saishu, Tatsuo; Fukushima, Rieko; Taira, Kazuki

    2006-10-01

    We have developed prototypes of flatbed-type autostereoscopic display systems using a one-dimensional integral imaging method. The integral imaging system reproduces light beams similar to those produced by a real object. Our display architecture is suitable for flatbed configurations because it has a large margin for viewing distance and angle and has continuous motion parallax. We have applied our technology to 15.4-inch displays. We realized a horizontal resolution of 480 with 12 parallaxes through the adoption of a mosaic pixel arrangement on the display panel. It allows viewers to see high-quality autostereoscopic images. Viewing the display from an angle allows the viewer to experience 3-D images that stand out several centimeters from the surface of the display. Mixed reality of virtual 3-D objects and real objects is also realized on a flatbed display. In seeking reproduction of natural 3-D images on the flatbed display, we developed proprietary software. Fast playback of CG movie contents and real-time interaction are realized with the aid of a graphics card. Ensuring that 3-D images are safe for viewers is very important. Therefore, we have measured the effects on visual function and evaluated the biological effects. For example, accommodation and convergence were measured at the same time. Various biological effects were also measured before and after the task of watching 3-D images. We have found that our displays show better results than a conventional stereoscopic display. The new technology opens up new areas of application for 3-D displays, including arcade games, e-learning, simulations of buildings and landscapes, and even 3-D menus in restaurants.

  9. Radiation dose assessment in nuclear plants through virtual simulations using a game engine

    International Nuclear Information System (INIS)

    Jorge, Carlos A.F.; Mol, Antonio C. A.; Aghina, Mauricio Alves C.

    2008-01-01

    Full text: This paper reports an R&D effort whose purpose is to perform dose assessment of workers in nuclear plants through virtual simulations using a game engine. The main objective of this R&D is to support the planning of operational and maintenance routines in nuclear plants, aiming to reduce the dose received by workers. A game engine is the core of a computer game; it is usually made independent of both the scenarios and the original applications, and thus can be adapted for other purposes, including scientific or technological ones. Computer games have experienced great development in recent years regarding computer graphics, 3D image rendering and the representation of the physics needed for virtual simulations, such as the effect of gravity and collision among virtual components within the games. Thus, researchers do not need to develop an entire platform for virtual simulations, which would be a hard task in itself; they can instead take advantage of such well-developed platforms, adapting them for their own applications. The game engine used in this R&D is part of a widely used computer game, Unreal, which has a partially open source code and can be obtained at low cost. A nuclear plant in our institution, the Argonauta research reactor, has been virtually modeled in 3D, and trainees can navigate through it virtually, with realistic walking velocity and experiencing collisions. The modified game engine computes and displays in real time the dose received by a virtual person, the avatar, as it walks through the plant, from the radiation dose rate distribution assigned to the virtual environment. In the beginning of this R&D, radiation dose rate measurements were collected beforehand by the radiological protection service and input off-line to the game engine. Currently, on-line measurements can also be input to it, by taking advantage of the game's networking capabilities. A real radiation monitor has been used to collect real-time
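
    The core of such a simulation, accumulating dose from a dose-rate field along the avatar's walked path, can be sketched as follows (an illustrative midpoint-rule integrator, not the actual Unreal-based implementation):

```python
import math

def accumulated_dose(path, dose_rate, speed=1.4):
    """Accumulate dose along a walked path through a dose-rate map.

    path: list of (x, y) waypoints in metres.
    dose_rate: function (x, y) -> dose rate in uSv/h at that point.
    speed: walking speed in m/s (~1.4 m/s is a realistic walking pace).
    Returns total dose in uSv, using the rate at each segment's midpoint.
    """
    total = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        seg_len = math.hypot(x1 - x0, y1 - y0)
        t_hours = seg_len / speed / 3600.0       # time spent on this segment
        mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        total += dose_rate(mx, my) * t_hours
    return total
```

In the game-engine setting, `dose_rate` would interpolate the measured dose-rate distribution assigned to the virtual plant, and the sum would be updated each frame rather than per waypoint.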

  10. IMAGE-BASED VIRTUAL TOURS AND 3D MODELING OF PAST AND CURRENT AGES FOR THE ENHANCEMENT OF ARCHAEOLOGICAL PARKS: THE VISUALVERSILIA 3D PROJECT

    Directory of Open Access Journals (Sweden)

    C. Castagnetti

    2017-05-01

    The research project VisualVersilia 3D aims at offering a new way to promote the territory and its heritage by matching the traditional reading of documents with the potential of modern communication technologies for cultural tourism. Recently, research on the use of new technologies applied to cultural heritage has turned its attention mainly to technologies for reconstructing and narrating the complexity of the territory and its heritage, including 3D scanning, 3D printing and augmented reality. Some museums and archaeological sites already exploit the potential of digital tools to preserve and spread their heritage, but interactive services involving tourists in an immersive and more modern experience are still rare. The innovation of the project consists in the development of a methodology for documenting current and past historical ages and integrating their 3D visualizations with rendering capable of returning an immersive virtual reality for a successful enhancement of the heritage. The project implements the methodology in the archaeological complex of Massaciuccoli, one of the best-preserved Roman sites of the Versilia area (Tuscany, Italy). The activities of the project briefly consist in developing: 1. a virtual tour of the site in its current configuration on the basis of spherical images, enhanced by texts, graphics and audio guides in order to enable both an immersive and a remote tourist experience; 2. a 3D reconstruction of the evidence and buildings in their current condition, for documentation and conservation purposes, on the basis of a complete metric survey carried out through laser scanning; 3. 3D virtual reconstructions through the main historical periods, on the basis of historical investigation and the analysis of the data acquired.

  11. 3D-e-Chem-VM: Structural Cheminformatics Research Infrastructure in a Freely Available Virtual Machine.

    Science.gov (United States)

    McGuire, Ross; Verhoeven, Stefan; Vass, Márton; Vriend, Gerrit; de Esch, Iwan J P; Lusher, Scott J; Leurs, Rob; Ridder, Lars; Kooistra, Albert J; Ritschel, Tina; de Graaf, Chris

    2017-02-27

    3D-e-Chem-VM is an open source, freely available Virtual Machine ( http://3d-e-chem.github.io/3D-e-Chem-VM/ ) that integrates cheminformatics and bioinformatics tools for the analysis of protein-ligand interaction data. 3D-e-Chem-VM consists of software libraries, and database and workflow tools that can analyze and combine small molecule and protein structural information in a graphical programming environment. New chemical and biological data analytics tools and workflows have been developed for the efficient exploitation of structural and pharmacological protein-ligand interaction data from proteome-wide databases (e.g., ChEMBLdb and PDB), as well as customized information systems focused on, e.g., G protein-coupled receptors (GPCRdb) and protein kinases (KLIFS). The integrated structural cheminformatics research infrastructure compiled in the 3D-e-Chem-VM enables the design of new approaches in virtual ligand screening (Chemdb4VS), ligand-based metabolism prediction (SyGMa), and structure-based protein binding site comparison and bioisosteric replacement for ligand design (KRIPOdb).

  12. Integrating a virtual agent into the real world

    OpenAIRE

    André, Elisabeth

    2007-01-01

    Integrating a virtual agent into the real world: the virtual anatomy assistant Ritchie / K. Dorfmüller-Ulhaas ... In: Intelligent Virtual Agents: 7th International Conference, IVA 2007, Paris, France, September 17-19, 2007; proceedings / Catherine Pelachaud ... (eds.). Berlin [u.a.]: Springer, 2007, pp. 211-224. (Lecture Notes in Computer Science; 4722: Lecture Notes in Artificial Intelligence)

  13. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 (United States); Cheung, Yam [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas, 75390 and Department of Radiation Oncology, University of Maryland, College Park, Maryland 20742 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)

    2016-05-15

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse errors introduced by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-square error, by comparing its reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced

  14. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system.

    Science.gov (United States)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-05-01

    To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse errors introduced by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-square error, by comparing its reconstruction results against those from the variational method. On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions. 
The authors have

  15. A robust real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system

    International Nuclear Information System (INIS)

    Liu, Wenyang; Cheung, Yam; Sawant, Amit; Ruan, Dan

    2016-01-01

    Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during the acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse errors introduced by ICP with a Laplacian prior. The authors evaluated the proposed method on both clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-square error, by comparing its reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models have achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude to a subsecond reconstruction time. On point clouds with inconsistent occlusions, the MSR model has demonstrated its advantage in achieving consistent and robust performance despite the introduced
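
    The SR model's central idea, approximating the target point cloud as a sparse linear combination of training acquisitions, can be sketched with a generic lasso solver. ISTA is our choice of solver here for illustration; the authors' optimization scheme is not specified in the abstract:

```python
import numpy as np

def sparse_regression(D, y, lam=0.1, iters=200):
    """Approximate y as a sparse linear combination D @ w of training
    acquisitions (columns of D), via ISTA for the lasso problem
        min_w 0.5 * ||D w - y||^2 + lam * ||w||_1.
    """
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ w - y)
        z = w - grad / L                   # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return w
```

The reconstructed surface would then be read off as `D @ w`; in the paper's setting the columns of `D` are ICP-aligned training point clouds flattened into vectors.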

  16. Pictorial communication in virtual and real environments

    Science.gov (United States)

    Ellis, Stephen R. (Editor)

    1991-01-01

    Papers about the communication between human users and machines in real and synthetic environments are presented. Individual topics addressed include: pictorial communication, distortions in memory for visual displays, cartography and map displays, efficiency of graphical perception, volumetric visualization of 3D data, spatial displays to increase pilot situational awareness, teleoperation of land vehicles, computer graphics system for visualizing spacecraft in orbit, visual display aid for orbital maneuvering, multiaxis control in telemanipulation and vehicle guidance, visual enhancements in pick-and-place tasks, target axis effects under transformed visual-motor mappings, adapting to variable prismatic displacement. Also discussed are: spatial vision within egocentric and exocentric frames of reference, sensory conflict in motion sickness, interactions of form and orientation, perception of geometrical structure from congruence, prediction of three-dimensionality across continuous surfaces, effects of viewpoint in the virtual space of pictures, visual slant underestimation, spatial constraints of stereopsis in video displays, stereoscopic stance perception, paradoxical monocular stereopsis and perspective vergence. (No individual items are abstracted in this volume)

  17. Dental impressions using 3D digital scanners: virtual becomes reality.

    Science.gov (United States)

    Birnbaum, Nathan S; Aaronson, Heidi B

    2008-10-01

    The technologies that have made the use of three-dimensional (3D) digital scanners an integral part of many industries for decades have been improved and refined for application to dentistry. Since the introduction of the first dental impressioning digital scanner in the 1980s, development engineers at a number of companies have enhanced the technologies and created in-office scanners that are increasingly user-friendly and able to produce precisely fitting dental restorations. These systems are capable of capturing 3D virtual images of tooth preparations, from which restorations may be fabricated directly (ie, CAD/CAM systems) or fabricated indirectly (ie, dedicated impression scanning systems for the creation of accurate master models). The use of these products is increasing rapidly around the world and presents a paradigm shift in the way in which dental impressions are made. Several of the leading 3D dental digital scanning systems are presented and discussed in this article.

  18. Evaluation of a low-cost 3D sound system for immersive virtual reality training systems.

    Science.gov (United States)

    Doerr, Kai-Uwe; Rademacher, Holger; Huesgen, Silke; Kubbat, Wolfgang

    2007-01-01

    Since Head Mounted Displays (HMD), datagloves, tracking systems, and powerful computer graphics resources are nowadays in an affordable price range, the usage of PC-based "Virtual Training Systems" becomes very attractive. However, due to the limited field of view of HMD devices, additional modalities have to be provided to benefit from 3D environments. A 3D sound simulation can improve the capabilities of VR systems dramatically. Unfortunately, realistic 3D sound simulations are expensive and demand a tremendous amount of computational power to calculate reverberation, occlusion, and obstruction effects. To use 3D sound in a PC-based training system as a way to direct and guide trainees to observe specific events in 3D space, a cheaper alternative has to be provided, so that a broader range of applications can take advantage of this modality. To address this issue, we focus in this paper on the evaluation of a low-cost 3D sound simulation that is capable of providing traceable 3D sound events. We describe our experimental system setup using conventional stereo headsets in combination with a tracked HMD device and present our results with regard to precision, speed, and used signal types for localizing simulated sound events in a virtual training environment.
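
    A minimal low-cost spatialization of the kind evaluated here can be sketched with constant-power stereo panning plus a Woodworth-model interaural time difference; the constants and the choice of cue model are illustrative assumptions, not the authors' implementation:

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, a commonly assumed average head radius

def stereo_cues(azimuth_deg):
    """Left/right gains (constant-power panning) and interaural time
    difference (seconds) for a source at the given azimuth.
    0 deg = straight ahead, +90 deg = hard right.
    """
    az = math.radians(azimuth_deg)
    # constant-power pan: map azimuth [-90, 90] deg to pan angle [0, pi/2]
    pan = (az + math.pi / 2) / 2
    gain_l, gain_r = math.cos(pan), math.sin(pan)
    # Woodworth ITD model for a spherical head: itd = a/c * (az + sin az)
    itd = HEAD_RADIUS / SPEED_OF_SOUND * (az + math.sin(az))
    return gain_l, gain_r, itd
```

In a tracked-HMD setup, the azimuth fed to `stereo_cues` would be the source direction relative to the current head pose, recomputed each frame.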

  19. 3D wide field-of-view Gabor-domain optical coherence microscopy advancing real-time in-vivo imaging and metrology

    Science.gov (United States)

    Canavesi, Cristina; Cogliati, Andrea; Hayes, Adam; Tankam, Patrice; Santhanam, Anand; Rolland, Jannick P.

    2017-02-01

    Real-time volumetric high-definition wide-field-of-view in-vivo cellular imaging requires micron-scale resolution in 3D. Compactness of the handheld device and distortion-free images with cellular resolution are also critically required for onsite use in clinical applications. By integrating a custom liquid lens-based microscope and a dual-axis MEMS scanner in a compact handheld probe, Gabor-domain optical coherence microscopy (GD-OCM) breaks the lateral resolution limit of optical coherence tomography through depth, overcoming the tradeoff between numerical aperture and depth of focus, enabling advances in biotechnology. Furthermore, distortion-free imaging with no post-processing is achieved with a compact, lightweight handheld MEMS scanner that obtained a 12-fold reduction in volume and 17-fold reduction in weight over a previous dual-mirror galvanometer-based scanner. Approaching the holy grail of medical imaging - noninvasive real-time imaging with histologic resolution - GD-OCM demonstrates invariant resolution of 2 μm throughout a volume of 1 x 1 x 0.6 mm3, acquired and visualized in less than 2 minutes with parallel processing on graphics processing units. Results on the metrology of manufactured materials and imaging of human tissue with GD-OCM are presented.

  20. 3D-Pathology: a real-time system for quantitative diagnostic pathology and visualisation in 3D

    Science.gov (United States)

    Gottrup, Christian; Beckett, Mark G.; Hager, Henrik; Locht, Peter

    2005-02-01

    This paper presents the results of the 3D-Pathology project conducted under the European EC Framework 5. The aim of the project was, through the application of 3D image reconstruction and visualization techniques, to improve the diagnostic and prognostic capabilities of medical personnel when analyzing pathological specimens using transmitted light microscopy. A fully automated, computer-controlled microscope system has been developed to capture 3D images of specimen content. 3D image reconstruction algorithms have been implemented and applied to the acquired volume data in order to facilitate the subsequent 3D visualization of the specimen. Three potential application fields, immunohistology, chromogenic in situ hybridization (CISH) and cytology, have been tested using the prototype system. For both immunohistology and CISH, use of the system furnished significant additional information to the pathologist.

  1. Grasping trajectories in a virtual environment adhere to Weber's law.

    Science.gov (United States)

    Ozana, Aviad; Berman, Sigal; Ganel, Tzvi

    2018-06-01

    Virtual-reality and telerobotic devices simulate local motor control of virtual objects within computerized environments. Here, we explored grasping kinematics within a virtual environment and tested whether, as in normal 3D grasping, trajectories in the virtual environment are performed analytically, violating Weber's law with respect to the object's size. Participants were asked to grasp a series of 2D objects using a haptic system, which projected their movements to a virtual space presented on a computer screen. The apparatus also provided object-specific haptic information upon "touching" the edges of the virtual targets. The results showed that grasping movements performed within the virtual environment did not produce the typical analytical trajectory pattern obtained during 3D grasping. Unlike in 3D grasping, grasping trajectories in the virtual environment adhered to Weber's law, which indicates relative resolution in size processing. In addition, the trajectory patterns differed from typical trajectories obtained during 3D grasping, with longer times to complete the movement, and with maximum grip apertures appearing relatively early in the movement. The results suggest that grasping movements within a virtual environment could differ from those performed in real space, and are subject to irrelevant effects of perceptual information. Such an atypical pattern of visuomotor control may be mediated by the lack of complete transparency between the interface and the virtual environment in terms of the provided visual and haptic feedback. Possible implications of the findings for movement control within robotic and virtual environments are further discussed.
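
The adherence to Weber's law reported above means the just-noticeable difference (JND) in size scales proportionally with object size, so the ratio JND/size stays constant. A minimal numerical illustration of that invariant, with hypothetical values rather than data from the study:

```python
# Illustrative sketch of Weber's law (JND proportional to stimulus size).
# Sizes and thresholds below are hypothetical, not data from the study.

def weber_fraction(size, jnd):
    """Weber's law predicts jnd / size is approximately a constant k."""
    return jnd / size

sizes = [20.0, 40.0, 60.0, 80.0]   # object widths in mm (hypothetical)
jnds = [1.0, 2.0, 3.0, 4.0]        # discrimination thresholds in mm

fractions = [weber_fraction(s, j) for s, j in zip(sizes, jnds)]
# Under Weber's law all fractions are (roughly) equal:
assert max(fractions) - min(fractions) < 1e-9
print(fractions[0])  # 0.05, the constant Weber fraction k
```

Analytical grasping, by contrast, would show a size-*independent* aperture variability, i.e. a fraction that shrinks as object size grows.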

  2. The Arnolfini Portrait in 3d: Creating Virtual World of a Painting with Inconsistent Perspective

    NARCIS (Netherlands)

    Jansen, P.H.; Ruttkay, Z.M.; Arnold, D. B.; Ferko, A.

    We report on creating a 3d virtual reconstruction of the scene shown in "The Arnolfini Portrait" by Jan van Eyck. This early Renaissance painting, if painted faithfully, should conform to one-point perspective; however, it has several vanishing points instead of one. Hence our 3d reconstruction had

  3. Precise and real-time measurement of 3D tumor motion in lung due to breathing and heartbeat, measured during radiotherapy

    International Nuclear Information System (INIS)

    Seppenwoolde, Yvette; Shirato, Hiroki; Kitamura, Kei; Shimizu, Shinichi; Herk, Marcel van; Lebesque, Joos V.; Miyasaka, Kazuo

    2002-01-01

    Purpose: In this work, the three-dimensional (3D) motion of lung tumors during radiotherapy was investigated in real time. Understanding the behavior of tumor motion in lung tissue to model tumor movement is necessary for accurate (gated or breath-hold) radiotherapy or CT scanning. Methods: Twenty patients were included in this study. Before treatment, a 2-mm gold marker was implanted in or near the tumor. A real-time tumor tracking system using two fluoroscopy image processor units was installed in the treatment room. The 3D position of the implanted gold marker was determined by using real-time pattern recognition and a calibrated projection geometry. The linear accelerator was triggered to irradiate the tumor only when the gold marker was located within a certain volume. The system provided the coordinates of the gold marker during beam-on and beam-off time in all directions simultaneously, at a sample rate of 30 images per second. The recorded tumor motion was analyzed in terms of the amplitude and curvature of the tumor motion in three directions, the differences in breathing level during treatment, hysteresis (the difference between the inhalation and exhalation trajectory of the tumor), and the amplitude of tumor motion induced by cardiac motion. Results: The average amplitude of the tumor motion was greatest (12±2 mm [SD]) in the cranial-caudal direction for tumors situated in the lower lobes and not attached to rigid structures such as the chest wall or vertebrae. For the lateral and anterior-posterior directions, tumor motion was small both for upper- and lower-lobe tumors (2±1 mm). The time-averaged tumor position was closer to the exhale position, because the tumor spent more time in the exhalation than in the inhalation phase. The tumor motion was modeled as a sinusoidal movement with varying asymmetry. The tumor position in the exhale phase was more stable than the tumor position in the inhale phase during individual treatment fields. However, in many
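
The "sinusoidal movement with varying asymmetry" can be sketched with a cos^(2n) breathing model (the common Lujan-style parameterization; the parameter values below are illustrative, not fitted values from this study). The even power keeps the tumor near exhale longer than near inhale, reproducing the exhale-biased time average:

```python
import math

def tumor_position(t, x0=0.0, amplitude=12.0, period=4.0, n=2):
    """Asymmetric breathing motion: the cos**(2n) term keeps the tumor near
    the exhale position x0 longer than near inhale. All parameter values
    here are illustrative, not fitted to the study's data."""
    return x0 - amplitude * math.cos(math.pi * t / period) ** (2 * n)

# Sample one 4 s breathing cycle and check the exhale bias numerically.
samples = [tumor_position(0.01 * k) for k in range(400)]
mean_pos = sum(samples) / len(samples)
exhale, inhale = 0.0, tumor_position(0.0)   # x0 and x0 - amplitude

print(round(mean_pos, 2))  # -4.5 mm: time average lies closer to exhale (0)
assert abs(mean_pos - exhale) < abs(mean_pos - inhale)
```

With n = 2 the cycle mean is x0 - 3A/8, i.e. closer to the exhale extreme than the midpoint, matching the reported observation.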

  4. Robotic 4D ultrasound solution for real-time visualization and teleoperation

    Directory of Open Access Journals (Sweden)

    Al-Badri Mohammed

    2017-09-01

    Full Text Available Automation of the image acquisition process via robotic solutions offers a large leap towards resolving ultrasound’s user-dependency. This paper, as part of a larger project aimed at developing a multipurpose 4d-ultrasonic force-sensitive robot for medical applications, focuses on achieving real-time remote visualisation for 4d ultrasound image transfer. This was made possible by implementing our software modification on a GE Vivid 7 Dimension workstation, which operates a matrix array probe controlled by a KUKA LBR iiwa 7 7-DOF robotic arm. With the help of robotic positioning and the matrix array probe, fast volumetric imaging of target regions was feasible. By testing ultrasound volumes, which were roughly 880 kB in size, over a gigabit Ethernet connection, a latency of ∼57 ms was achievable for volume transfer between the ultrasound station and a remote client application, which allows a frame count of 17.4 fps. Our modification thus offers for the first time real-time remote visualization, recording and control of 4d ultrasound data, which can be implemented in teleoperation.
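
The reported numbers are mutually consistent, as a back-of-the-envelope check shows: an 880 kB volume occupies only a few milliseconds on an ideal gigabit link, and one volume every ~57 ms yields the stated ~17.4 fps. A pure-arithmetic sketch (the 57 ms figure is taken from the text, not derived):

```python
# Sanity check of the reported figures: ~880 kB volumes, gigabit Ethernet,
# ~57 ms end-to-end latency, 17.4 fps.

volume_bytes = 880e3      # approximate size of one ultrasound volume
link_bps = 1e9            # ideal gigabit Ethernet throughput

wire_time_ms = volume_bytes * 8 / link_bps * 1e3
fps = 1e3 / 57.0          # one volume every ~57 ms

print(round(wire_time_ms, 1))  # ~7.0 ms on the wire; the rest is overhead
print(round(fps, 1))           # ~17.5 fps, consistent with the reported 17.4
```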

  5. [Accuracy of morphological simulation for orthognathic surgery. Assessment of a 3D image fusion software].

    Science.gov (United States)

    Terzic, A; Schouman, T; Scolozzi, P

    2013-08-06

    The CT/CBCT data allows for 3D reconstruction of skeletal and untextured soft tissue volume. 3D stereophotogrammetry technology has strongly improved the quality of facial soft tissue surface texture. The combination of these two technologies allows for an accurate and complete reconstruction. The 3D virtual head may be used for orthognathic surgical planning, virtual surgery, and morphological simulation obtained with a software dedicated to the fusion of 3D photogrammetric and radiological images. The imaging material includes a multi-slice CT scan or broad-field CBCT scan and a 3D photogrammetric camera. The operative image processing protocol includes the following steps: 1) pre- and postoperative CT/CBCT scan and 3D photogrammetric image acquisition; 2) 3D image segmentation and fusion of untextured CT/CBCT skin with the preoperative textured facial soft tissue surface of the 3D photogrammetric scan; 3) image fusion of the pre- and postoperative CT/CBCT data set virtual osteotomies, and 3D photogrammetric soft tissue virtual simulation; 4) fusion of virtual simulated 3D photogrammetric and real postoperative images, and assessment of accuracy using a color-coded scale to measure the differences between the two surfaces. Copyright © 2013. Published by Elsevier Masson SAS.
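
Step 4 above, measuring the difference between the simulated and the real postoperative surfaces, amounts to computing per-vertex nearest-neighbour distances, which the color-coded scale then visualizes. A minimal sketch with toy point sets (not the study's photogrammetric meshes):

```python
import math
import random

# Toy surfaces: 200 vertices, with the "real postoperative" surface equal to
# the simulated one plus Gaussian noise. Purely illustrative data.
random.seed(0)
simulated = [[random.uniform(0, 10) for _ in range(3)] for _ in range(200)]
real_post = [[v + random.gauss(0, 0.5) for v in p] for p in simulated]

def nearest_distance(point, surface):
    """Distance from one vertex to the closest vertex of the other surface."""
    return min(math.dist(point, q) for q in surface)

# One distance per simulated vertex; a color map over these values produces
# the kind of color-coded accuracy display described in the protocol.
errors = [nearest_distance(p, real_post) for p in simulated]
mean_error = sum(errors) / len(errors)
print(round(mean_error, 2))  # mean surface deviation in mm
```

A production pipeline would compute point-to-triangle distances on meshes with a spatial index; the brute-force vertex-to-vertex form keeps the idea visible.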

  6. Constructivist Learning Environment During Virtual and Real Laboratory Activities

    Directory of Open Access Journals (Sweden)

    Ari Widodo

    2017-04-01

    Full Text Available Laboratory activities and constructivism are two notions that have been playing significant roles in science education. Despite common beliefs about the importance of laboratory activities, reviews reported inconsistent results about the effectiveness of laboratory activities. Since laboratory activities can be expensive and take more time, there is an effort to introduce virtual laboratory activities. This study aims at exploring the learning environment created by a virtual laboratory and a real laboratory. A quasi-experimental study was conducted at two grade ten classes at a state high school in Bandung, Indonesia. Data were collected using a questionnaire called the Constructivist Learning Environment Survey (CLES) before and after the laboratory activities. The results show that both types of laboratories can create constructivist learning environments. Each type of laboratory activity, however, may be stronger in improving certain aspects compared to the other. While a virtual laboratory is stronger in improving critical voice and personal relevance, real laboratory activities promote aspects of personal relevance, uncertainty and student negotiation. This study suggests that instead of setting one type of laboratory against the other, lessons and follow up studies should focus on how to combine both types of laboratories to support better learning.

  7. Radiofrequency Ablation Assisted by Real-Time Virtual Sonography and CT for Hepatocellular Carcinoma Undetectable by Conventional Sonography

    International Nuclear Information System (INIS)

    Nakai, Motoki; Sato, Morio; Sahara, Shinya; Takasaka, Isao; Kawai, Nobuyuki; Minamiguchi, Hiroki; Tanihata, Hirohiko; Kimura, Masashi; Takeuchi, Nozomu

    2009-01-01

    Real-time virtual sonography (RVS) is a diagnostic imaging support system, which provides the same cross-sectional multiplanar reconstruction images as ultrasound images on the same monitor screen in real time. The purpose of this study was to evaluate radiofrequency ablation (RFA) assisted by RVS and CT for hepatocellular carcinoma (HCC) undetectable with conventional sonography. Subjects were 20 patients with 20 HCC nodules not detected by conventional sonography but detectable by CT or MRI. All patients had hepatitis C-induced liver cirrhosis; there were 13 males and 7 females aged 55-81 years (mean, 69.3 years). RFA was performed in the CT room, and the tumor was punctured with the assistance of RVS. CT was performed immediately after puncture, and ablation was performed after confirming that the needle had been inserted into the tumor precisely. The mean number of punctures and success rates of the first puncture were evaluated. Treatment effects were evaluated with dynamic CT every 3 months after RFA. RFA was technically feasible and local tumor control was achieved in all patients. The mean number of punctures was 1.1, and the success rate of the first puncture was 90.0%. This method enabled safe ablation without complications. The mean follow-up period was 13.5 months (range, 9-18 months). No local recurrence was observed at the follow-up points. In conclusion, RFA assisted by RVS and CT is a safe and efficacious method of treatment for HCC undetectable by conventional sonography.

  8. 3D multiplayer virtual pets game using Google Card Board

    Science.gov (United States)

    Herumurti, Darlis; Riskahadi, Dimas; Kuswardayan, Imam

    2017-08-01

    Virtual Reality (VR) is a technology which allows users to interact with a virtual environment. This virtual environment is generated and simulated by computer, and the technology can make users feel the sensation of being inside it. VR presents the virtual environment directly to the user rather than on a screen, but it requires an additional device to show the view. This device is known as a Head Mounted Device (HMD). The Oculus Rift and Microsoft HoloLens are among the best-known HMD devices used in VR. In 2014, Google Card Board was introduced at the Google I/O developers conference. Google Card Board is a VR platform which allows users to enjoy VR in a simple and cheap way. In this research, we explore Google Card Board to develop a simulation game of raising a pet. Google Card Board is used to create the view of the VR environment. The view and control in the VR environment are built using the Unity game engine, and the simulation process is designed using a Finite State Machine (FSM). The FSM helps to design the process clearly, so that it can describe the simulation of raising a pet well. Raising a pet is a fun activity, but there are many conditions which can make it difficult, e.g., environmental conditions, disease, and high cost. This research aims to explore and implement Google Card Board in a simulation of raising a pet.
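
The pet simulation's FSM can be sketched as a transition table mapping (state, event) pairs to next states. The states and events below are illustrative guesses, not the ones used in the paper:

```python
# Minimal FSM sketch for a pet-raising simulation. States/events are
# hypothetical illustrations of the design pattern, not the paper's model.

TRANSITIONS = {
    ("idle", "time_passes"): "hungry",
    ("hungry", "feed"): "idle",
    ("hungry", "neglect"): "sick",
    ("sick", "treat"): "idle",
}

def step(state, event):
    """Return the next state; unknown (state, event) pairs are ignored."""
    return TRANSITIONS.get((state, event), state)

state = "idle"
for event in ["time_passes", "neglect", "treat", "time_passes", "feed"]:
    state = step(state, event)
print(state)  # "idle": fed and treated on time, the pet returns to idle
```

In Unity the same table would typically live in a C# component updated each frame; the dictionary form makes the design easy to review and test in isolation.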

  9. Real-time virtual sonography for navigation during targeted prostate biopsy using magnetic resonance imaging data

    International Nuclear Information System (INIS)

    Miyagawa, Tomoaki; Ishikawa, Satoru; Kimura, Tomokazu; Suetomi, Takahiro; Tsutsumi, Masakazu; Irie, Toshiyuki; Kondoh, Masanao; Mitake, Tsuyoshi

    2010-01-01

    The objective of this study was to evaluate the effectiveness of the medical navigation technique, namely, Real-time Virtual Sonography (RVS), for targeted prostate biopsy. Eighty-five patients with suspected prostate cancer lesions on magnetic resonance imaging (MRI) were included in this study. All selected patients had at least one negative result on previous transrectal biopsies. The acquired MRI volume data were loaded onto a personal computer installed with RVS software, which registers the volumes between MRI and real-time ultrasound data for real-time display. The registered MRI images were displayed adjacent to the ultrasonographic sagittal image on the same computer monitor. The suspected lesions on T2-weighted images were marked with a red circle. First, suspected lesions were biopsied transperineally under real-time navigation with RVS, followed by conventional transrectal and transperineal biopsy under spinal anesthesia. The median age of the patients was 69 years (56-84 years), and the prostate-specific antigen level and prostate volume were 9.9 ng/mL (4.0-34.2) and 37.2 mL (18-141), respectively. Prostate cancer was detected in 52 patients (61%). The biopsy specimens obtained using RVS revealed 45/52 patients (87%) positive for prostate cancer. A total of 192 biopsy cores were obtained using RVS. Sixty-two of these (32%) were positive for prostate cancer, whereas conventional random biopsy revealed cancer only in 75/833 (9%) cores (P<0.01). Targeted prostate biopsy with RVS is very effective to diagnose lesions detected with MRI. This technique only requires an additional computer and RVS software and thus is cost-effective. Therefore, RVS-guided prostate biopsy has great potential for better management of prostate cancer patients. (author)
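
The reported per-core comparison (62/192 targeted vs. 75/833 random cores positive, P < 0.01) can be reproduced in direction with a 2x2 Pearson chi-square test. A hedged sketch (the exact test used in the paper is not stated; this is one standard choice):

```python
# Pearson chi-square on the per-core detection rates reported above:
# RVS-targeted cores 62/192 positive, conventional random cores 75/833.

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d)
    )

stat = chi2_2x2(62, 192 - 62, 75, 833 - 75)
print(round(stat, 1))
# The critical value at P = 0.01 with 1 degree of freedom is 6.63, so the
# statistic is far into the significant region, consistent with P < 0.01.
assert stat > 6.63
```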

  10. Intermediate view reconstruction using adaptive disparity search algorithm for real-time 3D processing

    Science.gov (United States)

    Bae, Kyung-hoon; Park, Changhan; Kim, Eun-soo

    2008-03-01

    In this paper, intermediate view reconstruction (IVR) using an adaptive disparity search algorithm (ADSA) is proposed for real-time 3-dimensional (3D) processing. The proposed algorithm can reduce the processing time of disparity estimation by selecting an adaptive disparity search range. It can also increase the quality of 3D imaging. That is, by adaptively predicting the mutual correlation between the stereo image pair, the bandwidth of the stereo input image pair can be compressed to the level of a conventional 2D image, and a predicted image can be effectively reconstructed using a reference image and disparity vectors. Experiments on the stereo sequences 'Pot Plant' and 'IVO' show that the proposed algorithm improves the PSNR of a reconstructed image by about 4.8 dB compared with conventional algorithms, and reduces the synthesizing time of a reconstructed image by about 7.02 s.
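
The PSNR metric used for the comparison above is standard and can be sketched directly; the pixel rows below are toy 8-bit values, not frames from the 'Pot Plant' or 'IVO' sequences:

```python
import math

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences."""
    mse = sum((o - r) ** 2 for o, r in zip(original, reconstructed)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

# Toy 8-bit pixel rows: the reconstruction differs by at most 1 gray level.
orig = [52, 55, 61, 66, 70, 61, 64, 73]
recon = [53, 55, 60, 66, 71, 61, 63, 73]
print(round(psnr(orig, recon), 1))  # 51.1 dB
```

A gain of "about 4.8 dB" on this scale corresponds to roughly a threefold reduction in mean squared error, which is why the improvement is substantial.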

  11. Distribution Locational Real-Time Pricing Based Smart Building Control and Management

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Jun; Dai, Xiaoxiao; Zhang, Yingchen; Zhang, Jun; Gao, Wenzhong

    2016-11-21

    This paper proposes a real-virtual parallel computing scheme for smart building operations aimed at augmenting overall social welfare. The University of Denver's campus power grid and Ritchie fitness center are used to demonstrate the proposed approach. An artificial virtual system is built in parallel to the real physical system to evaluate the overall social cost of building operation, based on a social-science-based working-productivity model, a numerical-experiment-based building energy consumption model, and a power-system-based real-time pricing mechanism. Through interactive feedback exchanged between the real and virtual systems, increased social welfare, including monetary cost reduction and energy saving as well as working productivity improvements, can be achieved.

  12. Real-time 3-D SAFT-UT system evaluation and validation

    International Nuclear Information System (INIS)

    Doctor, S.R.; Schuster, G.J.; Reid, L.D.; Hall, T.E.

    1996-09-01

    SAFT-UT technology is shown to provide significant enhancements to the inspection of materials used in US nuclear power plants. This report provides guidelines for the implementation of SAFT-UT technology and shows the results from its application. An overview of the development of SAFT-UT is provided so that the reader may become familiar with the technology. Then the basic fundamentals are presented with an extensive list of references. A comprehensive operating procedure, which is used in conjunction with the SAFT-UT field system developed by Pacific Northwest Laboratory (PNL), provides the recipe for both SAFT data acquisition and analysis. The specification for the hardware implementation is provided for the SAFT-UT system along with a description of the subsequent developments and improvements. One development of technical interest is the SAFT real time processor. Performance of the real-time processor is impressive and comparison is made of this dedicated parallel processor to a conventional computer and to the newer high-speed computer architectures designed for image processing. Descriptions of other improvements, including a robotic scanner, are provided. Laboratory parametric and application studies, performed by PNL and not previously reported, are discussed followed by a section on field application work in which SAFT was used during in-service inspections of operating reactors.
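
The core of SAFT (the synthetic aperture focusing technique) is delay-and-sum: A-scans from many transducer positions are summed with geometry-derived delays so that echoes from a real reflector add coherently while clutter does not. A toy 1-D sketch of that idea (a single point target and impulse echoes; not PNL's implementation):

```python
import math

# Delay-and-sum SAFT sketch: one point reflector, 11 aperture positions.
C = 1.0                # wave speed (arbitrary units)
FS = 200.0             # samples per unit time
target = (0.5, 2.0)    # reflector at lateral 0.5, depth 2.0
positions = [0.1 * i for i in range(11)]  # transducer positions on the surface

def echo_time(x):
    """Round-trip time from a transducer at (x, 0) to the target and back."""
    return 2.0 * math.hypot(x - target[0], target[1]) / C

# Synthetic A-scans: a single unit impulse at the round-trip sample index.
ascans = []
for x in positions:
    a = [0.0] * 1024
    a[int(round(echo_time(x) * FS))] = 1.0
    ascans.append(a)

def saft_focus(px, pz):
    """Delay-and-sum focus at image point (px, pz): sum each A-scan at the
    sample corresponding to that point's round-trip time."""
    total = 0.0
    for x, a in zip(positions, ascans):
        t = 2.0 * math.hypot(x - px, pz) / C
        total += a[int(round(t * FS))]
    return total

# Focusing at the true target collects energy from every aperture position;
# an off-target point collects none.
print(saft_focus(0.5, 2.0), saft_focus(0.5, 1.0))  # 11.0 0.0
```

The real-time processor discussed in the report parallelizes exactly this sum over image points, which is why dedicated hardware pays off.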

  13. Real-time 3-D SAFT-UT system evaluation and validation

    Energy Technology Data Exchange (ETDEWEB)

    Doctor, S.R.; Schuster, G.J.; Reid, L.D.; Hall, T.E. [Pacific Northwest National Lab., Richland, WA (United States)

    1996-09-01

    SAFT-UT technology is shown to provide significant enhancements to the inspection of materials used in US nuclear power plants. This report provides guidelines for the implementation of SAFT-UT technology and shows the results from its application. An overview of the development of SAFT-UT is provided so that the reader may become familiar with the technology. Then the basic fundamentals are presented with an extensive list of references. A comprehensive operating procedure, which is used in conjunction with the SAFT-UT field system developed by Pacific Northwest Laboratory (PNL), provides the recipe for both SAFT data acquisition and analysis. The specification for the hardware implementation is provided for the SAFT-UT system along with a description of the subsequent developments and improvements. One development of technical interest is the SAFT real time processor. Performance of the real-time processor is impressive and comparison is made of this dedicated parallel processor to a conventional computer and to the newer high-speed computer architectures designed for image processing. Descriptions of other improvements, including a robotic scanner, are provided. Laboratory parametric and application studies, performed by PNL and not previously reported, are discussed followed by a section on field application work in which SAFT was used during in-service inspections of operating reactors.

  14. Bridging Real and Virtual: A Spiritual Challenge

    Directory of Open Access Journals (Sweden)

    Heim, Michael R.

    2017-05-01

    Full Text Available The question of how to bridge virtuality and reality intensified in 2016 with the release of several consumer products. The article begins by reviewing two anxieties about virtual reality raised at a 1999 conference. To address these anxieties, the paper draws on post-Jungian archetypal psychology (James Hillman, Thomas Moore) and the retrieval of Renaissance theology (Marsilio Ficino). Two experiences with Samsung Gear VR then illustrate how classic archetypal elements can contribute to active procedures for bridging the virtual and the real.

  15. Real Time Revisited

    Science.gov (United States)

    Allen, Phillip G.

    1985-12-01

    The call for abolishing photo reconnaissance in favor of real time is once more being heard. Ten years ago the same cries were being heard with the introduction of the Charge Coupled Device (CCD). The real time system problems that existed then and stopped real time proliferation have not been solved. The lack of an organized program by either DoD or industry has hampered any efforts to solve the problems, and as such, very little has happened in real time in the last ten years. Real time is not a replacement for photo, just as photo is not a replacement for infra-red or radar. Operational real time sensors can be designed only after their role has been defined and improvements made to the weak links in the system. Plodding ahead on a real time reconnaissance suite without benefit of evaluation of utility will allow this same paper to be used ten years from now.

  16. SLStudio: Open-source framework for real-time structured light

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

An open-source framework for real-time structured light is presented. It is called "SLStudio", and enables real-time capture of metric depth images. The framework is modular, and extensible to support new algorithms for scene encoding/decoding, triangulation, and acquisition hardware. It is the aim that this software makes real-time 3D scene capture more widely accessible and serves as a foundation for new structured light scanners operating in real-time, e.g. 20 depth images per second and more. The use cases for such scanners are plentiful; however, due to the computational constraints, all public implementations so far are limited to offline processing. With "SLStudio", we are making a platform available which enables researchers from many different fields to build application-specific real-time 3D scanners. The software is hosted at http://compute.dtu.dk/~jakw/slstudio.
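
One standard building block behind the pluggable scene encoding/decoding mentioned above is N-step phase shifting: the fringe phase of each pixel is recovered from N sinusoidal patterns shifted by equal fractions of a period. A minimal 3-step sketch (this illustrates the generic technique, not necessarily SLStudio's exact decoder):

```python
import math

def pattern(phi, shift):
    """Observed intensity for fringe phase phi under a shifted pattern."""
    return 0.5 + 0.5 * math.cos(phi + shift)

def decode_phase(i1, i2, i3):
    """Recover the phase from intensities at shifts of 0, 120 and 240 deg."""
    return math.atan2(math.sqrt(3.0) * (i3 - i2), 2.0 * i1 - i2 - i3)

# Simulate one pixel: project three shifted patterns, then decode.
true_phi = 1.234
shifts = [0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0]
i1, i2, i3 = (pattern(true_phi, s) for s in shifts)
print(round(decode_phase(i1, i2, i3), 3))  # 1.234: the phase is recovered
```

Per-pixel phase, combined with the projector-camera calibration, is what the triangulation stage turns into metric depth; doing this at 20+ depth images per second is the computational constraint the abstract refers to.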

  17. Induced tauopathy in a novel 3D-culture model mediates neurodegenerative processes: a real-time study on biochips.

    Directory of Open Access Journals (Sweden)

    Diana Seidel

    Full Text Available Tauopathies including Alzheimer's disease represent one of the major health problems of the aging population worldwide. Therefore, a better understanding of tau-dependent pathologies and, consequently, of tau-related intervention strategies is highly demanded. In recent years, several tau-focused therapies have been proposed with the aim to stop disease progression. However, to develop efficient active pharmaceutical ingredients for the broad treatment of Alzheimer's disease patients, further improvements are necessary for understanding the detailed neurodegenerative processes as well as the mechanism and side effects of potential active pharmaceutical ingredients (APIs) in the neuronal system. In this context, there is a lack of suitable complex in vitro cell culture models recapitulating major aspects of tau-pathological degenerative processes in a timely and reproducible manner. Herewith, we describe a novel 3D SH-SY5Y cell-based tauopathy model that shows advanced characteristics of matured neurons in comparison to monolayer cultures, without the need for artificial differentiation-promoting agents. Moreover, the recombinant expression of a novel highly pathologic fourfold-mutated human tau variant led to fast and pronounced degeneration of neuritic processes. The neurodegenerative effects could be analyzed in real time and with high sensitivity using our unique microcavity array-based impedance spectroscopy measurement system. We were able to quantify a time- and concentration-dependent relative impedance decrease when Alzheimer's disease-like tau pathology was induced in the neuronal 3D cell culture model. In combination with the collected optical information, the degenerative processes within each 3D culture could be monitored and analyzed. More strikingly, tau-specific regenerative effects caused by tau-focused active pharmaceutical ingredients could be quantitatively monitored by impedance spectroscopy. Bringing together our novel complex 3

  18. Real-time monitoring of sucrose, sorbitol, d-glucose and d-fructose concentration by electromagnetic sensing.

    Science.gov (United States)

    Harnsoongnoen, Supakorn; Wanthong, Anuwat

    2017-10-01

    Magnetic sensing at microwave frequencies for real-time monitoring of sucrose, sorbitol, d-glucose and d-fructose concentrations is reported. The sensing element was designed based on a coplanar waveguide (CPW) loaded with a split ring resonator (SRR), which was fabricated on a DiClad 880 substrate with a thickness of 1.6 mm and relative permittivity (ε r ) of 2.2. The magnetic sensor was connected to a Vector Network Analyzer (VNA) and the electromagnetic interaction between the samples and sensor was analyzed. The magnitude of the transmission coefficient (S 21 ) was used as an indicator to detect solution sample concentrations ranging from 0.04 to 0.20 g/ml. The experimental results confirmed that the developed system using microwaves for the real-time monitoring of sucrose, sorbitol, d-glucose and d-fructose concentrations gave unique results for each solution type and concentration. Moreover, the proposed sensor has a wide dynamic range, high linearity, fast operation and low cost. Copyright © 2017 Elsevier Ltd. All rights reserved.
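
The reported high linearity implies a simple least-squares calibration line mapping |S21| to concentration suffices for readout. A hedged sketch with made-up readings (not the paper's measured data):

```python
# Least-squares calibration of concentration against |S21|, the kind of
# linear readout the reported high linearity implies. Toy values only.

concentrations = [0.04, 0.08, 0.12, 0.16, 0.20]   # g/ml
s21_db = [-18.0, -19.0, -20.0, -21.0, -22.0]      # |S21| in dB (toy data)

n = len(s21_db)
mx = sum(s21_db) / n
my = sum(concentrations) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(s21_db, concentrations))
         / sum((x - mx) ** 2 for x in s21_db))
intercept = my - slope * mx

# Invert a new |S21| reading of -19.5 dB into a concentration estimate.
predicted = slope * -19.5 + intercept
print(round(predicted, 3))  # 0.1 g/ml on this toy calibration
```

In practice each solute would get its own calibration line, since the abstract notes each solution type produces a distinct response.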

  19. A simplified 2D to 3D video conversion technology: taking virtual campus video production as an example

    Directory of Open Access Journals (Sweden)

    ZHUANG Huiyang

    2012-10-01

    Full Text Available This paper describes a simplified 2D to 3D video conversion technology, taking virtual campus 3D video production as an example. First, it clarifies the meaning of 2D to 3D video conversion technology and points out the disadvantages of traditional methods. Second, it presents an innovative and convenient method; a flow diagram and the software and hardware configurations are given. Finally, detailed descriptions of the conversion steps and precautions are given for the three processes: preparing materials, modeling objects and baking landscapes, and recording the screen and converting videos.

  20. "Augmented reality" in conventional simulation by projection of 3-D structures into 2-D images. A comparison with virtual methods

    International Nuclear Information System (INIS)

    Deutschmann, H.; Nairz, O.; Zehentmayr, F.; Fastner, G.; Sedlmayer, F.; Steininger, P.; Kopp, P.; Merz, F.; Wurstbauer, K.; Kranzinger, M.; Kametriser, G.; Kopp, M.

    2008-01-01

    Background and purpose: in this study, a new method is introduced, which allows the overlay of three-dimensional structures, that have been delineated on transverse slices, onto the fluoroscopy from conventional simulators in real time. Patients and methods: setup deviations between volumetric imaging and simulation were visualized, measured and corrected for 701 patient isocenters. Results: comparing the accuracy to mere virtual simulation lacking additional X-ray imaging, a clear benefit of the new method could be shown. On average, virtual prostate simulations had to be corrected by 0.48 cm (standard deviation [SD] 0.38), and those of the breast by 0.67 cm (SD 0.66). Conclusion: the presented method provides an easy way to determine entity-specific safety margins related to patient setup errors upon registration of bony anatomy (prostate 0.9 cm for 90% of cases, breast 1.3 cm). The important role of planar X-ray imaging was clearly demonstrated. The innovation can also be applied to adaptive image-guided radiotherapy (IGRT) protocols. (orig.)

  1. Exploring 3-D Virtual Reality Technology for Spatial Ability and Chemistry Achievement

    Science.gov (United States)

    Merchant, Z.; Goetz, E. T.; Keeney-Kennicutt, W.; Cifuentes, L.; Kwok, O.; Davis, T. J.

    2013-01-01

    We investigated the potential of Second Life® (SL), a three-dimensional (3-D) virtual world, to enhance undergraduate students' learning of a vital chemistry concept. A quasi-experimental pre-posttest control group design was used to conduct the study. A total of 387 participants completed three assignment activities either in SL or using…

  2. Using virtual reality technology and hand tracking technology to create software for training surgical skills in 3D game

    Science.gov (United States)

    Zakirova, A. A.; Ganiev, B. A.; Mullin, R. I.

    2015-11-01

    The lack of visible and approachable ways of training surgical skills is one of the main problems in medical education. Existing simulation training devices are not designed to teach students and are not widely available due to the high cost of the equipment. Using modern technologies such as virtual reality and hand-tracking technology, we want to create an innovative method for learning the techniques of conducting operations in a 3D game format, which can make the education process interesting and effective. Creating a 3D virtual simulator will solve several conceptual problems at once: the opportunity to improve practical skills without time limits and without risk to the patient; highly realistic rendering of the operating environment and anatomical body structures; the use of game mechanics to ease information perception and accelerate the memorization of methods; and the accessibility of the program.

  3. Transfer of Skill from a Virtual Reality Trainer to Real Juggling

    Directory of Open Access Journals (Sweden)

    Gopher Daniel

    2011-12-01

    Full Text Available The purpose of this study was to evaluate transfer of training from a virtual reality environment that captures visual and temporal-spatial aspects of juggling, but not the motor demands of juggling. Transfer of skill to real juggling was examined by comparing the juggling performance of novices who either experienced both the virtual training protocol and real juggling practice, or only practiced real juggling. After ten days of training, participants who had alternated between real and virtual training demonstrated performance comparable to those who only practiced real juggling. Moreover, they adapted better to instructed changes in temporal-spatial constraints. These results imply that juggling-relevant skill subcomponents can be trained in the virtual environment, and support the notion that cognitive aspects of a skill can be separately trained to enhance the acquisition of a complex perceptual-motor task. This study was performed within the SKILLS integrated project of the EC 6th framework.

  4. 3D visualisation of the middle ear and adjacent structures using reconstructed multi-slice CT datasets, correlating 3D images and virtual endoscopy to the 2D cross-sectional images

    International Nuclear Information System (INIS)

    Rodt, T.; Ratiu, P.; Kacher, D.F.; Anderson, M.; Jolesz, F.A.; Kikinis, R.; Becker, H.; Bartling, S.

    2002-01-01

    The 3D imaging of the middle ear facilitates better understanding of the patient's anatomy. Cross-sectional slices, however, often allow a more accurate evaluation of anatomical structures, as some detail may be lost through post-processing. In order to demonstrate the advantages of combining both approaches, we performed computed tomography (CT) imaging in two normal and 15 different pathological cases, and the 3D models were correlated to the cross-sectional CT slices. Reconstructed CT datasets were acquired by multi-slice CT. Post-processing was performed using the in-house software "3D Slicer", applying thresholding and manual segmentation. 3D models of the individual anatomical structures were generated and displayed in different colours. The display of relevant anatomical and pathological structures was evaluated in the greyscale 2D slices, 3D images, and the 2D slices showing the segmented 2D anatomy in different colours for each structure. Correlating 2D slices to the 3D models and virtual endoscopy helps to combine the advantages of each method. As generating 3D models can be extremely time-consuming, this approach can be a clinically applicable way of gaining a 3D understanding of the patient's anatomy by using models as a reference. Furthermore, it can help radiologists and otolaryngologists evaluating the 2D slices by adding the correct 3D information that would otherwise have to be mentally integrated. The method can be applied to radiological diagnosis, surgical planning, and, especially, to teaching. (orig.)

  5. The virtual lover: variable and easily guided 3D fish animations as an innovative tool in mate-choice experiments with sailfin mollies-I. Design and implementation.

    Science.gov (United States)

    Müller, Klaus; Smielik, Ievgen; Hütwohl, Jan-Marco; Gierszewski, Stefanie; Witte, Klaudia; Kuhnert, Klaus-Dieter

    2017-02-01

    Animal behavior researchers often face problems regarding standardization and reproducibility of their experiments. This has led to the partial substitution of live animals with artificial virtual stimuli. In addition to standardization and reproducibility, virtual stimuli open new options for researchers since they are easily changeable in morphology and appearance, and their behavior can be defined. In this article, a novel toolchain to conduct behavior experiments with fish is presented by a case study in sailfin mollies, Poecilia latipinna. As the toolchain holds many different and novel features, it offers new possibilities for studies in behavioral animal research and promotes the standardization of experiments. The presented method includes options to design, animate, and present virtual stimuli to live fish. The designing tool offers an easy and user-friendly way to define size, coloration, and morphology of stimuli, and moreover it is able to configure virtual stimuli randomly without any user influence. Furthermore, the toolchain brings a novel method to animate stimuli in a semiautomatic way with the help of a game controller. These created swimming paths can be applied to different stimuli in real time. A presentation tool combines models and swimming paths according to formerly defined playlists, and presents the stimuli on 2 screens. Experiments with live sailfin mollies validated the usage of the created virtual 3D fish models in mate-choice experiments.

  6. Realidad virtual y materialidad

    OpenAIRE

    Pérez Herranz, Fernando Miguel

    2009-01-01

    1. Starting phenomenology: Real / Symbolic / Imaginary 2. Reality 3. Virtual 3.1. Virtual / real / possible / probable 3.2. The contexts of virtual reality A) IMMERSIVE VIRTUAL REALITY B) NON-IMMERSIVE VIRTUAL REALITY C) VIRTUAL REALITY AND DIGITIZATION 3.3. The virtual / real crossover 3.4. Philosophical questions 4. Materiality 5. Materiality and decentering 5.1. Examples of decentering in the contexts of virtual reality A') CARTESIAN DUALISM, THE BODY AND THE «CYBORG» B') THE SPIRIT...

  7. An Evolutionary Real-Time 3D Route Planner for Aircraft

    Institute of Scientific and Technical Information of China (English)

    郑昌文; 丁明跃; 周成平

    2003-01-01

    A novel evolutionary route planner for aircraft is proposed in this paper. In the new planner, individual candidates are evaluated with respect to the workspace, so that computation of the configuration space is not required. By using a problem-specific chromosome structure and genetic operators, routes are generated in real time, with different mission constraints taken into account, such as minimum route-leg length, flying altitude, maximum turning angle, maximum climbing/diving angle, and a route-distance constraint.
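As a rough illustration of the kind of workspace-based evaluation this abstract describes, the sketch below scores a candidate route directly against leg-length and turning-angle constraints. The penalty weight, constraint values, and planar turning-angle computation are illustrative assumptions, not the paper's actual evaluator.

```python
import math

def route_fitness(route, min_leg=2.0, max_turn_deg=45.0):
    """Score a candidate route (list of (x, y, z) waypoints): shorter is
    better, with penalties for violating the minimum leg-length and
    maximum turning-angle constraints.  Illustrative sketch only."""
    penalty, length = 0.0, 0.0
    for i in range(len(route) - 1):
        leg = math.dist(route[i], route[i + 1])
        length += leg
        if leg < min_leg:                       # minimum route-leg length
            penalty += min_leg - leg
        if i > 0:                               # maximum turning angle (in x-y plane)
            ax, ay = route[i][0] - route[i - 1][0], route[i][1] - route[i - 1][1]
            bx, by = route[i + 1][0] - route[i][0], route[i + 1][1] - route[i][1]
            na, nb = math.hypot(ax, ay), math.hypot(bx, by)
            if na > 0 and nb > 0:
                cos_t = max(-1.0, min(1.0, (ax * bx + ay * by) / (na * nb)))
                turn = math.degrees(math.acos(cos_t))
                if turn > max_turn_deg:
                    penalty += turn - max_turn_deg
    return length + 10.0 * penalty              # lower fitness = better route
```

A genetic algorithm would then select and mutate waypoint lists to minimize this score; the constant penalty weight (10.0) is an arbitrary choice for the sketch.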

  8. Virtual surgical planning and 3D printing in prosthetic orbital reconstruction with percutaneous implants: a technical case report

    Directory of Open Access Journals (Sweden)

    Huang Y

    2016-10-01

    Full Text Available Yu-Hui Huang,1,2 Rosemary Seelaus,1,2 Linping Zhao,1,2 Pravin K Patel,1,2 Mimis Cohen1,2 1The Craniofacial Center, Department of Surgery, Division of Plastic & Reconstructive Surgery, University of Illinois Hospital & Health Sciences System, 2University of Illinois College of Medicine at Chicago, Chicago, IL, USA Abstract: Osseointegrated titanium implants to the cranial skeleton for retention of facial prostheses have proven to be a reliable replacement for adhesive systems. However, improper placement of the implants can jeopardize prosthetic outcomes, and long-term success of an implant-retained prosthesis. Three-dimensional (3D computer imaging, virtual planning, and 3D printing have become accepted components of the preoperative planning and design phase of treatment. Computer-aided design and computer-assisted manufacture that employ cone-beam computed tomography data offer benefits to patient treatment by contributing to greater predictability and improved treatment efficiencies with more reliable outcomes in surgical and prosthetic reconstruction. 3D printing enables transfer of the virtual surgical plan to the operating room by fabrication of surgical guides. Previous studies have shown that accuracy improves considerably with guided implantation when compared to conventional template or freehand implant placement. This clinical case report demonstrates the use of a 3D technological pathway for preoperative virtual planning through prosthesis fabrication, utilizing 3D printing, for a patient with an acquired orbital defect that was restored with an implant-retained silicone orbital prosthesis. Keywords: computer-assisted surgery, virtual surgical planning (VSP, 3D printing, orbital prosthetic reconstruction, craniofacial implants

  9. Computational hologram synthesis and representation on spatial light modulators for real-time 3D holographic imaging

    International Nuclear Information System (INIS)

    Reichelt, Stephan; Leister, Norbert

    2013-01-01

    In dynamic computer-generated holography that utilizes spatial light modulators (SLMs), both hologram synthesis and hologram representation are essential in terms of fast computation and high reconstruction quality. For hologram synthesis, i.e. the computation step, Fresnel-transform-based or point-source-based ray-tracing methods can be applied. In the encoding step, the complex wave field has to be optimally represented by the SLM with its given modulation capability. Proper hologram reconstruction implies simultaneous and independent amplitude and phase modulation of the input wave field by the SLM. In this paper, we discuss full complex hologram representation methods on SLMs by considering the effect of inherent SLM parameters, such as modulation type and bit depth, on reconstruction performance measures such as diffraction efficiency and SNR. We review three implementation schemes: Burckhardt amplitude-only representation, phase-only macro-pixel representation, and two-phase interference representation. Besides the optical performance, we address their hardware complexity and required computational load. Finally, we experimentally demonstrate holographic reconstructions of the different representation schemes as obtained by functional prototypes utilizing SeeReal's viewing-window holographic display technology. The proposed hardware implementations enable fast encoding of complex-valued hologram data and thus will pave the way for commercial real-time holographic 3D imaging in the near future.
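The two-phase interference representation mentioned in the abstract can be illustrated with a small sketch: any complex hologram value of magnitude at most 2 can be written as the sum of two unit-magnitude phasors. This is a toy for scalar values only (no SLM quantization or normalization conventions), not SeeReal's actual encoder.

```python
import cmath
import math

def two_phase(c):
    """Decompose a complex value c (|c| <= 2) into two unit-amplitude
    phasors whose interference reproduces it:
        c = exp(1j*t1) + exp(1j*t2)
    since exp(1j*(phi+d)) + exp(1j*(phi-d)) = 2*cos(d)*exp(1j*phi)."""
    a, phi = abs(c), cmath.phase(c)
    if a > 2.0:
        raise ValueError("scale the hologram so that |c| <= 2 first")
    d = math.acos(a / 2.0)   # half of the phase difference between the two phasors
    return phi + d, phi - d

t1, t2 = two_phase(0.8 + 0.6j)
reconstructed = cmath.exp(1j * t1) + cmath.exp(1j * t2)
```

On a phase-only SLM, the two phase values would be written to two neighboring (or interleaved) pixels so that their fields superpose optically.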

  10. CROSS DRIVE: A New Interactive and Immersive Approach for Exploring 3D Time-Dependent Mars Atmospheric Data in Distributed Teams

    Science.gov (United States)

    Gerndt, Andreas M.; Engelke, Wito; Giuranna, Marco; Vandaele, Ann C.; Neary, Lori; Aoki, Shohei; Kasaba, Yasumasa; Garcia, Arturo; Fernando, Terrence; Roberts, David; CROSS DRIVE Team

    2016-10-01

    Atmospheric phenomena of Mars can be highly dynamic and have daily and seasonal variations. Planetary-scale wavelike disturbances, for example, are frequently observed in Mars' polar winter atmosphere. Possible sources of the wave activity were suggested to be dynamical instabilities and quasi-stationary planetary waves, i.e. waves that arise predominantly via zonally asymmetric surface properties. For a comprehensive understanding of these phenomena, single layers of altitude have to be analyzed carefully and relations between different atmospheric quantities and interaction with the surface of Mars have to be considered. The CROSS DRIVE project tries to address the presentation of those data with a global view by means of virtual reality techniques. Complex orbiter data from spectrometer and observation data from Earth are combined with global circulation models and high-resolution terrain data and images available from Mars Express or MRO instruments. Scientists can interactively extract features from those dataset and can change visualization parameters in real-time in order to emphasize findings. Stereoscopic views allow for perception of the actual 3D behavior of Mars's atmosphere. A very important feature of the visualization system is the possibility to connect distributed workspaces together. This enables discussions between distributed working groups. The workspace can scale from virtual reality systems to expert desktop applications to web-based project portals. If multiple virtual environments are connected, the 3D position of each individual user is captured and used to depict the scientist as an avatar in the virtual world. The appearance of the avatar can also scale from simple annotations to complex avatars using tele-presence technology to reconstruct the users in 3D. Any change of the feature set (annotations, cutplanes, volume rendering, etc.) within the VR is immediately exchanged between all connected users. 
This allows that everybody is always

  11. Synthesized view comparison method for no-reference 3D image quality assessment

    Science.gov (United States)

    Luo, Fangzhou; Lin, Chaoyi; Gu, Xiaodong; Ma, Xiaojun

    2018-04-01

    We develop a no-reference image quality assessment metric to evaluate the quality of synthesized views rendered from the Multi-view Video plus Depth (MVD) format. Our metric, named Synthesized View Comparison (SVC), is designed for real-time quality monitoring at the receiver side of a 3D-TV system. The metric utilizes intermediate virtual views warped from the left and right views by a depth-image-based rendering (DIBR) algorithm, and compares the difference between the virtual views rendered from different cameras using Structural SIMilarity (SSIM), a popular 2D full-reference image quality assessment metric. The experimental results indicate that our no-reference quality assessment metric for synthesized images has competitive prediction performance compared with some classic full-reference image quality assessment metrics.
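A minimal sketch of the SSIM comparison at the heart of such a metric, computed over a single global window rather than the local sliding windows of the full SSIM algorithm; the stabilizing constants follow the usual SSIM defaults. Comparing two differently-warped versions of the same intermediate view (as SVC does) would just mean passing both warped images to this function.

```python
import numpy as np

def global_ssim(x, y, L=255.0):
    """Single-window SSIM between two images of equal shape.
    Illustrative global variant; production metrics apply this
    locally with a sliding Gaussian window and average the map."""
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2   # standard SSIM constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score 1.0; any luminance, contrast, or structure difference pulls the score below 1.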

  12. Design of virtual three-dimensional instruments for sound control

    Science.gov (United States)

    Mulder, Axel Gezienus Elith

    An environment for designing virtual instruments with 3D geometry has been prototyped and applied to real-time sound control and design. It enables a sound artist, musical performer or composer to design an instrument according to preferred or required gestural and musical constraints instead of constraints based only on physical laws as they apply to an instrument with a particular geometry. Sounds can be created, edited or performed in real-time by changing parameters like position, orientation and shape of a virtual 3D input device. The virtual instrument can only be perceived through a visualization and acoustic representation, or sonification, of the control surface. No haptic representation is available. This environment was implemented using CyberGloves, Polhemus sensors, an SGI Onyx and by extending a real-time, visual programming language called Max/FTS, which was originally designed for sound synthesis. The extension involves software objects that interface the sensors and software objects that compute human movement and virtual object features. Two pilot studies have been performed, involving virtual input devices with the behaviours of a rubber balloon and a rubber sheet for the control of sound spatialization and timbre parameters. Both manipulation and sonification methods affect the naturalness of the interaction. Informal evaluation showed that a sonification inspired by the physical world appears natural and effective. More research is required for a natural sonification of virtual input device features such as shape, taking into account possible co-articulation of these features. While both hands can be used for manipulation, left-hand-only interaction with a virtual instrument may be a useful replacement for and extension of the standard keyboard modulation wheel. More research is needed to identify and apply manipulation pragmatics and movement features, and to investigate how they are co-articulated, in the mapping of virtual object

  13. ARC+(Registered Trademark) and ARC PC Welding Simulators: Teach Welders with Virtual Interactive 3D Technologies

    Science.gov (United States)

    Choquet, Claude

    2011-01-01

    123 Certification Inc., a Montreal-based company, has developed an innovative hands-on welding simulator solution to help build the welding workforce in the simplest way. The solution lies in virtual reality technology, which has been fully tested since the early 90s. The president and founder of 123 Certification Inc., Mr. Claude Choquet Ing. Msc. IWE, acts as a bridge between the welding and the programming worlds. Working in these fields for more than 20 years, he has filed 12 patents worldwide for a gesture control platform with leading-edge hardware related to simulation. In the summer of 2006, Mr. Choquet was proud to be invited to the annual IIW International Welding Congress in Quebec City to launch the ARC+ welding simulator. A 100% virtual reality system and web-based training center was developed to simulate multi-process, multi-material, multi-position and multi-pass welding. The simulator is intended to train welding students and apprentices in schools or industries. The welding simulator is composed of a real welding electrode holder (SMAW-GTAW) and gun (GMAW-FCAW), a head-mounted display (HMD), a 6-degrees-of-freedom tracking system for interaction between the user's hands and head, as well as external audio speakers. Both guns and HMD interact online and simultaneously. The welding simulation is based on the laws of physics and empirical results from detailed analysis of a series of welding tests based on industrial applications tested over the last 20 years. The simulation runs in real-time, using a local logic network to determine the quality and shape of the created weld. These results are based on the orientation, distance, and speed of the welding torch and depth of penetration. The welding process and resulting weld bead are displayed in a virtual environment with screenplay interactive training modules. For review, weld quality and recorded process values can be displayed and diagnosed after welding. To help in the learning process, a

  14. Mitigating Space Weather Impacts on the Power Grid in Real-Time: Applying 3-D EarthScope Magnetotelluric Data to Forecasting Reactive Power Loss in Power Transformers

    Science.gov (United States)

    Schultz, A.; Bonner, L. R., IV

    2017-12-01

    Current efforts to assess risk to the power grid from geomagnetic disturbances (GMDs) that result in geomagnetically induced currents (GICs) seek to identify potential "hotspots," based on statistical models of GMD storm scenarios and power distribution grounding models that assume that the electrical conductivity of the Earth's crust and mantle varies only with depth. The NSF-supported EarthScope Magnetotelluric (MT) Program operated by Oregon State University has mapped 3-D ground electrical conductivity structure across more than half of the continental US. MT data, the naturally occurring time variations in the Earth's vector electric and magnetic fields at ground level, are used to determine the MT impedance tensor for each site (the ratio of horizontal vector electric and magnetic fields at ground level expressed as a complex-valued frequency domain quantity). The impedance provides information on the 3-D electrical conductivity structure of the Earth's crust and mantle. We demonstrate that use of 3-D ground conductivity information significantly improves the fidelity of GIC predictions over existing 1-D approaches. We project real-time magnetic field data streams from US Geological Survey magnetic observatories into a set of linear filters that employ the impedance data and that generate estimates of ground level electric fields at the locations of MT stations. The resulting ground electric fields are projected to and integrated along the path of power transmission lines. These serve as inputs to power flow models that represent the power transmission grid, yielding a time-varying set of quasi-real-time estimates of reactive power loss at the power transformers that are critical infrastructure for power distribution. 
We demonstrate that peak reactive power loss and hence peak risk for transformer damage from GICs does not necessarily occur during peak GMD storm times, but rather depends on the time-evolution of the polarization of the GMD's inducing fields
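The filtering step described in this abstract, estimating ground electric fields from observatory magnetic-field streams via an MT transfer function, can be sketched in the frequency domain. This toy version assumes a scalar impedance in place of the full 2×2 MT impedance tensor, and ignores units, detrending, and windowing.

```python
import numpy as np

def efield_from_bfield(b_t, dt, impedance):
    """Estimate a ground electric-field time series from a magnetic-field
    time series by applying a frequency-domain transfer function:
        E(f) = Z(f) * B(f)
    `impedance` is a callable mapping an array of frequencies (Hz) to
    complex transfer-function values.  Scalar sketch only; real MT
    processing uses the 2x2 impedance tensor on horizontal components."""
    B = np.fft.rfft(b_t)                       # magnetic field spectrum
    f = np.fft.rfftfreq(len(b_t), d=dt)        # frequency axis
    Z = impedance(f)                           # transfer function samples
    return np.fft.irfft(Z * B, n=len(b_t))     # back to the time domain
```

For example, a uniform half-space has Z proportional to sqrt(1j * 2 * pi * f); the resulting E-field series would then be integrated along each transmission-line path to drive the power-flow model.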

  15. Augmented reality and photogrammetry: A synergy to visualize physical and virtual city environments

    Science.gov (United States)

    Portalés, Cristina; Lerma, José Luis; Navarro, Santiago

    2010-01-01

    Close-range photogrammetry is based on the acquisition of imagery to make accurate measurements and, eventually, three-dimensional (3D) photo-realistic models. These models are a photogrammetric product per se. They are usually integrated into virtual reality scenarios where additional data such as sound, text or video can be introduced, leading to multimedia virtual environments. These environments allow users both to navigate and interact on different platforms such as desktop PCs, laptops and small hand-held devices (mobile phones or PDAs). In very recent years, a new technology derived from virtual reality has emerged: Augmented Reality (AR), which is based on mixing real and virtual environments to boost human interactions and real-life navigations. The synergy of AR and photogrammetry opens up new possibilities in the field of 3D data visualization, navigation and interaction far beyond the traditional static navigation and interaction in front of a computer screen. In this paper we introduce a low-cost outdoor mobile AR application to integrate buildings of different urban spaces. High-accuracy 3D photo-models derived from close-range photogrammetry are integrated in real (physical) urban worlds. The augmented environment that is presented herein requires for visualization a see-through video head mounted display (HMD), whereas the user's movement navigation is achieved in the real world with the help of an inertial navigation sensor. After introducing the basics of AR technology, the paper will deal with real-time orientation and tracking in combined physical and virtual city environments, merging close-range photogrammetry and AR. There remain, however, some software and complexity issues, which are discussed in the paper.

  16. Using the CAVE virtual-reality environment as an aid to 3-D electromagnetic field computation

    International Nuclear Information System (INIS)

    Turner, L.R.; Levine, D.; Huang, M.; Papka, M.

    1995-01-01

    One of the major problems in three-dimensional (3-D) field computation is visualizing the resulting 3-D field distributions. A virtual-reality environment, such as the CAVE (CAVE Automatic Virtual Environment), is helping to overcome this problem, thus making the results of computation more usable for designers and users of magnets and other electromagnetic devices. As a demonstration of the capabilities of the CAVE, the elliptical multipole wiggler (EMW), an insertion device being designed for the Advanced Photon Source (APS) now being commissioned at Argonne National Laboratory (ANL), was made visible, along with its fields and beam orbits. Other uses of the CAVE in preprocessing and postprocessing computation for electromagnetic applications are also discussed.

  17. Virtual surgical planning and 3D printing in prosthetic orbital reconstruction with percutaneous implants: a technical case report.

    Science.gov (United States)

    Huang, Yu-Hui; Seelaus, Rosemary; Zhao, Linping; Patel, Pravin K; Cohen, Mimis

    2016-01-01

    Osseointegrated titanium implants to the cranial skeleton for retention of facial prostheses have proven to be a reliable replacement for adhesive systems. However, improper placement of the implants can jeopardize prosthetic outcomes, and long-term success of an implant-retained prosthesis. Three-dimensional (3D) computer imaging, virtual planning, and 3D printing have become accepted components of the preoperative planning and design phase of treatment. Computer-aided design and computer-assisted manufacture that employ cone-beam computed tomography data offer benefits to patient treatment by contributing to greater predictability and improved treatment efficiencies with more reliable outcomes in surgical and prosthetic reconstruction. 3D printing enables transfer of the virtual surgical plan to the operating room by fabrication of surgical guides. Previous studies have shown that accuracy improves considerably with guided implantation when compared to conventional template or freehand implant placement. This clinical case report demonstrates the use of a 3D technological pathway for preoperative virtual planning through prosthesis fabrication, utilizing 3D printing, for a patient with an acquired orbital defect that was restored with an implant-retained silicone orbital prosthesis.

  18. 3D Image Display Courses for Information Media Students.

    Science.gov (United States)

    Yanaka, Kazuhisa; Yamanouchi, Toshiaki

    2016-01-01

    Three-dimensional displays are used extensively in movies and games. These displays are also essential in mixed reality, where virtual and real spaces overlap. Therefore, engineers and creators should be trained to master 3D display technologies. For this reason, the Department of Information Media at the Kanagawa Institute of Technology has launched two 3D image display courses specifically designed for students who aim to become information media engineers and creators.

  19. Teaching Basic Field Skills Using Screen-Based Virtual Reality Landscapes

    Science.gov (United States)

    Houghton, J.; Robinson, A.; Gordon, C.; Lloyd, G. E. E.; Morgan, D. J.

    2016-12-01

    We are using screen-based virtual reality landscapes, created using the Unity 3D game engine, to augment the training geoscience students receive in preparing for fieldwork. Students explore these landscapes as they would real ones, interacting with virtual outcrops to collect data, determine location, and map the geology. Skills for conducting field geological surveys - collecting, plotting and interpreting data; time management and decision making - are introduced interactively and intuitively. As with real landscapes, the virtual landscapes are open-ended terrains with embedded data. This means the game does not structure the student's interaction with the information; it is through experience that the student learns the best methods to work successfully and efficiently. These virtual landscapes are not replacements for geological fieldwork, but rather virtual spaces between classroom and field in which to train and reinforce essential skills. Importantly, these virtual landscapes offer accessible parallel provision for students unable to visit, or to fully partake in visiting, the field. The project has received positive feedback from both staff and students. Results show students find it easier to focus on learning these basic field skills in a classroom rather than a field setting, and that they make the same mistakes as when learning in the field, validating the realistic nature of the virtual experience and providing an opportunity to learn from these mistakes. The approach also saves time, and therefore resources, in the field, as basic skills are already embedded. 70% of students report increased confidence in how to map boundaries, and 80% have found the virtual training a useful experience. We are also developing landscapes based on real places with 3D photogrammetric outcrops, and a virtual urban landscape in which Engineering Geology students can conduct a site investigation. 
This project is a collaboration between the University of Leeds and Leeds College of Art, UK, and all

  20. Real-Time 3-Dimensional Ultrasound-Assisted Infraclavicular Brachial Plexus Catheter Placement: Implications of a New Technology

    Directory of Open Access Journals (Sweden)

    Steven R. Clendenen

    2010-01-01

    Full Text Available Background. There are a variety of techniques for targeting placement of an infraclavicular blockade; these include eliciting paresthesias, nerve stimulation, and 2-dimensional (2D) ultrasound (US) guidance. Current 2D US allows direct visualization of a “flat” image of the advancing needle and neurovascular structures, but without the ability to extensively analyze multidimensional data and allow for real-time manipulation. Three-dimensional (3D) ultrasonography has gained popularity and usefulness in many clinical specialties such as obstetrics and cardiology. We describe some of the potential clinical applications of 3D US in regional anesthesia. Methods. This case represents an infraclavicular catheter placement facilitated by 3D US, which demonstrates 360-degree spatial relationships of the entire anatomic region. Results. The block needle, peripheral nerve catheter, and local anesthetic diffusion were observed in multiple planes of view without manipulation of the US probe. Conclusion. Advantages of 3D US may include the ability to confirm correct needle and catheter placement prior to the injection of local anesthetic. The spread of local anesthetic along the length of the nerve can be easily observed while manipulating the 3D images in real time by simply rotating the trackball on the US machine, providing additional information that cannot be identified with 2D US alone.

  1. Acquiring 3D scene information from 2D images

    NARCIS (Netherlands)

    Li, Ping

    2011-01-01

    In recent years, people are becoming increasingly acquainted with 3D technologies such as 3DTV, 3D movies and 3D virtual navigation of city environments in their daily life. Commercial 3D movies are now commonly available for consumers. Virtual navigation of our living environment as used on a

  2. The Learner Characteristics, Features of Desktop 3D Virtual Reality Environments, and College Chemistry Instruction: A Structural Equation Modeling Analysis

    Science.gov (United States)

    Merchant, Zahira; Goetz, Ernest T.; Keeney-Kennicutt, Wendy; Kwok, Oi-man; Cifuentes, Lauren; Davis, Trina J.

    2012-01-01

    We examined a model of the impact of a 3D desktop virtual reality environment on the learner characteristics (i.e. perceptual and psychological variables) that can enhance chemistry-related learning achievements in an introductory college chemistry class. The relationships between the 3D virtual reality features and the chemistry learning test as…

  3. Real-time modulation of visual feedback on human full-body movements in a virtual mirror: development and proof-of-concept.

    Science.gov (United States)

    Roosink, Meyke; Robitaille, Nicolas; McFadyen, Bradford J; Hébert, Luc J; Jackson, Philip L; Bouyer, Laurent J; Mercier, Catherine

    2015-01-05

    Virtual reality (VR) provides interactive multimodal sensory stimuli and biofeedback, and can be a powerful tool for physical and cognitive rehabilitation. However, existing systems have generally not implemented realistic full-body avatars and/or a scaling of visual movement feedback. We developed a "virtual mirror" that displays a realistic full-body avatar that responds to full-body movements in all movement planes in real-time, and that allows for the scaling of visual feedback on movements in real-time. The primary objective of this proof-of-concept study was to assess the ability of healthy subjects to detect scaled feedback on trunk flexion movements. The "virtual mirror" was developed by integrating motion capture, virtual reality and projection systems. A protocol was developed to provide both augmented and reduced feedback on trunk flexion movements while sitting and standing. The task required reliance on both visual and proprioceptive feedback. The ability to detect scaled feedback was assessed in healthy subjects (n = 10) using a two-alternative forced choice paradigm. Additionally, immersion in the VR environment and task adherence (flexion angles, velocity, and fluency) were assessed. The ability to detect scaled feedback could be modelled using a sigmoid curve with a high goodness of fit (R2 range 89-98%). The point of subjective equivalence was not significantly different from 0 (i.e. not shifted), indicating an unbiased perception. The just noticeable difference was 0.035 ± 0.007, indicating that subjects were able to discriminate different scaling levels consistently. VR immersion was reported to be good, despite some perceived delays between movements and VR projections. Movement kinematic analysis confirmed task adherence. The new "virtual mirror" extends existing VR systems for motor and pain rehabilitation by enabling the use of realistic full-body avatars and scaled feedback. Proof-of-concept was demonstrated for the assessment of
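The psychometric analysis reported in this abstract (sigmoid fit, point of subjective equivalence, just-noticeable difference) can be sketched as follows. The scaling levels and response proportions below are synthetic stand-ins, not the study's data, and the JND convention shown (half the 25-75% response range) is one common choice among several.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, pse, scale):
    """Psychometric function: probability of judging feedback 'augmented'
    as a function of the scaling level x."""
    return 1.0 / (1.0 + np.exp(-(x - pse) / scale))

# Synthetic, noiseless 2AFC data at seven hypothetical scaling levels.
levels = np.array([-0.10, -0.05, -0.02, 0.0, 0.02, 0.05, 0.10])
p_resp = logistic(levels, 0.0, 0.03)

# Fit the sigmoid; pse = point of subjective equivalence.
(pse, scale), _ = curve_fit(logistic, levels, p_resp, p0=[0.0, 0.05])

# JND as half the distance between the 25% and 75% response points:
# for a logistic, x75 - x25 = 2 * scale * ln(3).
jnd = scale * np.log(3.0)
```

An unbiased observer has a PSE near 0 (no shift), and a smaller JND means finer discrimination of the feedback scaling.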

  4. Real and virtual explorations of the environment and interactive tracking of movable objects for the blind on the basis of tactile-acoustical maps and 3D environment models.

    Science.gov (United States)

    Hub, Andreas; Hartter, Tim; Kombrink, Stefan; Ertl, Thomas

    2008-01-01

    Purpose: This study describes the development of a multi-functional assistant system for the blind which combines localisation, real and virtual navigation within modelled environments, and the identification and tracking of fixed and movable objects. The approximate position of buildings is determined with a global positioning sensor (GPS); the user then establishes an exact position at a specific landmark, such as a door. This location initialises indoor navigation, based on an inertial sensor, a step-recognition algorithm and a map. Tracking of movable objects is provided by another inertial sensor and a head-mounted stereo camera, combined with 3D environmental models. This study developed an algorithm based on shape and colour to identify objects, and used a common face-detection algorithm to inform the user of the presence and position of others. The system allows blind people to determine their position with approximately 1 metre accuracy. Virtual exploration of the environment can be accomplished by moving one's finger on the touch screen of a small portable tablet PC. The names of rooms, building features and hazards, modelled objects and their positions are presented acoustically or in Braille. Given adequate environmental models, this system offers blind people the opportunity to navigate independently and safely, even within unknown environments. Additionally, the system facilitates education and rehabilitation by providing, in several languages, object names, features and relative positions.
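A toy version of the step-recognition dead-reckoning idea described above: each detected step advances the position estimate by a fixed step length along the current inertial heading. The step length, the heading convention (radians from the +x axis), and the 2-D simplification are all assumptions for illustration.

```python
import math

def dead_reckon(start, step_headings, step_length=0.7):
    """Update a 2-D position from a sequence of detected steps.

    start         -- (x, y) position fixed at a known landmark (e.g. a door)
    step_headings -- one heading (radians, measured from the +x axis)
                     per step reported by the step-recognition algorithm
    step_length   -- assumed stride length in metres
    """
    x, y = start
    for heading in step_headings:
        x += step_length * math.cos(heading)
        y += step_length * math.sin(heading)
    return x, y
```

In a real system the per-step drift of the inertial heading is what limits accuracy, which is why the abstract's landmark-based position fix matters: it re-anchors the dead-reckoned track.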

  5. Virtual embryology: a 3D library reconstructed from human embryo sections and animation of development process.

    Science.gov (United States)

    Komori, M; Miura, T; Shiota, K; Minato, K; Takahashi, T

    1995-01-01

    The volumetric shape of a human embryo and its development are hard to comprehend when viewed as 2D schemes in a textbook or as microscopic sectional images. In this paper, a CAI and research support system for human embryology using multimedia presentation techniques is described. In this system, 3D data are acquired from a series of sliced specimens. The 3D structure can be viewed interactively by rotating, extracting, and truncating the whole body or an organ. Moreover, the development process of embryos can be animated by applying a morphing technique to specimens at several stages. The system is intended to be used interactively, like a virtual reality system; hence, it is called Virtual Embryology.
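The morphing between developmental stages can be pictured, in its simplest form, as interpolation between corresponding 3D points. This toy sketch assumes linear blending of matched landmark arrays, which is an illustration and not the authors' method:

```python
# Toy morphing sketch: blend corresponding (N, 3) landmark coordinates
# between two developmental stages to generate intermediate animation frames.
import numpy as np

def morph(stage_a, stage_b, t):
    """Linear blend of matched (N, 3) landmark arrays; t in [0, 1]."""
    return (1.0 - t) * stage_a + t * stage_b

# Two hypothetical stages; t = 0.5 gives the halfway frame of the animation.
a = np.zeros((4, 3))
b = np.ones((4, 3))
mid = morph(a, b, 0.5)
```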

  6. VC and ACIS/HOOPS based semi-physical virtual prototype design and motion simulation of 2D scanning mirror

    Science.gov (United States)

    Liu, Xiangyan; Dai, Xiaobing; He, Xudong; Gao, Pengcheng

    2013-10-01

    The image-spectrum integrated instrument is an infrared scanning system that integrates optics, mechanics, electronics and information processing. Not only can it achieve scene imaging, but it can also detect, track and identify targets of interest in the scene by acquiring their spectra. After briefly introducing the image-spectrum integrated instrument and analyzing how the 2D scanning mirror works, this paper builds a 3D model of the 2D scanning mirror and simulates its motion using two PCs, based on VC++ and ACIS/HOOPS. The two PCs communicate with each other through serial ports. One PC serves as the host computer, running the control software; it is responsible for loading the image sequence, image processing, target detection, and generating and sending motion commands to the scanning mirror. The other serves as the slave computer, running the scanning mirror motion simulation software; it is responsible for receiving motion commands and controlling the scanning mirror to complete the corresponding movements. The method proposed in this paper adopts semi-physical virtual prototype technology and uses real scene image sequences to control a virtual 2D scanning mirror and simulate the motion of a real one. It requires no physical scanning mirror and is of practical significance for debugging the control software of the 2D scanning mirror.
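The host/slave serial link described above needs an agreed command format. The sketch below shows one hypothetical line-oriented encoding; the `MOV` command name and field layout are invented for illustration (the paper does not specify its protocol), and the encoded bytes would be carried by a serial library such as pyserial in a real setup.

```python
# Hypothetical host->slave protocol sketch: the host encodes 2D mirror
# pointing commands as newline-terminated ASCII lines; the slave parses
# them to drive the simulated scanning mirror.
def encode_cmd(azimuth_deg, elevation_deg):
    """Pack a mirror pointing command into one line of bytes."""
    return f"MOV {azimuth_deg:.3f} {elevation_deg:.3f}\n".encode("ascii")

def parse_cmd(line):
    """Inverse of encode_cmd; returns (azimuth, elevation) in degrees."""
    tag, az, el = line.decode("ascii").split()
    if tag != "MOV":
        raise ValueError(f"unknown command: {tag}")
    return float(az), float(el)

cmd = encode_cmd(12.5, -3.25)
angles = parse_cmd(cmd)
```

A text protocol like this is easy to log and debug on both PCs, which matches the paper's stated goal of debugging the control software without physical hardware.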

  7. The perception of spatial layout in real and virtual worlds.

    Science.gov (United States)

    Arthur, E J; Hancock, P A; Chrysler, S T

    1997-01-01

    As human-machine interfaces grow more immersive and graphically oriented, virtual environment systems become more prominent as the medium for human-machine communication. Often, virtual environments (VE) are built to provide exact metrical representations of existing or proposed physical spaces. However, it is not known how individuals develop representational models of these spaces in which they are immersed, and how those models may be distorted with respect to both the virtual and real-world equivalents. To evaluate the process of model development, the present experiment examined participants' ability to reproduce a complex spatial layout of objects after having experienced it previously under different viewing conditions. The layout consisted of nine common objects arranged on a flat plane. These objects could be viewed in a free binocular virtual condition, a free binocular real-world condition, and in a static monocular view of the real world. The first two allowed active exploration of the environment, while the latter condition allowed the participant only a passive opportunity to observe from a single viewpoint. Viewing condition was a between-subject variable, with 10 participants randomly assigned to each condition. Performance was assessed using mapping accuracy and triadic comparisons of relative inter-object distances. Mapping results showed a significant effect of viewing condition where, interestingly, the static monocular condition was superior to both the active virtual and real binocular conditions. Results for the triadic comparisons showed a significant interaction of gender by viewing condition, in which males were more accurate than females. These results suggest that the situation model resulting from interaction with a virtual environment was indistinguishable from interaction with real objects, at least within the constraints of the present procedure.

  8. D-VASim: An Interactive Virtual Laboratory Environment for the Simulation and Analysis of Genetic Circuits

    DEFF Research Database (Denmark)

    Baig, Hasan; Madsen, Jan

    2016-01-01

    The runtime interaction gives the user a feeling of being in the lab performing a real-world experiment. In this work, we present a user-friendly software tool named D-VASim (Dynamic Virtual Analyzer and Simulator), which provides a virtual laboratory environment to simulate and analyze...

  9. Virtual Team Work : Group Decision Making in 3D Virtual Environments

    NARCIS (Netherlands)

    Schouten, A.P.; van den Hooff, B.; Feldberg, F.

    2016-01-01

    This study investigates how three-dimensional virtual environments (3DVEs) support shared understanding and group decision making. Based on media synchronicity theory, we pose that the shared environment and avatar-based interaction allowed by 3DVEs aid convergence processes in teams working on a

  10. Virtual Team Work : Group Decision Making in 3D Virtual Environments

    NARCIS (Netherlands)

    Schouten, Alexander P.; van den Hooff, Bart; Feldberg, Frans

    This study investigates how three-dimensional virtual environments (3DVEs) support shared understanding and group decision making. Based on media synchronicity theory, we pose that the shared environment and avatar-based interaction allowed by 3DVEs aid convergence processes in teams working on a

  11. Tactile display for virtual 3D shape rendering

    CERN Document Server

    Mansutti, Alessandro; Bordegoni, Monica; Cugini, Umberto

    2017-01-01

    This book describes a novel system for the simultaneous visual and tactile rendering of product shapes which allows designers to simultaneously touch and see new product shapes during the conceptual phase of product development. This system offers important advantages, including potential cost and time savings, compared with the standard product design process in which digital 3D models and physical prototypes are often repeatedly modified until an optimal design is achieved. The system consists of a tactile display that is able to represent, within a real environment, the shape of a product. Designers can explore the rendered surface by touching curves lying on the product shape, selecting those curves that can be considered style features and evaluating their aesthetic quality. In order to physically represent these selected curves, a flexible surface is modeled by means of servo-actuated modules controlling a physical deforming strip. The tactile display is designed so as to be portable, low cost, modular,...

  12. An inkjet-printed buoyant 3-D lagrangian sensor for real-time flood monitoring

    KAUST Repository

    Farooqui, Muhammad Fahad; Claudel, Christian G.; Shamim, Atif

    2014-01-01

    A 3-D (cube-shaped) Lagrangian sensor, inkjet printed on a paper substrate, is presented for the first time. The sensor comprises a transmitter chip with a microcontroller completely embedded in the cube, along with a 1.5 λ0 dipole

  13. Web3D Technologies in Learning, Education and Training: Motivations, Issues, Opportunities

    Science.gov (United States)

    Chittaro, Luca; Ranon, Roberto

    2007-01-01

    Web3D open standards allow the delivery of interactive 3D virtual learning environments through the Internet, reaching potentially large numbers of learners worldwide, at any time. This paper introduces the educational use of virtual reality based on Web3D technologies. After briefly presenting the main Web3D technologies, we summarize the…

  14. A Method for Teaching the Modeling of Manikins Suitable for Third-Person 3-D Virtual Worlds and Games

    Directory of Open Access Journals (Sweden)

    Nick V. Flor

    2012-08-01

    Full Text Available Virtual Worlds have the potential to transform the way people learn, work, and play. With the emerging fields of service science and design science, professors and students at universities are in a unique position to lead the research and development of innovative and value-adding virtual worlds. However, a key barrier in the development of virtual worlds—especially for business, technical, and non-artistic students—is the ability to model human figures in 3-D for use as avatars and automated characters in virtual worlds. There are no articles in either research or teaching journals which describe methods that non-artists can use to create 3-D human figures. This paper presents a repeatable and flexible method I have taught successfully to both artists and business students, which allows them to quickly model human-like figures (manikins that are sufficient for prototype purposes and that allows students and researchers alike to explore the development of new kinds of virtual worlds.

  15. Real-time three-dimensional soft tissue reconstruction for laparoscopic surgery.

    Science.gov (United States)

    Kowalczuk, Jędrzej; Meyer, Avishai; Carlson, Jay; Psota, Eric T; Buettner, Shelby; Pérez, Lance C; Farritor, Shane M; Oleynikov, Dmitry

    2012-12-01

    Accurate real-time 3D models of the operating field have the potential to enable augmented reality for endoscopic surgery. A new system is proposed to create real-time 3D models of the operating field that uses a custom miniaturized stereoscopic video camera attached to a laparoscope and an image-based reconstruction algorithm implemented on a graphics processing unit (GPU). The proposed system was evaluated in a porcine model that approximates the viewing conditions of in vivo surgery. To assess the quality of the models, a synthetic view of the operating field was produced by overlaying a color image on the reconstructed 3D model, and an image rendered from the 3D model was compared with a 2D image captured from the same view. Experiments conducted with an object of known geometry demonstrate that the system produces 3D models accurate to within 1.5 mm. The ability to produce accurate real-time 3D models of the operating field is a significant advancement toward augmented reality in minimally invasive surgery. An imaging system with this capability will potentially transform surgery by helping novice and expert surgeons alike to delineate variance in internal anatomy accurately.
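The geometry underlying any stereoscopic reconstruction system like the one above is triangulation: for a rectified stereo pair, depth is inversely proportional to disparity. This is a minimal sketch of that relation, not the authors' GPU pipeline; the focal length and baseline values are illustrative, chosen to suggest a miniaturized camera.

```python
# Depth from disparity for a rectified stereo pair: Z = f * B / d,
# with f in pixels, baseline B in mm, disparity d in pixels.
def depth_mm(disparity_px, focal_px=800.0, baseline_mm=5.0):
    """Triangulated depth (mm) for one matched pixel pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

z = depth_mm(80.0)  # 80 px disparity -> 50.0 mm depth
```

The inverse relation also explains why depth accuracy degrades with distance: a fixed matching error of one pixel causes a larger depth error when the disparity is small.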

  16. Understanding the Differences of the Cognition Gained from Real and Virtual Tourism based on the Narrative Theory

    Directory of Open Access Journals (Sweden)

    Azam Ravadrad

    2010-06-01

    Full Text Available Communication technologies are now so well developed that obtaining information from different parts of the world no longer depends solely on real, physical travel. People can travel as far as they want, whenever they wish, through cyberspace while sitting at home. Tourism under these conditions is no longer dependent on time, place or financial planning. Two important questions arise here: first, can virtual tourism replace real tourism and eliminate the need for it? Second, can the cognition produced by virtual tourism be the same as the cognition formed by real tourism? To answer these questions, the characteristics of virtual and real tourism must be defined. The main basis of this comparison is, on the one hand, being in a particular place and the experiential sense of being there, in real tourism, and, on the other, the selectivity of places and the receipt of packaged information, in virtual tourism. This paper argues that although virtual tourism can offer the tourist vast and complete information, it lacks the sense of being in place and lived experience; the cognition obtained is therefore manipulated and unreal. Consequently, this type of tourism can be considered only a complement to real tourism. A tourism that begins in virtual space and leads to the real world could combine the positive consequences of both spaces in the process of cognition.

  17. Exploring Non-Traditional Learning Methods in Virtual and Real-World Environments

    Science.gov (United States)

    Lukman, Rebeka; Krajnc, Majda

    2012-01-01

    This paper identifies the commonalities and differences within non-traditional learning methods regarding virtual and real-world environments. The non-traditional learning methods in real-world have been introduced within the following courses: Process Balances, Process Calculation, and Process Synthesis, and within the virtual environment through…

  18. Three-dimensional liver motion tracking using real-time two-dimensional MRI.

    Science.gov (United States)

    Brix, Lau; Ringgaard, Steffen; Sørensen, Thomas Sangild; Poulsen, Per Rugaard

    2014-04-01

    Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions. 
Axial, sagittal, and coronal 2D MRI series
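Step (3) of the method above scores each candidate template against the live 2D image with the normalized cross-correlation coefficient and keeps the best template/offset pair. The sketch below is an assumed brute-force implementation of that idea, not the authors' code (real systems use optimized correlation, e.g. FFT-based or OpenCV's `matchTemplate`):

```python
# Sketch of template-library tracking: score equal-sized patches with the
# normalized cross-correlation (NCC) coefficient and exhaustively search
# over templates and in-plane offsets for the best match.
import numpy as np

def ncc(patch, template):
    """NCC coefficient of two equal-sized arrays, in [-1, 1]."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def best_match(image, templates):
    """Return (score, template index, (row, col)) of the best match.
    All templates are assumed to share one shape."""
    th, tw = templates[0].shape
    best = (-2.0, None, None)
    for k, tmpl in enumerate(templates):
        for r in range(image.shape[0] - th + 1):
            for c in range(image.shape[1] - tw + 1):
                s = ncc(image[r:r + th, c:c + tw], tmpl)
                if s > best[0]:
                    best = (s, k, (r, c))
    return best
```

Because each template in the library corresponds to a known through-plane position of the tracked structure, the winning template index gives the out-of-plane coordinate while `(row, col)` gives the in-plane position.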

  19. Three-dimensional liver motion tracking using real-time two-dimensional MRI

    Energy Technology Data Exchange (ETDEWEB)

    Brix, Lau, E-mail: lau.brix@stab.rm.dk [Department of Procurement and Clinical Engineering, Region Midt, Olof Palmes Allé 15, 8200 Aarhus N, Denmark and MR Research Centre, Aarhus University Hospital, Skejby, Brendstrupgaardsvej 100, 8200 Aarhus N (Denmark); Ringgaard, Steffen [MR Research Centre, Aarhus University Hospital, Skejby, Brendstrupgaardsvej 100, 8200 Aarhus N (Denmark); Sørensen, Thomas Sangild [Department of Computer Science, Aarhus University, Aabogade 34, 8200 Aarhus N, Denmark and Department of Clinical Medicine, Aarhus University, Brendstrupgaardsvej 100, 8200 Aarhus N (Denmark); Poulsen, Per Rugaard [Department of Clinical Medicine, Aarhus University, Brendstrupgaardsvej 100, 8200 Aarhus N, Denmark and Department of Oncology, Aarhus University Hospital, Nørrebrogade 44, 8000 Aarhus C (Denmark)

    2014-04-15

    Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions. 
Results: Axial, sagittal

  20. Three-dimensional liver motion tracking using real-time two-dimensional MRI

    International Nuclear Information System (INIS)

    Brix, Lau; Ringgaard, Steffen; Sørensen, Thomas Sangild; Poulsen, Per Rugaard

    2014-01-01

    Purpose: Combined magnetic resonance imaging (MRI) systems and linear accelerators for radiotherapy (MR-Linacs) are currently under development. MRI is noninvasive and nonionizing and can produce images with high soft tissue contrast. However, new tracking methods are required to obtain fast real-time spatial target localization. This study develops and evaluates a method for tracking three-dimensional (3D) respiratory liver motion in two-dimensional (2D) real-time MRI image series with high temporal and spatial resolution. Methods: The proposed method for 3D tracking in 2D real-time MRI series has three steps: (1) Recording of a 3D MRI scan and selection of a blood vessel (or tumor) structure to be tracked in subsequent 2D MRI series. (2) Generation of a library of 2D image templates oriented parallel to the 2D MRI image series by reslicing and resampling the 3D MRI scan. (3) 3D tracking of the selected structure in each real-time 2D image by finding the template and template position that yield the highest normalized cross correlation coefficient with the image. Since the tracked structure has a known 3D position relative to each template, the selection and 2D localization of a specific template translates into quantification of both the through-plane and in-plane position of the structure. As a proof of principle, 3D tracking of liver blood vessel structures was performed in five healthy volunteers in two 5.4 Hz axial, sagittal, and coronal real-time 2D MRI series of 30 s duration. In each 2D MRI series, the 3D localization was carried out twice, using nonoverlapping template libraries, which resulted in a total of 12 estimated 3D trajectories per volunteer. Validation tests carried out to support the tracking algorithm included quantification of the breathing induced 3D liver motion and liver motion directionality for the volunteers, and comparison of 2D MRI estimated positions of a structure in a watermelon with the actual positions. 
Results: Axial, sagittal

  1. 3D TOOLS FOR MODELLING LOGO-BASED VIRTUAL SCENARIOS: STATE OF THE ART

    Directory of Open Access Journals (Sweden)

    Luz Santamaría Granados

    2009-01-01

    Full Text Available This article reviews the well-established pedagogical foundation of LOGO (Papert, 2003), which offers interesting motivational strategies for children in areas such as the development of spatial abilities through their own exploration of virtual worlds. The original methodology was proposed by Seymour Papert for two-dimensional (2D) scenarios. The article therefore analyses the possibility of integrating the pedagogical advantages of LOGO with a three-dimensional (3D) graphical interface, taking advantage of the technology covered by the Web3D consortium standards. It also discusses the X3D components that allow the use of avatars (humanoids) to facilitate user interaction in dynamic virtual worlds, by providing characters in addition to the LOGO turtle.

  2. [Application of 3D virtual reality technology with multi-modality fusion in resection of glioma located in central sulcus region].

    Science.gov (United States)

    Chen, T N; Yin, X T; Li, X G; Zhao, J; Wang, L; Mu, N; Ma, K; Huo, K; Liu, D; Gao, B Y; Feng, H; Li, F

    2018-05-08

    Objective: To explore the clinical and teaching value of virtual reality technology in the preoperative planning and intraoperative guidance of glioma located in the central sulcus region. Methods: Ten patients with glioma in the central sulcus region were scheduled for surgical treatment. Neuroimaging data, including CT, CTA, DSA, MRI and fMRI, were input into the 3dgo sczhry workstation for image fusion and 3D reconstruction. Spatial relationships between the lesions and the surrounding structures were obtained from the virtual reality images. These images were applied to operative approach design, operation process simulation, intraoperative decision support, and the training of specialist physicians. Results: The intraoperative findings in all 10 patients were highly consistent with the preoperative virtual reality simulations. Preoperative 3D-reconstructed virtual reality images improved the feasibility of operation planning and operative accuracy. The technology not only showed advantages for neurological function protection and lesion resection during surgery, but also improved the training efficiency and effectiveness of dedicated physicians by turning abstract comprehension into virtual reality. Conclusion: Virtual reality technology based on image fusion and 3D reconstruction is helpful in glioma resection for formulating the operation plan, improving operative safety, increasing the total resection rate, and facilitating the teaching and training of specialist physicians.

  3. Researching on Real 3d Modeling Constructed with the Oblique Photogrammetry and Terrestrial Photogrammetry

    Science.gov (United States)

    Han, Youmei; Jiao, Minglian; Shijuan

    2018-04-01

    With the rapid development of oblique photogrammetry, many cities have built real 3D models with this technology. Although it offers a short production period, high efficiency and good aerial viewing angles, the near-ground views of these real 3D models are often poor. With the increasing development of smart cities, the requirements for realism, practicality and accuracy of real 3D models are becoming higher, and how to produce and improve real 3D models quickly has become one of the hot research directions in geospatial information. To meet this requirement, this paper combines the characteristics of current oblique photogrammetry modeling with terrestrial photogrammetry and proposes a new technological process consisting of close-range sensor design, data acquisition and data processing. The proposed method was tested using acquired oblique photography images. The results confirm the effectiveness of the proposed approach.

  4. Experiential Virtual Scenarios With Real-Time Monitoring (Interreality) for the Management of Psychological Stress: A Block Randomized Controlled Trial

    Science.gov (United States)

    Pallavicini, Federica; Morganti, Luca; Serino, Silvia; Scaratti, Chiara; Briguglio, Marilena; Crifaci, Giulia; Vetrano, Noemi; Giulintano, Annunziata; Bernava, Giuseppe; Tartarisco, Gennaro; Pioggia, Giovanni; Raspelli, Simona; Cipresso, Pietro; Vigna, Cinzia; Grassi, Alessandra; Baruffi, Margherita; Wiederhold, Brenda; Riva, Giuseppe

    2014-01-01

    Background The recent convergence between technology and medicine is offering innovative methods and tools for behavioral health care. Among these, an emerging approach is the use of virtual reality (VR) within exposure-based protocols for anxiety disorders, and in particular posttraumatic stress disorder. However, no systematically tested VR protocols are available for the management of psychological stress. Objective Our goal was to evaluate the efficacy of a new technological paradigm, Interreality, for the management and prevention of psychological stress. The main feature of Interreality is a twofold link between the virtual and the real world achieved through experiential virtual scenarios (fully controlled by the therapist, used to learn coping skills and improve self-efficacy) with real-time monitoring and support (identifying critical situations and assessing clinical change) using advanced technologies (virtual worlds, wearable biosensors, and smartphones). Methods The study was designed as a block randomized controlled trial involving 121 participants recruited from two different worker populations—teachers and nurses—that are highly exposed to psychological stress. Participants were a sample of teachers recruited in Milan (Block 1: n=61) and a sample of nurses recruited in Messina, Italy (Block 2: n=60). Participants within each block were randomly assigned to the (1) Experimental Group (EG): n=40; B1=20, B2=20, which received a 5-week treatment based on the Interreality paradigm; (2) Control Group (CG): n=42; B1=22, B2=20, which received a 5-week traditional stress management training based on cognitive behavioral therapy (CBT); and (3) the Wait-List group (WL): n=39, B1=19, B2=20, which was reassessed and compared with the two other groups 5 weeks after the initial evaluation. Results Although both treatments were able to significantly reduce perceived stress better than WL, only EG participants reported a significant reduction (EG=12% vs CG=0

  5. Experiential virtual scenarios with real-time monitoring (interreality) for the management of psychological stress: a block randomized controlled trial.

    Science.gov (United States)

    Gaggioli, Andrea; Pallavicini, Federica; Morganti, Luca; Serino, Silvia; Scaratti, Chiara; Briguglio, Marilena; Crifaci, Giulia; Vetrano, Noemi; Giulintano, Annunziata; Bernava, Giuseppe; Tartarisco, Gennaro; Pioggia, Giovanni; Raspelli, Simona; Cipresso, Pietro; Vigna, Cinzia; Grassi, Alessandra; Baruffi, Margherita; Wiederhold, Brenda; Riva, Giuseppe

    2014-07-08

    The recent convergence between technology and medicine is offering innovative methods and tools for behavioral health care. Among these, an emerging approach is the use of virtual reality (VR) within exposure-based protocols for anxiety disorders, and in particular posttraumatic stress disorder. However, no systematically tested VR protocols are available for the management of psychological stress. Our goal was to evaluate the efficacy of a new technological paradigm, Interreality, for the management and prevention of psychological stress. The main feature of Interreality is a twofold link between the virtual and the real world achieved through experiential virtual scenarios (fully controlled by the therapist, used to learn coping skills and improve self-efficacy) with real-time monitoring and support (identifying critical situations and assessing clinical change) using advanced technologies (virtual worlds, wearable biosensors, and smartphones). The study was designed as a block randomized controlled trial involving 121 participants recruited from two different worker populations-teachers and nurses-that are highly exposed to psychological stress. Participants were a sample of teachers recruited in Milan (Block 1: n=61) and a sample of nurses recruited in Messina, Italy (Block 2: n=60). Participants within each block were randomly assigned to the (1) Experimental Group (EG): n=40; B1=20, B2=20, which received a 5-week treatment based on the Interreality paradigm; (2) Control Group (CG): n=42; B1=22, B2=20, which received a 5-week traditional stress management training based on cognitive behavioral therapy (CBT); and (3) the Wait-List group (WL): n=39, B1=19, B2=20, which was reassessed and compared with the two other groups 5 weeks after the initial evaluation. Although both treatments were able to significantly reduce perceived stress better than WL, only EG participants reported a significant reduction (EG=12% vs. CG=0.5%) in chronic "trait" anxiety. 
A similar

  6. Accuracy of Real-time Couch Tracking During 3-dimensional Conformal Radiation Therapy, Intensity Modulated Radiation Therapy, and Volumetric Modulated Arc Therapy for Prostate Cancer

    International Nuclear Information System (INIS)

    Wilbert, Juergen; Baier, Kurt; Hermann, Christian; Flentje, Michael; Guckenberger, Matthias

    2013-01-01

    Purpose: To evaluate the accuracy of real-time couch tracking for prostate cancer. Methods and Materials: Intrafractional motion trajectories of 15 prostate cancer patients were the basis for this phantom study; prostate motion had been monitored with the Calypso System. An industrial robot moved a phantom along these trajectories, motion was detected via an infrared camera system, and the robotic HexaPOD couch was used for real-time counter-steering. Residual phantom motion during real-time tracking was measured with the infrared camera system. Film dosimetry was performed during delivery of 3-dimensional conformal radiation therapy (3D-CRT), step-and-shoot intensity modulated radiation therapy (IMRT), and volumetric modulated arc therapy (VMAT). Results: Motion of the prostate was largest in the anterior–posterior direction, with systematic (∑) and random (σ) errors of 2.3 mm and 2.9 mm, respectively; the prostate was outside a threshold of 5 mm (3D vector) for 25.0%±19.8% of treatment time. Real-time tracking reduced prostate motion to ∑=0.01 mm and σ = 0.55 mm in the anterior–posterior direction; the prostate remained within a 1-mm and 5-mm threshold for 93.9%±4.6% and 99.7%±0.4% of the time, respectively. Without real-time tracking, pass rates based on a γ index of 2%/2 mm in film dosimetry ranged between 66% and 72% for 3D-CRT, IMRT, and VMAT, on average. Real-time tracking increased pass rates to minimum 98% on average for 3D-CRT, IMRT, and VMAT. Conclusions: Real-time couch tracking resulted in submillimeter accuracy for prostate cancer, which transferred into high dosimetric accuracy independently of whether 3D-CRT, IMRT, or VMAT was used.
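The systematic (Σ) and random (σ) errors quoted above are population statistics over per-patient motion trajectories. A short sketch of how they are commonly computed (the van Herk convention, assumed here; the trajectories below are invented illustrative data, not the study's):

```python
# Sketch: Sigma is the SD of per-patient mean displacements (inter-patient
# spread); sigma is the root-mean-square of the per-patient SDs
# (intra-patient spread).
import numpy as np

def motion_errors(trajectories):
    """trajectories: list of 1D arrays of displacements (mm), one per patient."""
    means = np.array([t.mean() for t in trajectories])
    sds = np.array([t.std(ddof=1) for t in trajectories])
    sigma_sys = means.std(ddof=1)           # Sigma: systematic error
    sigma_rand = np.sqrt((sds ** 2).mean()) # sigma: random error
    return sigma_sys, sigma_rand
```

With real-time tracking engaged, the residual trajectories shrink toward zero, which is what drives Σ down to 0.01 mm in the results above.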

  7. Virtual patient 3D dose reconstruction using in air EPID measurements and a back-projection algorithm for IMRT and VMAT treatments.

    Science.gov (United States)

    Olaciregui-Ruiz, Igor; Rozendaal, Roel; van Oers, René F M; Mijnheer, Ben; Mans, Anton

    2017-05-01

At our institute, a transit back-projection algorithm is used clinically to reconstruct in vivo patient and in phantom 3D dose distributions using EPID measurements behind a patient or a polystyrene slab phantom, respectively. In this study, an extension to this algorithm is presented whereby in air EPID measurements are used in combination with CT data to reconstruct 'virtual' 3D dose distributions. By combining virtual and in vivo patient verification data for the same treatment, patient-related errors can be separated from machine, planning and model errors. The virtual back-projection algorithm is described and verified against the transit algorithm with measurements made behind a slab phantom, against dose measurements made with an ionization chamber and with the OCTAVIUS 4D system, as well as against TPS patient data. Virtual and in vivo patient dose verification results are also compared. Virtual dose reconstructions agree within 1% with ionization chamber measurements. The average γ-pass rate values (3% global dose/3 mm) in the 3D dose comparison with the OCTAVIUS 4D system and the TPS patient data are 98.5 ± 1.9% (1 SD) and 97.1 ± 2.9% (1 SD), respectively. For virtual patient dose reconstructions, the differences with the TPS in median dose to the PTV remain within 4%. Virtual patient dose reconstruction makes pre-treatment verification based on deviations of DVH parameters feasible and eliminates the need for phantom positioning and re-planning. Virtual patient dose reconstructions have additional value in the inspection of in vivo deviations, particularly in situations where CBCT data is not available (or not conclusive). Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
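The γ-pass rates above combine a dose-difference criterion with a distance-to-agreement criterion. A minimal 1-D global gamma evaluation illustrating the idea (a generic sketch of the standard γ index, not the institute's implementation; profile values are hypothetical):

```python
import math

def gamma_pass_rate(ref, ev, dx, dose_frac=0.03, dist_mm=3.0):
    """1-D global gamma analysis: a reference point passes if some evaluated
    point satisfies sqrt((dose diff / dose tol)^2 + (distance / DTA)^2) <= 1.
    dose_frac: dose criterion as a fraction of the global maximum dose;
    dist_mm: distance-to-agreement criterion; dx: sample spacing in mm."""
    d_norm = dose_frac * max(ref)  # global dose normalization
    passed = 0
    for i, dr in enumerate(ref):
        g = min(math.hypot((de - dr) / d_norm, (j - i) * dx / dist_mm)
                for j, de in enumerate(ev))
        passed += g <= 1.0
    return 100.0 * passed / len(ref)
```

A measured profile shifted by less than the distance-to-agreement criterion still passes, which is exactly why γ analysis is preferred over a pointwise dose comparison near steep gradients.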

  8. Assessing the Value of Real-life Brands in Virtual Worlds

    DEFF Research Database (Denmark)

    Mattsson, Jan; Barnes, Stuart; Hartley, Nicole

    2015-01-01

Virtual Worlds are a significant new market environment for brand-building through experiential customer service interactions. Using value theory, this paper aims to assess the experiential brand value of real-life brands that have moved to the Virtual World of Second Life. A key premise is that current brand offerings in Virtual Worlds do not offer consumers adequate experiential value. The results demonstrate both the validity of an axiological approach to examining brand value, and highlight significant problems in consumer perceptions of the experiential value of brands within the Virtual World. A key finding is the difficulty in creating emotional brand value in Second Life, which has serious implications for the sustainability of current real-life brands in Virtual Worlds. The paper rounds off with conclusions and implications for future research and practice in this very new area.

  9. R3D3 in the Wild: Using A Robot for Turn Management in Multi-Party Interaction with a Virtual Human

    NARCIS (Netherlands)

    Theune, Mariet; Wiltenburg, Daan; Bode, Max; Linssen, Jeroen

R3D3 is a combination of a virtual human with a non-speaking robot capable of head gestures and emotive gaze behaviour. We use the robot to implement various turn management functions for use in multi-party interaction with R3D3, and present the results of a field study investigating their effects.

  10. Face customization in a real-time digiTV stream

    Science.gov (United States)

    Lugmayr, Artur R.; Creutzburg, Reiner; Kalli, Seppo; Tsoumanis, Andreas

    2002-03-01

The challenge in digital, interactive TV (digiTV) is to move the consumer from the refiguration state to the configuration state, where they can influence the story flow, the choice of characters, and other narrative elements. Besides restructuring narrative and interactivity methodologies, one major task is content manipulation to give the audience the ability to predefine the actors it wants in its virtual story universe. Current solutions in broadcast video provide content as a monolithic structure, with graphics, narration, special effects, etc. compressed into one high-bit-rate MPEG-2 stream. More personalized and interactive TV requires a contemporary approach to segmenting video data in real time to customize content. Our research work emphasizes techniques for exchanging faces and bodies for virtual anchors in real-time-constrained broadcast video streams. The aim of this paper is to present solutions for realizing real-time face and avatar customization. The major task for the broadcaster is metadata extraction, by applying face detection/tracking/recognition algorithms, and transmission of this information to the client side. At the client side, our system provides the facility to pre-select virtual avatars stored in a local database and to synchronize their movements and expressions with the current digiTV contents.

  11. PAST AND FUTURE APPLICATIONS OF 3-D (VIRTUAL REALITY) TECHNOLOGY

    OpenAIRE

    Nigel Foreman; Liliya Korallo

    2014-01-01

    Virtual Reality (virtual environment technology, VET) has been widely available for twenty years. In that time, the benefits of using virtual environments (VEs) have become clear in many areas of application, including assessment and training, education, rehabilitation and psychological research in spatial cognition. The flexibility, reproducibility and adaptability of VEs are especially important, particularly in the training and testing of navigational and way-finding skills. Transfer of tr...

  12. Shoulder kinematics and spatial pattern of trapezius electromyographic activity in real and virtual environments.

    Directory of Open Access Journals (Sweden)

    Afshin Samani

Full Text Available The design of an industrial workstation tends to include ergonomic assessment steps based on a digital mock-up and a virtual reality setup. Lack of interaction and system fidelity is often reported as a main issue in such virtual reality applications. This limitation is a crucial issue, as thorough ergonomic analysis is required for an investigation of the biomechanics. In the current study, we investigated the biomechanical responses of the shoulder joint in a simulated assembly task for comparison with the biomechanical responses in virtual environments. Sixteen male healthy novice subjects performed the task on three different platforms: real (RE), virtual (VE), and virtual environment with force feedback (VEF), with low and high precision demands. The subjects repeated the task 12 times (i.e., 12 cycles). High density electromyography from the upper trapezius and rotation angles of the shoulder joint were recorded and split into the cycles. The angular trajectories and velocity profiles of the shoulder joint angles over a cycle were computed in 3D. The inter-subject similarity in terms of normalized mutual information on kinematics and electromyography was investigated. Compared with RE, the task in VE and VEF was characterized by lower kinematic maxima. The inter-subject similarity in RE compared with intra-subject similarity across the platforms was lower in terms of movement trajectories and greater in terms of trapezius muscle activation. The precision demand resulted in lower inter- and intra-subject similarity across platforms. The proposed approach identifies biomechanical differences in the shoulder joint in both VE and VEF compared with the RE platform, but these differences are less marked in VE, mostly due to technical limitations of co-localizing the force feedback system in the VEF platform.
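The similarity measure used above, normalized mutual information, can be computed from discretised signals via a joint histogram. A generic sketch (the normalization 2·I(X;Y)/(H(X)+H(Y)) is one common convention; the study's exact discretisation is not specified here):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (natural log) of a sequence of discrete labels."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def normalized_mutual_information(x, y):
    """NMI = 2 * I(X;Y) / (H(X) + H(Y)); 1.0 for identical signals,
    0.0 for statistically independent ones."""
    hx, hy = entropy(x), entropy(y)
    hxy = entropy(list(zip(x, y)))     # joint entropy from paired samples
    i = hx + hy - hxy                  # mutual information
    return 2 * i / (hx + hy) if hx + hy > 0 else 1.0
```

Applied to binned kinematic or EMG time series from two subjects, a higher NMI indicates more similar movement or activation patterns.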

  13. Creating 3D models of historical buildings using geospatial data

    Science.gov (United States)

    Alionescu, Adrian; Bǎlǎ, Alina Corina; Brebu, Floarea Maria; Moscovici, Anca-Maria

    2017-07-01

Recently, much interest has been shown in understanding real-world objects by acquiring 3D images of them using laser scanning technology and panoramic images. A realistic impression of the geometric 3D data can be generated by draping it with real colour textures simultaneously captured by a colour camera. In this context, a new concept of geospatial data acquisition, based on panoramic images, has rapidly revolutionized the method of determining the spatial position of objects. This article describes an approach that combines terrestrial laser scanning with panoramic images captured using Trimble V10 Imaging Rover technology to enlarge the detail and realism of the geospatial data set, in order to obtain 3D urban plans and virtual reality applications.

  14. RESEARCHING ON REAL 3D MODELING CONSTRUCTED WITH THE OBLIQUE PHOTOGRAMMETRY AND TERRESTRIAL PHOTOGRAMMETRY

    Directory of Open Access Journals (Sweden)

    Y. Han

    2018-04-01

Full Text Available With the rapid development of oblique photogrammetry, many cities have built real 3D models with this technology. Although it has the advantages of a short production period, high efficiency and good aerial viewing angles, the near-ground viewing angles of these real 3D models are not very good. With the continuing development of smart cities, the requirements for the realism, practicality and accuracy of real 3D models are becoming higher. How to produce and improve real 3D models quickly has become one of the hot research directions in geospatial information. To meet this requirement, in this paper we combine the characteristics of current oblique photogrammetry modeling with terrestrial photogrammetry and propose a new technological process consisting of close-range sensor design, data acquisition and processing. The proposed method is tested using acquired oblique photography images. The results confirm the effectiveness of the proposed approach.

  15. A Dynamic Bioinspired Neural Network Based Real-Time Path Planning Method for Autonomous Underwater Vehicles.

    Science.gov (United States)

    Ni, Jianjun; Wu, Liuying; Shi, Pengfei; Yang, Simon X

    2017-01-01

Real-time path planning for autonomous underwater vehicles (AUVs) is a very difficult and challenging task. The bioinspired neural network (BINN) has been used to deal with this problem because of its distinct advantages: no learning process is needed and it is easy to implement. However, there are some shortcomings when a BINN is applied to AUV path planning in a three-dimensional (3D) unknown environment, including the heavy computational burden when the environment is very large and repeated paths when obstacles are larger than the detection range of the sensors. To deal with these problems, an improved dynamic BINN is proposed in this paper. In the proposed method, the AUV is regarded as the core of the BINN and the size of the BINN is based on the detection range of the sensors. The BINN then moves with the AUV, which reduces the computational cost. A virtual target is introduced into the path planning method to ensure that the AUV can move to the real target effectively and avoid large obstacles automatically. Furthermore, a target attractor concept is introduced to improve the computational efficiency of the neural activities. Finally, experiments are conducted in various 3D underwater environments. The experimental results show that the proposed BINN-based method can deal with the real-time path planning problem for AUVs efficiently.
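The BINN idea can be sketched with the classic shunting-equation formulation on a small 2D grid: the target injects excitatory input, obstacles inject inhibitory input, activity diffuses through lateral connections, and the vehicle follows the activity gradient. This is a generic illustration of the underlying technique (parameter values and the grid abstraction are assumptions, not the paper's 3D dynamic implementation):

```python
import math

def binn_path(width, height, start, target, obstacles,
              A=10.0, B=1.0, D=1.0, E=100.0, steps=200, dt=0.01):
    """Shunting model dx/dt = -A*x + (B - x)*exc - (D + x)*inh on a grid,
    then steepest-ascent path following on the resulting activity landscape."""
    x = [[0.0] * width for _ in range(height)]

    def neighbours(r, c):
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr or dc) and 0 <= r + dr < height and 0 <= c + dc < width:
                    yield r + dr, c + dc, 1.0 / math.hypot(dr, dc)

    for _ in range(steps):                 # Euler integration to steady state
        nxt = [row[:] for row in x]
        for r in range(height):
            for c in range(width):
                I = E if (r, c) == target else (-E if (r, c) in obstacles else 0.0)
                exc = max(I, 0.0) + sum(w * max(x[rr][cc], 0.0)
                                        for rr, cc, w in neighbours(r, c))
                inh = max(-I, 0.0)
                nxt[r][c] = x[r][c] + dt * (-A * x[r][c]
                                            + (B - x[r][c]) * exc
                                            - (D + x[r][c]) * inh)
        x = nxt

    path, cur = [start], start             # climb the activity gradient
    while cur != target and len(path) < width * height:
        cur = max(((rr, cc) for rr, cc, _ in neighbours(*cur)),
                  key=lambda p: x[p[0]][p[1]])
        path.append(cur)
    return path
```

Because obstacle cells settle at negative activity and free cells at positive activity decaying with distance from the target, gradient ascent reaches the target while steering around obstacles, with no learning phase.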

  16. Validation of a method for real time foot position and orientation tracking with Microsoft Kinect technology for use in virtual reality and treadmill based gait training programs.

    Science.gov (United States)

    Paolini, Gabriele; Peruzzi, Agnese; Mirelman, Anat; Cereatti, Andrea; Gaukrodger, Stephen; Hausdorff, Jeffrey M; Della Croce, Ugo

    2014-09-01

    The use of virtual reality for the provision of motor-cognitive gait training has been shown to be effective for a variety of patient populations. The interaction between the user and the virtual environment is achieved by tracking the motion of the body parts and replicating it in the virtual environment in real time. In this paper, we present the validation of a novel method for tracking foot position and orientation in real time, based on the Microsoft Kinect technology, to be used for gait training combined with virtual reality. The validation of the motion tracking method was performed by comparing the tracking performance of the new system against a stereo-photogrammetric system used as gold standard. Foot position errors were in the order of a few millimeters (average RMSD from 4.9 to 12.1 mm in the medio-lateral and vertical directions, from 19.4 to 26.5 mm in the anterior-posterior direction); the foot orientation errors were also small (average %RMSD from 5.6% to 8.8% in the medio-lateral and vertical directions, from 15.5% to 18.6% in the anterior-posterior direction). The results suggest that the proposed method can be effectively used to track feet motion in virtual reality and treadmill-based gait training programs.
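The RMSD figures above come from a straightforward comparison of equally sampled trajectories from the two systems. A minimal sketch (the sample values are hypothetical, not the study's data):

```python
import math

def rmsd(a, b):
    """Root-mean-square deviation between two equally sampled trajectories."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

# Hypothetical vertical foot-position samples (mm): Kinect-based tracker
# versus the stereo-photogrammetric gold standard.
kinect_z = [102.0, 110.5, 121.0, 118.5]
mocap_z  = [100.0, 112.0, 119.5, 119.0]
error_mm = rmsd(kinect_z, mocap_z)
```

In the study this comparison was done per direction (medio-lateral, vertical, anterior-posterior) for both position and orientation, after temporal and spatial alignment of the two systems.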

  17. Research on virtual Guzheng based on Kinect

    Science.gov (United States)

    Li, Shuyao; Xu, Kuangyi; Zhang, Heng

    2018-05-01

There has been a great deal of research on virtual instruments, but little on classical Chinese instruments, and the techniques used are very limited. This paper uses Unity 3D and a Kinect camera, combined with virtual reality technology and gesture recognition methods, to design a virtual playing system for the Guzheng, a traditional Chinese musical instrument, with a demonstration function. In this paper, the real scene obtained by the Kinect camera is fused with a virtual Guzheng in Unity 3D. The depth data obtained by the Kinect and the Suzuki85 contour-tracing algorithm are used to recognize the relative position of the user's right hand and the virtual Guzheng, and the user's hand gestures are recognized by the Kinect.

  18. [3D Virtual Reality Laparoscopic Simulation in Surgical Education - Results of a Pilot Study].

    Science.gov (United States)

    Kneist, W; Huber, T; Paschold, M; Lang, H

    2016-06-01

The use of three-dimensional imaging in laparoscopy is a growing issue and has led to 3D systems in laparoscopic simulation. Studies on box trainers have shown differing results concerning the benefit of 3D imaging. There are currently no studies analysing 3D imaging in virtual reality laparoscopy (VRL). Five surgical fellows, 10 surgical residents and 29 undergraduate medical students performed abstract and procedural tasks on a VRL simulator using conventional 2D and 3D imaging in randomised order. No significant differences between the two imaging systems were shown for students or medical professionals. Participants who preferred three-dimensional imaging showed significantly better results in 2D as well as in 3D imaging. Initial studies of three-dimensional imaging on box trainers have shown mixed results, with some finding an advantage of 3D imaging for laparoscopic novices. In the present study on 3D imaging with a VRL simulator, there was no significant advantage of 3D imaging compared to conventional 2D imaging. Georg Thieme Verlag KG Stuttgart · New York.

  19. 3D virtual planning in orthognathic surgery and CAD/CAM surgical splints generation in one patient with craniofacial microsomia: a case report

    Science.gov (United States)

    Vale, Francisco; Scherzberg, Jessica; Cavaleiro, João; Sanz, David; Caramelo, Francisco; Maló, Luísa; Marcelino, João Pedro

    2016-01-01

Objective: In this case report, the feasibility and precision of tridimensional (3D) virtual planning in one patient with craniofacial microsomia is tested using Nemoceph 3D-OS software (Software Nemotec SL, Madrid, Spain) to predict postoperative outcomes on hard tissue and produce CAD/CAM (Computer Aided Design/Computer Aided Manufacturing) surgical splints. Methods: The clinical protocol consists of 3D data acquisition of the craniofacial complex by cone-beam computed tomography (CBCT) and surface scanning of the plaster dental casts. The "virtual patient" created underwent virtual surgery and a simulation of postoperative results on hard tissues. Surgical splints were manufactured using CAD/CAM technology in order to transfer the virtual surgical plan to the operating room. Intraoperatively, both CAD/CAM and conventional surgical splints are comparable. A second set of 3D images was obtained after surgery to acquire linear measurements and compare them with measurements obtained when predicting postoperative results virtually. Results: A high similarity was found between the two types of surgical splints, with an equal fit on the dental arches. The linear measurements presented some discrepancies between the actual surgical outcomes and the predicted results from the 3D virtual simulation, but caution must be taken in the analysis of these results due to several variables. Conclusions: The reported case confirms the clinical feasibility of the described computer-assisted orthognathic surgical protocol. Further progress in the development of technologies for 3D image acquisition and improvements on software programs to simulate postoperative changes on soft tissue are required. PMID:27007767

  20. Real-time RGB-D image stitching using multiple Kinects for improved field of view

    Directory of Open Access Journals (Sweden)

    Hengyu Li

    2017-03-01

Full Text Available This article concerns the problems of a defective depth map and the limited field of view of Kinect-style RGB-D sensors. An anisotropic diffusion based hole-filling method is proposed to recover invalid depth data in the depth map. The field of view of the Kinect-style RGB-D sensor is extended by stitching depth and color images from several RGB-D sensors. By aligning the depth map with the color image, the registration data calculated by registering color images can be used to stitch depth and color images into a depth and color panoramic image concurrently in real time. Experiments show that the proposed stitching method can generate an RGB-D panorama with no invalid depth data and little distortion in real time, and can be extended to incorporate more RGB-D sensors to construct even a 360° panoramic RGB-D image.
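The hole-filling step can be illustrated with a simplified colour-guided diffusion: invalid depth pixels (here encoded as 0) are iteratively filled from valid neighbours, with weights that fall off across colour edges so that depth does not bleed between objects. This is a toy sketch of the general idea, not the article's exact anisotropic diffusion scheme:

```python
import math

def fill_depth_holes(depth, color, iters=20, sigma_c=10.0):
    """Fill invalid pixels (depth == 0) by diffusing depth from valid
    4-neighbours, weighted by colour similarity of the registered colour
    image so that diffusion is damped across colour edges."""
    h, w = len(depth), len(depth[0])
    for _ in range(iters):
        updated = [row[:] for row in depth]
        for r in range(h):
            for c in range(w):
                if depth[r][c] != 0:
                    continue  # valid pixel: keep the measured depth
                num = den = 0.0
                for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                    if 0 <= rr < h and 0 <= cc < w and depth[rr][cc] != 0:
                        wgt = math.exp(-((color[r][c] - color[rr][cc]) ** 2)
                                       / (2 * sigma_c ** 2))
                        num += wgt * depth[rr][cc]
                        den += wgt
                if den > 0:
                    updated[r][c] = num / den  # weighted mean of valid depths
        depth = updated
    return depth
```

Once each sensor's depth map is hole-free and aligned with its colour image, the homography estimated between overlapping colour images can be applied to the depth channel as well, which is what allows depth and colour panoramas to be stitched concurrently.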